Sample records for calculated sample sizes

  1. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  2. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
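
    For readers unfamiliar with the adjustment described above, the sketch below shows the basic calculation under simplifying assumptions: a continuous outcome, a normal-approximation sample size formula, and every infant belonging to a twin pair (cluster size 2). The published calculator also handles mixtures of singletons and twins, which this sketch ignores; all numbers are illustrative.

    ```python
    # Minimal sketch: sample size for a continuous outcome, inflated for twin clustering.
    # Assumes every infant is one of a twin pair (cluster size m = 2); the published
    # calculator also handles mixtures of singletons and twins, which this ignores.
    from math import ceil
    from scipy.stats import norm

    def n_per_arm_independent(delta, sd, alpha=0.05, power=0.8):
        """Per-arm sample size for a two-sample comparison of means (normal approximation)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) * sd / delta) ** 2

    def n_per_arm_twins(delta, sd, icc, m=2, alpha=0.05, power=0.8):
        """Inflate the independent-data sample size by the design effect 1 + (m - 1) * ICC."""
        design_effect = 1 + (m - 1) * icc
        return ceil(n_per_arm_independent(delta, sd, alpha, power) * design_effect)

    # Example: detect a 0.3-SD difference when the ICC between co-twins is 0.5.
    print(n_per_arm_twins(delta=3.0, sd=10.0, icc=0.5))
    ```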

  3. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. For most RCTs, the difference between the values observed in the study and the expected values used for the sample size calculation was > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.

  4. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (the regression equation is provided in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.

  5. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample size was 0.0% (inter-quartile range [IQR] -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
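
    As a hypothetical illustration of the replication check described above (comparing a reported sample size against the one implied by the reported significance level, power, and minimum clinically important effect size), the sketch below recomputes a two-arm sample size from assumed inputs and reports the percentage difference. The reported total of 128 and the effect size of 0.5 are invented, not taken from the review.

    ```python
    # Hypothetical example: replicate a reported sample size from the stated assumptions
    # (two-sided alpha, power, standardized effect size) and compare with the reported n.
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(effect_size, alpha=0.05, power=0.8):
        """Normal-approximation per-arm n for a two-sample comparison of means."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) / effect_size) ** 2

    reported_total = 128                      # hypothetical number reported in a paper
    calculated_total = 2 * ceil(n_per_arm(effect_size=0.5))
    pct_diff = 100 * (reported_total - calculated_total) / calculated_total
    print(calculated_total, f"{pct_diff:.1f}%")
    ```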

  6. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination usually is taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures combined with use of a smartphone application were given to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  7. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
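
    The sketch below illustrates the kind of adjustment the authors recommend, assuming a continuous outcome and a normal-approximation sample size formula: compute a one-sided upper confidence limit (UCL) for the population SD from a sample via the chi-square distribution, then use that UCL instead of the raw sample SD. The pilot data and targets are invented.

    ```python
    # Sketch: one-sided upper confidence limit (UCL) of the SD from a sample, used in
    # place of the raw sample SD when calculating sample size (per the paper's caution).
    from math import ceil, sqrt
    import numpy as np
    from scipy.stats import chi2, norm

    def sd_ucl(sample_sd, n, confidence=0.80):
        """One-sided 100*confidence% UCL for sigma: s * sqrt((n-1) / chi2_{1-confidence, n-1})."""
        df = n - 1
        return sample_sd * sqrt(df / chi2.ppf(1 - confidence, df))

    def n_per_arm(delta, sd, alpha=0.05, power=0.8):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    rng = np.random.default_rng(1)
    pilot = rng.normal(loc=100, scale=44, size=10)   # hypothetical pilot sample
    s = pilot.std(ddof=1)
    # Sample size from the raw sample SD vs from its 80% upper confidence limit:
    print(n_per_arm(delta=22, sd=s), n_per_arm(delta=22, sd=sd_ucl(s, len(pilot))))
    ```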

  8. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  9. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making a Type I error to be sufficiently small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for the PoC, and the sample size used for the PoC.

  10. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is the subject of controversial public debate. From a biometrical point of view, an optimal sample size should therefore be aimed at for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  11. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Appendix C to 45 CFR Part 1356 (Public Welfare regulations relating to public welfare, requirements applicable to Title IV-E) sets out the method for calculating the sample size for NYTD follow-up populations.

  13. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637

  14. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  15. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.

  16. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  17. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
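
    As a minimal companion to the review, the sketch below implements the simplest approach described in the background (inflating an individually randomized sample size by a design effect), together with a commonly used extension for variable cluster sizes based on the coefficient of variation of cluster size. It is not a substitute for the more refined methods the review summarises; the ICC, cluster size, and effect size are illustrative.

    ```python
    # Sketch of the simplest design-effect approach to CRT sample size, plus a common
    # adjustment for variable cluster sizes: design effect 1 + ((cv^2 + 1) * m - 1) * icc.
    from math import ceil
    from scipy.stats import norm

    def n_individual(effect_size, alpha=0.05, power=0.8):
        """Per-arm n under individual randomization, standardized difference."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z / effect_size) ** 2

    def crt_sample_size(effect_size, m, icc, cv=0.0, alpha=0.05, power=0.8):
        deff = 1 + ((cv ** 2 + 1) * m - 1) * icc   # cv = 0 gives the usual 1 + (m-1)*icc
        n_per_arm = ceil(n_individual(effect_size, alpha, power) * deff)
        clusters_per_arm = ceil(n_per_arm / m)
        return n_per_arm, clusters_per_arm

    print(crt_sample_size(effect_size=0.25, m=20, icc=0.05))          # equal cluster sizes
    print(crt_sample_size(effect_size=0.25, m=20, icc=0.05, cv=0.6))  # variable cluster sizes
    ```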

  18. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are larger than in the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.

  19. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
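
    To make the point about assumptions concrete, the sketch below uses a standard two-proportion sample size formula (not the authors' compartmental model) to show how the assumed control-group PID incidence and relative risk drive the required group size; the incidences and relative risks are illustrative.

    ```python
    # Illustration (not the paper's compartmental model): how assumed control-group PID
    # incidence and relative risk feed a standard two-proportion sample size calculation.
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_per_group(p_control, rr, alpha=0.05, power=0.8):
        p1, p2 = p_control, p_control * rr
        p_bar = (p1 + p2) / 2
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(num / (p1 - p2) ** 2)

    # Small changes in the assumed incidence or RR give very different sample sizes:
    for p_control, rr in [(0.03, 0.5), (0.02, 0.5), (0.03, 0.65)]:
        print(p_control, rr, n_per_group(p_control, rr))
    ```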

  20. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: any, not limited to healthcare settings. Participants: any taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897

  2. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    PubMed

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and people have paid more attention to the post-marketing research of Chinese medicine, a number of traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In the design of post-marketing evaluations, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  3. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes that are sufficient for screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of sensitivity and specificity testing using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
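
    The tables in the paper are derived from a power-based formulation in PASS. As a complementary, precision-based sketch (a Buderer-type calculation rather than the paper's approach), the code below estimates how many subjects must be screened so that sensitivity or specificity is estimated within a desired confidence-interval half-width, given the disease prevalence; all inputs are illustrative.

    ```python
    # Precision-based sketch (Buderer-type formula), offered as a complement to the
    # power-based tables described in the paper: n needed so that sensitivity (or
    # specificity) is estimated within +/- d with 95% confidence, given prevalence.
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, prevalence, d=0.05, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)
        n_diseased = z ** 2 * sens * (1 - sens) / d ** 2
        return ceil(n_diseased / prevalence)        # total subjects to screen

    def n_for_specificity(spec, prevalence, d=0.05, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)
        n_healthy = z ** 2 * spec * (1 - spec) / d ** 2
        return ceil(n_healthy / (1 - prevalence))

    print(n_for_sensitivity(sens=0.90, prevalence=0.20))
    print(n_for_specificity(spec=0.85, prevalence=0.20))
    ```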

  4. 40 CFR 90.706 - Engine sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    This regulation defines the variables used in the Sample Size Equation for engine emission testing, including the emission test result for an individual engine, the mean of emission test results of the actual sample, and the FEL; the current test is combined with the last test result from the previous model year before the required sample size is calculated, and only final test results may be used to calculate the variables in the Sample Size Equation.

  5. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.

  6. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
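
    A minimal sketch of the idea for the two-sample parallel design with an equality test is shown below: power is computed exactly from the noncentral t distribution, and the per-arm sample size is the smallest n reaching the target power. The noninferiority/superiority, equivalence, and crossover cases in the paper follow the same pattern with different noncentrality parameters and rejection regions; the inputs here are illustrative.

    ```python
    # Sketch: per-arm n for a two-sample equality test, computing power exactly from the
    # noncentral t distribution (rather than the normal approximation).
    from scipy.stats import t, nct

    def power_two_sample(n, delta, sd, alpha=0.05):
        df = 2 * n - 2
        ncp = (delta / sd) * (n / 2) ** 0.5          # noncentrality parameter
        t_crit = t.ppf(1 - alpha / 2, df)
        return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

    def n_per_arm(delta, sd, alpha=0.05, power=0.8):
        n = 2
        while power_two_sample(n, delta, sd, alpha) < power:
            n += 1
        return n

    print(n_per_arm(delta=5, sd=10))   # about 64 per arm for a 0.5-SD difference
    ```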

  7. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
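
    As an illustration of one case from the list above (the independent two-sample t test), the snippet below uses statsmodels' power classes; the effect size, alpha, and power are illustrative and not taken from the article.

    ```python
    # Example of one case from the article's list (two-sample t test), using statsmodels.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative='two-sided')
    print(round(n_per_group))   # about 64 per group for a medium (d = 0.5) effect

    # The same object can be used the other way round, to find the achieved power:
    print(analysis.power(effect_size=0.5, nobs1=50, alpha=0.05, ratio=1.0))
    ```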

  8. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.

  9. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.

    PubMed

    Matsui, Shigeyuki

    2005-05-15

    This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test to compare the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and also time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.
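
    The sketch below is a deliberately simplified, textbook-style calculation rather than the paper's nonparametric rate-function formula: it compares two Poisson event rates with a common follow-up time and inflates the variance by an over-dispersion factor. It is included only to make the ingredients (rates, follow-up, over-dispersion) concrete; all values are illustrative.

    ```python
    # Simplified sketch, NOT the paper's formula: per-group n for comparing two event
    # rates with follow-up time t per patient, inflating for over-dispersion by phi.
    from math import ceil
    from scipy.stats import norm

    def n_per_group(rate1, rate2, followup, phi=1.0, alpha=0.05, power=0.8):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(phi * z ** 2 * (rate1 + rate2) / (followup * (rate1 - rate2) ** 2))

    # Example: 2.0 vs 1.4 events/year, 1 year of follow-up, over-dispersion factor 1.5.
    print(n_per_group(rate1=2.0, rate2=1.4, followup=1.0, phi=1.5))
    ```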

  10. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
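
    For context, the sketch below shows the underlying calculation: the asymptotic growth rate lambda is the dominant eigenvalue of the projection matrix, and re-estimating a single vital rate from small binomial samples illustrates how sampling variance propagates into lambda. The matrix and sample sizes are invented, not taken from the study.

    ```python
    # Minimal example: lambda is the dominant eigenvalue of a stage-structured projection
    # matrix. The matrix below is invented for illustration, not taken from the study.
    import numpy as np

    A = np.array([[0.0, 0.5, 2.0],    # stage-specific fecundities (top row)
                  [0.3, 0.4, 0.0],    # survival/transition probabilities
                  [0.0, 0.4, 0.8]])

    def growth_rate(matrix):
        eigenvalues = np.linalg.eigvals(matrix)
        return float(max(abs(eigenvalues)))   # dominant eigenvalue = asymptotic growth rate

    print(growth_rate(A))

    # Sampling-variance sketch: re-estimate one survival rate from small binomial samples
    # and watch the spread of the lambda estimates shrink as the sample size grows.
    rng = np.random.default_rng(0)
    for n in (10, 50, 500):
        lams = []
        for _ in range(1000):
            A_hat = A.copy()
            A_hat[1, 0] = rng.binomial(n, 0.3) / n   # resampled juvenile survival
            lams.append(growth_rate(A_hat))
        print(n, round(np.std(lams), 4))
    ```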

  11. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  12. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians. The first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
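
    A minimal sketch of the conjugacy the authors exploit is shown below: with a mixture-of-Betas prior on the response rate, the posterior after x responses in n patients is again a mixture of Betas, with weights updated by each component's Beta-Binomial marginal likelihood; the design question is whether the posterior probability of exceeding a target response reaches a chosen threshold at a candidate n. The prior components, data, and target are illustrative.

    ```python
    # Sketch of the conjugacy described above: a mixture-of-Betas prior on the response
    # rate yields a posterior that is again a mixture of Betas, with weights updated by
    # each component's Beta-Binomial marginal likelihood.
    import numpy as np
    from scipy.special import betaln
    from scipy.stats import beta

    def posterior_mixture(weights, a, b, x, n):
        a, b = np.asarray(a, float), np.asarray(b, float)
        log_marginal = betaln(a + x, b + n - x) - betaln(a, b)   # binomial coef. cancels
        w = np.asarray(weights, float) * np.exp(log_marginal)
        return w / w.sum(), a + x, b + n - x

    def prob_exceeds_target(weights, a, b, x, n, target):
        w, a_post, b_post = posterior_mixture(weights, a, b, x, n)
        return float(np.sum(w * beta.sf(target, a_post, b_post)))

    # Pessimistic and optimistic clinicians, equally weighted (illustrative numbers):
    prior_w, prior_a, prior_b = [0.5, 0.5], [2, 8], [8, 2]
    print(prob_exceeds_target(prior_w, prior_a, prior_b, x=12, n=20, target=0.40))
    ```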

  13. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219

  14. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.

  15. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
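
    The sketch below illustrates point (3) above under simple normal-approximation assumptions: the main RCT is sized using the pilot SD, and the power actually achieved is recomputed for a true SD that exceeds the pilot SD by various factors. The effect size and SDs are invented.

    ```python
    # Sketch of point (3) above: the power actually achieved by the main RCT when the
    # pilot SD used in the calculation underestimates the true SD by a given factor.
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.8):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    def achieved_power(n, delta, true_sd, alpha=0.05):
        z_a = norm.ppf(1 - alpha / 2)
        return norm.cdf(delta / (true_sd * sqrt(2 / n)) - z_a)

    delta, pilot_sd = 5.0, 9.0                    # illustrative values
    n = n_per_arm(delta, pilot_sd)
    for inflation in (1.0, 1.1, 1.25):            # true SD = pilot SD * inflation
        print(inflation, round(achieved_power(n, delta, pilot_sd * inflation), 3))
    ```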

  16. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  17. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent and provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, bootstrap sample size estimates for comparing two parallel-design arms with continuous data are presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculation by mathematical formulas (under the normal distribution assumption) is also carried out for the identical data. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as will be used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
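
    A minimal sketch of the bootstrap power estimation described above, under invented "historical" data: resample both groups at a candidate per-group size, apply the planned test (a Wilcoxon rank-sum / Mann-Whitney test here), and take the rejection proportion as the estimated power; the smallest n whose estimated power reaches the target would be the bootstrap sample size.

    ```python
    # Sketch of bootstrap power estimation: resample historical data at candidate group
    # sizes, apply the planned test (Wilcoxon rank-sum here), and use the rejection
    # proportion as the estimated power. The "historical" data are simulated stand-ins.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(42)
    hist_control = rng.lognormal(mean=0.0, sigma=0.6, size=120)   # skewed, non-normal
    hist_treated = rng.lognormal(mean=0.3, sigma=0.6, size=120)

    def bootstrap_power(n_per_group, n_boot=2000, alpha=0.05):
        rejections = 0
        for _ in range(n_boot):
            a = rng.choice(hist_control, size=n_per_group, replace=True)
            b = rng.choice(hist_treated, size=n_per_group, replace=True)
            if mannwhitneyu(a, b, alternative='two-sided').pvalue < alpha:
                rejections += 1
        return rejections / n_boot

    for n in (30, 60, 90):
        print(n, bootstrap_power(n))
    ```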

  18. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative and a useful complement to existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency reported in the literature under some conditions, and may be smaller under others, in which case it underestimates the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
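
    For orientation, a short Python sketch of the conventional adjustment for variable cluster sizes, the design effect 1 + ((cv^2 + 1) * m_bar - 1) * ICC, rather than the noncentrality-based relative efficiency proposed in the article. All numerical inputs are illustrative.

        import numpy as np
        from scipy import stats

        def n_per_group_individual(delta, sd, alpha=0.05, power=0.80):
            """Individually randomised per-group sample size (normal approximation)."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return 2 * (z * sd / delta) ** 2

        def design_effect(m_bar, cv, icc):
            """Widely used approximation allowing for variable cluster sizes, where cv is the
            coefficient of variation of cluster size (cv = 0 recovers equal cluster sizes)."""
            return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

        delta, sd, m_bar, icc = 0.3, 1.0, 30, 0.05
        n_ind = n_per_group_individual(delta, sd)
        for cv in (0.0, 0.4, 0.8):
            n_total = n_ind * design_effect(m_bar, cv, icc)
            # individuals per group, clusters per group
            print(cv, int(np.ceil(n_total)), int(np.ceil(n_total / m_bar)))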

  19. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
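
    A hedged sketch of the asymptotic unconditional McNemar sample size (number of pairs), computed from the hypothesised discordant cell probabilities of the 2 × 2 table; the discordant probabilities used here are made up for illustration.

        import numpy as np
        from scipy import stats

        def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
            """Number of pairs for the asymptotic unconditional McNemar test, where p10 and
            p01 are the hypothesised discordant cell probabilities of the 2 x 2 table."""
            pd = p10 + p01                  # total discordant probability
            diff = p10 - p01
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            n = (z_a * np.sqrt(pd) + z_b * np.sqrt(pd - diff ** 2)) ** 2 / diff ** 2
            return int(np.ceil(n))

        print(mcnemar_pairs(p10=0.25, p01=0.15))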

  20. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
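
    The sketch below shows one common way to turn estimated variance components from such a multilevel model into the three nuisance parameters named above (intracluster correlation, cluster autocorrelation, individual autocorrelation) for a closed-cohort design. The variance components are hypothetical and the definitions are the usual ones, not necessarily the paper's exact parameterisation.

        def nuisance_correlations(var_cluster, var_cluster_time, var_subject, var_resid):
            """Convert variance components from a closed-cohort multilevel model into the
            nuisance parameters for a longitudinal CRT sample size calculation:
            ICC = within-period intracluster correlation, CAC = cluster autocorrelation,
            IAC = individual autocorrelation."""
            total = var_cluster + var_cluster_time + var_subject + var_resid
            icc = (var_cluster + var_cluster_time) / total
            cac = var_cluster / (var_cluster + var_cluster_time)
            iac = var_subject / (var_subject + var_resid)
            return icc, cac, iac

        # Hypothetical variance components estimated from a similarly clustered time series
        print(nuisance_correlations(0.02, 0.01, 0.30, 0.67))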

  1. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  2. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero, there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. By illustrating how the parameters required for sample size calculations arise from the CRXO design and by providing guidance on both how to choose values for the parameters and perform the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve.
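
    A back-of-envelope Python sketch (not the tutorial's formulae) for a two-period, two-intervention, cross-sectional CRXO analysed by within-cluster differences of cluster-period means, illustrating how sensitive the required number of clusters is to the BPC; all inputs are hypothetical.

        import numpy as np
        from scipy import stats

        def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
            """Rough total number of clusters for a two-period, cross-sectional CRXO trial
            analysed by within-cluster differences of cluster-period means (normal
            approximation). m = individuals per cluster-period; wpc/bpc = within- and
            between-period within-cluster correlations."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            var_factor = 1 + (m - 1) * wpc - m * bpc   # variance inflation of the paired analysis
            k = 2 * (z * sd / delta) ** 2 * var_factor / m
            return int(np.ceil(k))

        # Small changes in the BPC noticeably change the required number of clusters
        for bpc in (0.01, 0.03, 0.05):
            print(bpc, crxo_clusters(delta=0.25, sd=1.0, m=50, wpc=0.05, bpc=bpc))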

  3. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  4. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  5. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires the expected difference and variance of costs and effects, their correlation, the significance level (alpha), the power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
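
    One common way to formalise such a calculation is the net monetary benefit framework, sketched below under assumed per-patient SDs, correlation, and willingness-to-pay threshold; this is an illustrative stand-in, not the authors' simulation method.

        import numpy as np
        from scipy import stats

        def n_per_arm_nmb(d_effect, d_cost, sd_effect, sd_cost, corr, wtp,
                          alpha=0.05, power=0.80):
            """Per-arm sample size for testing incremental net monetary benefit > 0,
            assuming per-patient effects and costs with the given SDs and correlation
            and a willingness-to-pay threshold `wtp` per unit of effect."""
            nmb = wtp * d_effect - d_cost          # expected incremental net benefit
            var = (wtp ** 2 * sd_effect ** 2 + sd_cost ** 2
                   - 2 * wtp * corr * sd_effect * sd_cost)
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return int(np.ceil(2 * z ** 2 * var / nmb ** 2))

        # Hypothetical inputs: 0.05 QALY gain, 300 extra cost, WTP 20,000 per QALY
        print(n_per_arm_nmb(0.05, 300, sd_effect=0.25, sd_cost=2500, corr=0.1, wtp=20000))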

  6. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of effect size are explained. Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
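
    A small Python sketch of the kind of calculation described: compute Cohen's d from pilot summary statistics and translate it into an approximate per-group n for an independent two-sample t-test (the calculation G*Power automates). The summary statistics are invented for illustration.

        import numpy as np
        from scipy import stats

        def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
            """Cohen's d with a pooled SD, as commonly defined for two independent groups."""
            pooled = np.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
            return (mean1 - mean2) / pooled

        def n_per_group(d, alpha=0.05, power=0.80):
            """Approximate per-group n for an independent two-sample t-test
            (normal approximation)."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return int(np.ceil(2 * (z / d) ** 2))

        d = cohens_d(52.0, 48.0, 10.0, 11.0, 30, 30)   # hypothetical pilot summary statistics
        print(round(d, 2), n_per_group(d))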

  7. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  8. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  9. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

    PVDF films were fabricated using a roll hot press, with samples prepared at nine different temperatures to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. Diffraction patterns were obtained by X-ray diffraction, and the crystallite size of each sample was then calculated from its diffraction pattern using the Scherrer equation. The calculated crystallite sizes for samples pressed at 130 °C up to 170 °C increased from 7.2 nm up to 20.54 nm. These results show that increasing the temperature also increases the crystallite size of the sample, because a higher temperature raises the degree of crystallization of the PVDF film, so that the crystallite size increases as well. This indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, as previously studied by Nakagawa.
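
    The crystallite size calculation referred to above uses the Scherrer equation, D = K*lambda / (beta * cos(theta)); a minimal Python sketch with illustrative peak values (not taken from the paper) is shown below.

        import numpy as np

        def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
            """Crystallite size (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta)).
            two_theta_deg: peak position 2-theta in degrees; fwhm_deg: peak width (FWHM)
            in degrees; default wavelength is Cu K-alpha."""
            theta = np.radians(two_theta_deg / 2)
            beta = np.radians(fwhm_deg)        # FWHM must be converted to radians
            return k * wavelength_nm / (beta * np.cos(theta))

        # Illustrative peak: 2-theta = 20.3 degrees, FWHM = 0.45 degrees
        print(round(scherrer_size(two_theta_deg=20.3, fwhm_deg=0.45), 1))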

  10. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Reporting and methodological quality of sample size calculations in cluster randomized trials could be improved: a review.

    PubMed

    Rutterford, Clare; Taljaard, Monica; Dixon, Stephanie; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    To assess the quality of reporting and accuracy of a priori estimates used in sample size calculations for cluster randomized trials (CRTs). We reviewed 300 CRTs published between 2000 and 2008. The prevalence of reporting sample size elements from the 2004 CONSORT recommendations was evaluated and a priori estimates compared with those observed in the trial. Of the 300 trials, 166 (55%) reported a sample size calculation. Only 36 of 166 (22%) reported all recommended descriptive elements. Elements specific to CRTs were the worst reported: a measure of within-cluster correlation was specified in only 58 of 166 (35%). Only 18 of 166 articles (11%) reported both a priori and observed within-cluster correlation values. Except in two cases, observed within-cluster correlation values were either close to or less than a priori values. Even with the CONSORT extension for cluster randomization, the reporting of sample size elements specific to these trials remains below that necessary for transparent reporting. Journal editors and peer reviewers should implement stricter requirements for authors to follow CONSORT recommendations. Authors should report observed and a priori within-cluster correlation values to enable comparisons between these over a wider range of trials. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  12. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  13. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
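
    A simplified Monte Carlo sketch in the spirit of the analysis described: repeatedly subsample k trees, compute the subsample mean, and track the relative spread of those means as k grows, so the "optimal" sample size is where the error stops shrinking appreciably. The simulated sap flux values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(11)

        def potential_error(values, k, n_boot=5000):
            """Half-width of the central 95% range of subsample means, relative to the
            full-plot mean: a simple stand-in for the 'potential error' idea."""
            means = np.array([rng.choice(values, size=k, replace=False).mean()
                              for _ in range(n_boot)])
            lo, hi = np.percentile(means, [2.5, 97.5])
            return (hi - lo) / 2 / values.mean()

        fd = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # simulated tree-to-tree variation
        for k in (5, 10, 15, 20, 30):
            print(k, round(potential_error(fd, k), 3))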

  14. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  15. Sampling strategies for radio-tracking coyotes

    USGS Publications Warehouse

    Smith, G.J.; Cary, J.R.; Rongstad, O.J.

    1981-01-01

    Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.

  16. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  17. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    The aim was to determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES-hat) and a 95% CI (ES-hat_L, ES-hat_U) was calculated for the mean change in 3-month total inflammatory score for each method. A corresponding 95% CI [n_L(ES-hat_U), n_U(ES-hat_L)] was obtained for a post hoc sample size reflecting the uncertainty in ES-hat. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES-hat, ES = ES-hat_L, and ES = ES-hat_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in the study ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

  18. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267

  19. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
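
    A rough Python sketch of the logic: because N = M / P, the relative error of the size estimate is approximately the relative error of P, so the survey can be sized for P with an assumed RDS design effect. The prevalence, margin, and design effect below are assumptions, not the Harare inputs.

        import numpy as np
        from scipy import stats

        def rds_sample_size(p, rel_margin, design_effect=2.0, alpha=0.05):
            """Survey sample size so that the multiplier estimate N = M / P has roughly the
            desired relative precision; sizes the survey for P and inflates by the
            assumed RDS design effect."""
            z = stats.norm.ppf(1 - alpha / 2)
            n_srs = z ** 2 * (1 - p) / (rel_margin ** 2 * p)   # relative margin for a proportion
            return int(np.ceil(design_effect * n_srs))

        # Hypothetical: ~20% report receiving the distributed object; target precision +/-25%
        print(rds_sample_size(p=0.20, rel_margin=0.25))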

  20. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  1. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    PubMed

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    The aim was to assess the reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about overall reproducibility of core laboratories in contradistinction to performance of individual technicians. Also, qualitative parameters are being adjudicated increasingly as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion required substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance that is comparable to performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess and conclusions based on these parameters should arise only from very large trials.

  2. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
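
    A short simulation in the same spirit: repeatedly subsample n records from a large cohort, recompute sensitivity and specificity, and watch how the spread across replicates shrinks as n grows. The cohort and prediction rule below are simulated, not the study data.

        import numpy as np

        rng = np.random.default_rng(3)

        def subsample_variability(truth, prediction, n, n_reps=100):
            """Repeatedly subsample n records and recompute sensitivity and specificity,
            returning the range across replicates."""
            sens, spec = [], []
            for _ in range(n_reps):
                idx = rng.choice(len(truth), size=n, replace=False)
                t, p = truth[idx], prediction[idx]
                sens.append((p[t == 1] == 1).mean())
                spec.append((p[t == 0] == 0).mean())
            return np.ptp(sens), np.ptp(spec)

        # Simulated 'cohort' of 6000 with 30% outcome prevalence and an imperfect rule
        truth = rng.binomial(1, 0.30, 6000)
        prediction = np.where(rng.random(6000) < 0.8, truth, 1 - truth)   # 80% agreement
        for n in (100, 200, 400, 800):
            print(n, subsample_variability(truth, prediction, n))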

  3. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725

  4. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
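
    A simulation sketch of the beta-binomial idea: draw cluster-level probabilities from a Beta distribution with mean p and the stated ICC, draw within-cluster binomial counts, and check the misclassification risks of a candidate C-LQAS decision rule. The design parameters are hypothetical and this is not the authors' code.

        import numpy as np

        rng = np.random.default_rng(7)

        def total_successes(p, n_clusters, m, icc, n_sim):
            """Simulate the LQAS total under a beta-binomial (clustered) model."""
            a, b = p * (1 - icc) / icc, (1 - p) * (1 - icc) / icc   # Beta parameters, mean p
            cluster_p = rng.beta(a, b, size=(n_sim, n_clusters))
            return rng.binomial(m, cluster_p).sum(axis=1)

        def clqas_risks(p_low, p_high, n_clusters, m, threshold, icc, n_sim=20000):
            """Misclassification risks of 'classify as adequate if total >= threshold':
            (probability of failing a truly adequate area, probability of passing a poor one)."""
            risk_alpha = np.mean(total_successes(p_high, n_clusters, m, icc, n_sim) < threshold)
            risk_beta = np.mean(total_successes(p_low, n_clusters, m, icc, n_sim) >= threshold)
            return risk_alpha, risk_beta

        # Hypothetical design: 12 clusters of 5, classify adequate if >= 45 of 60 correct
        print(clqas_risks(p_low=0.60, p_high=0.85, n_clusters=12, m=5, threshold=45, icc=0.10))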

  5. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
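
    A Python sketch of the compound-symmetry case: the variance multiplier for an ANCOVA on the mean of k follow-up measures adjusted for one baseline is (1 + (k - 1) * rho) / k - rho^2, and maximising it over rho gives the conservative correlation rho = (k - 1) / (2k), which reproduces the guaranteed savings of roughly 44%, 56%, and 61% for 2, 3, and 4 follow-ups quoted above. The effect size and SD in the example are illustrative.

        import numpy as np
        from scipy import stats

        def variance_factor(k, rho):
            """Variance multiplier for an ANCOVA on the mean of k follow-up measures,
            adjusted for one baseline, under compound symmetry with correlation rho
            (relative to a single unadjusted post measure)."""
            return (1 + (k - 1) * rho) / k - rho ** 2

        def n_per_group(delta, sd, k, rho, alpha=0.05, power=0.80):
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return int(np.ceil(2 * (z * sd / delta) ** 2 * variance_factor(k, rho)))

        # The conservative (worst-case) rho maximises the variance factor: rho = (k - 1) / (2k)
        for k in (2, 3, 4):
            rho_worst = (k - 1) / (2 * k)
            print(k,
                  round(1 - variance_factor(k, rho_worst), 2),   # guaranteed saving vs t-test
                  n_per_group(delta=0.5, sd=1.0, k=k, rho=rho_worst))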

  7. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  8. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  9. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  10. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
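
    For orientation only, a sketch of the untrimmed analogue of this setting: a Welch-type per-group sample size for unequal variances under a normal approximation. This is not Yuen's trimmed-mean formula, and the inputs are invented.

        import numpy as np
        from scipy import stats

        def welch_n_per_group(delta, sd1, sd2, alpha=0.05, power=0.80):
            """Per-group n for a two-sample comparison with unequal variances
            (Welch-type, equal allocation, normal approximation)."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return int(np.ceil(z ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2))

        print(welch_n_per_group(delta=5.0, sd1=8.0, sd2=14.0))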

  11. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared spectrometer, and a vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurements show that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  12. Thermal conductivity of graphene mediated by strain and size

    DOE PAGES

    Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang; ...

    2016-06-09

    Based on first-principles calculations and full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelength smaller than 10 nm contribute 80% to the intrinsic room-temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. k of graphene can be tuned within a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide the prediction and engineering of k of graphene by varying strain and size.

  13. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083

  14. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
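
    A numerical sketch of a simplified version of this setting (sample size adaptation only, equal allocation, known variance, one-sided test): for each interim z-value, choose the second-stage to first-stage sample size ratio that maximises the conditional error of the naive fixed-sample test, then integrate over the null distribution of the interim statistic. The grid limits are arbitrary choices.

        import numpy as np
        from scipy import stats

        def worst_case_type1(alpha=0.025, n_ratio_grid=400):
            """Worst-case one-sided type 1 error when the second-stage sample size is chosen
            after seeing the interim z-value but the final analysis ignores the adaptation
            (naive fixed-sample z-test on the pooled data)."""
            z_a = stats.norm.ppf(1 - alpha)
            z1 = np.linspace(-4, 4, 2001)                 # interim z-values under the null
            ratios = np.linspace(0.01, 20, n_ratio_grid)  # candidate n2/n1 ratios
            # conditional error for every (z1, ratio) pair, then maximise over the ratio
            num = z_a * np.sqrt(1 + ratios)[None, :] - z1[:, None]
            cond_err = 1 - stats.norm.cdf(num / np.sqrt(ratios)[None, :])
            worst = cond_err.max(axis=1)
            # integrate the worst-case conditional error over the null density of z1
            dz = z1[1] - z1[0]
            return float((worst * stats.norm.pdf(z1)).sum() * dz)

        print(round(worst_case_type1(), 4))   # noticeably larger than the nominal 0.025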

  15. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  16. Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis.

    PubMed

    Wildt, Signe; Krag, Aleksander; Gluud, Liselotte

    2011-01-01

    Objectives To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov and the consistency between primary outcomes, secondary outcomes and sample size specified in http://ClinicalTrials.gov and published trials. Methods Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoint, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between primary and secondary outcomes, sample size and sample size calculations data in http://ClinicalTrials.gov and in the published paper were registered. Results 105 trials were evaluated. 66 trials (63%) were published. 30% of trials were registered incorrectly after their completion date. Several data elements of the required ICMJE data list were not filled in, with missing data in 22% and 11%, respectively, of cases concerning the primary outcome measure and sample size. In 26% of the published papers, data on sample size calculations were missing and discrepancies between sample size reporting in http://ClinicalTrials.gov and published trials existed. Conclusion The quality of registration of randomised controlled trials still needs improvement.

  17. Power and sample size for multivariate logistic modeling of unmatched case-control studies.

    PubMed

    Gail, Mitchell H; Haneuse, Sebastien

    2017-01-01

    Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
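
    The abstract mentions both analytical and simulation methods; the sketch below illustrates the simulation route for a scalar exposure effect adjusted for a single covariate in unconditional logistic regression. For simplicity it generates cohort-style data rather than reproducing case-control sampling, and the coefficients, exposure prevalence, and sample size are assumed values.

    ```python
    # Hedged simulation sketch (assumed parameters, cohort-style data generation):
    # power of the Wald test for a scalar exposure effect, adjusting for one covariate.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def simulated_power(n=600, beta_exposure=0.5, beta_covariate=0.3, n_sim=500, alpha=0.05):
        rejections = 0
        for _ in range(n_sim):
            exposure = rng.binomial(1, 0.3, n)           # binary exposure of interest
            covariate = rng.normal(0, 1, n)              # covariate to adjust for
            logit = -0.5 + beta_exposure * exposure + beta_covariate * covariate
            y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
            X = sm.add_constant(np.column_stack([exposure, covariate]))
            fit = sm.Logit(y, X).fit(disp=0)
            rejections += fit.pvalues[1] < alpha         # Wald test on the exposure term
        return rejections / n_sim

    print(f"estimated power: {simulated_power():.2f}")
    ```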

  18. GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices

    PubMed Central

    Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.

    2014-01-01

    Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
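
    GLIMMPSE Lite itself is a mobile app, but the same kind of one-way ANOVA power and sample size calculation can be sketched with a standard library; the Cohen's f effect size and number of groups below are assumed values for illustration, not outputs of the app.

    ```python
    # Minimal sketch of a one-way ANOVA power/sample size calculation (assumed inputs).
    from statsmodels.stats.power import FTestAnovaPower

    analysis = FTestAnovaPower()
    # total sample size for 80% power at alpha = 0.05, Cohen's f = 0.25, 4 groups
    n_total = analysis.solve_power(effect_size=0.25, k_groups=4, alpha=0.05, power=0.80)
    print(f"total N for 80% power: {n_total:.0f}")
    # achieved power for a fixed total N of 120
    print(f"power at N = 120: {analysis.power(effect_size=0.25, nobs=120, alpha=0.05, k_groups=4):.2f}")
    ```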

  19. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  1. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between the high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials

    PubMed Central

    Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda

    2016-01-01

    In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797

  3. Sample Size Requirements and Study Duration for Testing Main Effects and Interactions in Completely Randomized Factorial Designs When Time to Event is the Outcome

    PubMed Central

    Moser, Barry Kurt; Halabi, Susan

    2013-01-01

    In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
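
    The paper's matrix formulation is not reproduced here; as a hedged illustration of the time-to-event setting it addresses, the sketch below uses the familiar Schoenfeld-type approximation for the number of events required in the simplest two-arm comparison under proportional hazards. The hazard ratio, allocation, and power are assumed values.

    ```python
    # Hedged sketch: required number of events for a two-arm log-rank comparison
    # (Schoenfeld-type approximation), with assumed design inputs.
    import math
    from scipy.stats import norm

    def required_events(hazard_ratio, alloc=0.5, alpha=0.05, power=0.9):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

    events = required_events(hazard_ratio=0.75, alloc=0.5)
    print(f"events needed: {math.ceil(events)}")   # roughly 508 events for HR = 0.75, 90% power
    ```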

  4. Investigation of the Specht density estimator

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Rydl, L. M.

    1971-01-01

    The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.

  5. Test equality between two binary screening tests with a confirmatory procedure restricted on screen positives.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    In studies of screening accuracy, we commonly encounter data in which, for ethical reasons, a confirmatory procedure is administered only to subjects who screen positive. We focus our discussion on simultaneously testing equality of sensitivity and specificity between two binary screening tests when only subjects with screen positives receive the confirmatory procedure. We develop four asymptotic test procedures and one exact test procedure. We derive a sample size calculation formula for a desired power of detecting a difference at a given nominal α-level. We employ Monte Carlo simulation to evaluate the performance of these test procedures and the accuracy of the sample size calculation formula developed here in a variety of situations. Finally, we use the data obtained from a study of the prostate-specific-antigen test and digital rectal examination test on 949 Black men to illustrate the practical use of these test procedures and the sample size calculation formula.

  6. Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.

    2001-01-01

    For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3- year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, relative performances of different methods can vary with population size, sample size, and degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performances over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.

  7. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.

  8. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 microm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.

  9. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Treesearch

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
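
    Since the abstract is truncated, the following is only a generic sketch of how such a survey is often sized: the number of observations needed to estimate a mean (here, bulk density) within a chosen relative error, given an assumed coefficient of variation. The CV, allowable error, and confidence level are illustrative assumptions, not values from this study.

    ```python
    # Generic sketch (assumed CV, relative error, and confidence level):
    # smallest n with t_(n-1) * CV / sqrt(n) <= relative error, i.e. n >= (t * CV / E)^2.
    from scipy.stats import t

    def n_for_relative_error(cv, rel_error, conf=0.95):
        n = 2
        while n < (t.ppf(1 - (1 - conf) / 2, df=n - 1) * cv / rel_error) ** 2:
            n += 1
        return n

    # e.g. a 30% coefficient of variation and a +/-10% allowable error on the mean
    print(n_for_relative_error(cv=0.30, rel_error=0.10))   # about 38 samples
    ```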

  10. Sample size of the reference sample in a case-augmented study.

    PubMed

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Sample size calculations for the design of cluster randomized trials: A summary of methodology.

    PubMed

    Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David

    2015-05-01

    Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and this has stimulated parallel statistical developments concerned with the design and analysis of these trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size and the effect size are detailed. The problem of establishing the anticipated magnitude of the between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
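
    A minimal sketch of the core inflation step described above: an individually randomized sample size for comparing two proportions is multiplied by a design effect for clustering, with an optional adjustment for variation in cluster size through its coefficient of variation. The proportions, ICC, cluster size, and CV are assumed values, and the adjustment shown is one commonly used form rather than the paper's full methodology.

    ```python
    # Sketch with assumed inputs: inflate an individually randomized sample size
    # by a design effect for clustering (optionally adjusted for unequal cluster sizes).
    import math
    from scipy.stats import norm

    def n_individual(p1, p2, alpha=0.05, power=0.8):
        """Per-arm n for comparing two proportions (simple normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        pbar = (p1 + p2) / 2
        return z**2 * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2

    def design_effect(m, icc, cv=0.0):
        """1 + (m - 1) * icc for equal clusters; with unequal sizes, m is commonly
        replaced by m * (1 + cv**2)."""
        return 1 + ((1 + cv**2) * m - 1) * icc

    n_ind = n_individual(0.20, 0.12)
    n_clustered = n_ind * design_effect(m=30, icc=0.02, cv=0.4)
    print(f"per-arm n: {math.ceil(n_clustered)} (~{math.ceil(n_clustered / 30)} clusters of 30)")
    ```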

  12. Accounting for missing data in the estimation of contemporary genetic effective population size (N(e) ).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N(e) ) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N(e) ). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N(e) and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N(e) and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.

  13. Application of SAXS and SANS in evaluation of porosity, pore size distribution and surface area of coal

    USGS Publications Warehouse

    Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.

    2004-01-01

    This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.

  14. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  15. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
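
    The determinants listed above (α, power = 1 − β, variability, and the effect size) combine into the familiar two-group formula; the sketch below uses the normal approximation, which slightly understates the exact t-test requirement. The standardized effect size d = 0.5 is an assumed, illustrative value.

    ```python
    # Normal-approximation sample size per group for a two-sided two-sample comparison.
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z / d) ** 2

    print(f"n per group: {n_per_group(0.5):.0f}")   # ~63 per group for d = 0.5, 80% power, alpha = 0.05
    ```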

  16. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    PubMed

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
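
    A hedged sketch of the simulation counterpart to these formulas: the power of the Kruskal-Wallis test for ordered categorical data when each group's multinomial probabilities are taken from a pilot study. The probabilities, number of groups, and group size below are invented and do not come from the MOG/EAE example.

    ```python
    # Simulation-based Kruskal-Wallis power for ordered categorical outcomes
    # (hypothetical pilot-study multinomial probabilities).
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(7)
    categories = np.arange(5)                        # ordinal scores 0..4
    pilot_probs = [
        [0.35, 0.30, 0.20, 0.10, 0.05],              # group 1 (e.g., control)
        [0.20, 0.25, 0.25, 0.20, 0.10],              # group 2
        [0.10, 0.20, 0.25, 0.25, 0.20],              # group 3
    ]

    def kw_power(n_per_group, n_sim=2000, alpha=0.05):
        hits = 0
        for _ in range(n_sim):
            groups = [rng.choice(categories, size=n_per_group, p=p) for p in pilot_probs]
            _, pval = kruskal(*groups)
            hits += pval < alpha
        return hits / n_sim

    print(f"power with n = 25 per group: {kw_power(25):.2f}")
    ```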

  17. Using Sieving and Unknown Sand Samples for a Sedimentation-Stratigraphy Class Project with Linkage to Introductory Courses

    ERIC Educational Resources Information Center

    Videtich, Patricia E.; Neal, William J.

    2012-01-01

    Using sieving and sample "unknowns" for instructional grain-size analysis and interpretation of sands in undergraduate sedimentology courses has advantages over other techniques. Students (1) learn to calculate and use statistics; (2) visually observe differences in the grain-size fractions, thereby developing a sense of specific size…

  18. Are catchment-wide erosion rates really "Catchment-Wide"? Effects of grain size on erosion rates determined from 10Be

    NASA Astrophysics Data System (ADS)

    Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.

    2012-12-01

    Early studies pioneering the method for catchment-wide erosion rates by measuring 10Be in alluvial sediment were taken at river mouths and used the sand size grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125 mm - 0.710 mm fraction or the 0.125 mm - 4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach by Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates). We, therefore, hypothesize that the coarser grain sizes we included are preferentially sampling a smaller upstream area, and not the entire upstream catchment, which is assumed when measurements are based solely on the sand sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions: 0.125 mm - 0.710 mm, 0.710 mm - 4 mm, and >4 mm and measured 10Be concentrations in each fraction. Although there is some variation in the grain size fraction that yields the highest erosion rate, generally, the coarser grain size fractions have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium sized grains (0.710 mm - 4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretations of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand size fraction, resulting in erosion rates that are more pertinent to that sample point than to the entire catchment.

  19. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size that the study has a high probability of correctly detecting, based on the available prior information on the difference δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
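
    The following is a rough numerical sketch of the conditional expected power idea only: a classical power curve is averaged over assumed priors for δ and σ, and compared with the power obtained by plugging in the prior means. The priors, group size, and α are illustrative and do not reproduce the article's hybrid procedure or its alprazolam example.

    ```python
    # Hedged sketch: conditional expected power = classical power averaged over
    # priors on the mean difference (delta) and SD (sigma); all inputs are assumed.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def classical_power(delta, sigma, n_per_group, alpha=0.05):
        se = sigma * np.sqrt(2.0 / n_per_group)
        return norm.cdf(np.abs(delta) / se - norm.ppf(1 - alpha / 2))   # normal approximation

    def conditional_expected_power(n_per_group, n_draws=100_000):
        delta = rng.normal(loc=4.0, scale=1.5, size=n_draws)             # prior on delta
        sigma = np.abs(rng.normal(loc=10.0, scale=2.0, size=n_draws))    # prior on sigma
        return classical_power(delta, sigma, n_per_group).mean()

    print(f"power at the prior means:   {classical_power(4.0, 10.0, 100):.2f}")
    print(f"conditional expected power: {conditional_expected_power(100):.2f}")
    ```

    The averaged value is typically lower than the plug-in value, which is the sense in which the naïve effect size estimate should be down-weighted.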

  20. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.

  1. Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs

    PubMed Central

    Lakens, Daniël

    2013-01-01

    Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a-priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow. PMID:24324449
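
    In the spirit of the primer's supplementary spreadsheet, here is a small sketch of common effect size calculations: Cohen's d from summary statistics, d recovered from an independent-samples t statistic, and eta-squared recovered from an ANOVA F statistic (for a one-way design this equals partial eta-squared). The summary numbers are made up.

    ```python
    # Common effect size conversions (illustrative, made-up summary statistics).
    import math

    def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    def d_from_t(t_stat, n1, n2):
        return t_stat * math.sqrt(1 / n1 + 1 / n2)      # independent-samples conversion

    def eta_squared_from_f(f_stat, df_effect, df_error):
        return (f_stat * df_effect) / (f_stat * df_effect + df_error)

    print(f"d            = {cohens_d(5.2, 4.5, 1.1, 1.3, 40, 40):.2f}")
    print(f"d from t     = {d_from_t(2.91, 40, 40):.2f}")
    print(f"eta^2 from F = {eta_squared_from_f(4.8, 2, 87):.3f}")
    ```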

  2. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
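
    As a hedged sketch of the fixed-precision idea, the snippet below uses the standard negative binomial variance relationship (variance = m + m²/k) to compute the number of 10 m² quadrats needed for a target precision, expressed as the standard error divided by the mean. The target precision and example densities are assumptions; k = 0.3742 is the common k reported above, and this generic formula is not necessarily the exact enumeration plan of the paper.

    ```python
    # Fixed-precision sample size for negative binomial counts (assumed precision and densities).
    def quadrats_needed(mean_density, k=0.3742, precision=0.25):
        # from SE/mean = sqrt((m + m**2 / k) / n) / m  =>  n = (1/m + 1/k) / precision**2
        return (1 / mean_density + 1 / k) / precision**2

    for m in (0.02, 0.1, 0.5):
        print(f"mean = {m:4} ticks per 10 m^2 quadrat -> n = {quadrats_needed(m):,.0f} quadrats")
    ```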

  3. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) availability of interim or group-sequential designs, and (iii) much smaller required sample size.

  4. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  5. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, provided the transformed data satisfy a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
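
    A minimal sketch of the two better-performing approaches compared above, applied to simulated skewed data: a bootstrap of the 2.5th and 97.5th percentiles versus a parametric interval computed on the Box-Cox transformed scale and back-transformed. The simulated distribution and sample size are assumptions, and the bootstrap summary shown is a simplified stand-in for the study's procedure.

    ```python
    # Reference interval sketch on simulated skewed data (assumed distribution, n = 120).
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(42)
    sample = rng.lognormal(mean=1.0, sigma=0.4, size=120)    # skewed "analyte" values

    # nonparametric bootstrap of the 2.5th and 97.5th percentiles
    boot = np.array([
        np.percentile(rng.choice(sample, size=sample.size, replace=True), [2.5, 97.5])
        for _ in range(2000)
    ])
    ri_bootstrap = boot.mean(axis=0)

    # parametric interval on the Box-Cox transformed scale, back-transformed
    z, lam = stats.boxcox(sample)
    ri_boxcox = inv_boxcox(z.mean() + np.array([-1.96, 1.96]) * z.std(ddof=1), lam)

    print(f"bootstrap RI: {ri_bootstrap.round(2)}, Box-Cox parametric RI: {ri_boxcox.round(2)}")
    ```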

  6. Inference and sample size calculation for clinical trials with incomplete observations of paired binary outcomes.

    PubMed

    Zhang, Song; Cao, Jing; Ahn, Chul

    2017-02-20

    We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ≥ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ≥ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≥96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.

  8. A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests

    ERIC Educational Resources Information Center

    Petersen, Calvin R.; Thain, John W.

    1976-01-01

    Relative to test and criterion parameters and cutting scores, the correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)

  9. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation, and therefore it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes, and we have found how these probabilities vary with the size of the orbital sample. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can differ significantly for orbital samples obtained by different observation techniques. For the user's convenience, we have also derived several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.

  10. Stocking, Forest Type, and Stand Size Class - The Southern Forest Inventory and Analysis Unit's Calculation of Three Important Stand Descriptors

    Treesearch

    Dennis M. May

    1990-01-01

    The procedures by which the Southern Forest Inventory and Analysis unit calculates stocking from tree data collected on inventory sample plots are described in this report. Stocking is then used to ascertain two other important stand descriptors: forest type and stand size class. Inventory data for three plots from the recently completed 1989 Tennessee survey are used...

  11. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.

  12. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.

  13. Influence of multidroplet size distribution on icing collection efficiency

    NASA Technical Reports Server (NTRS)

    Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.

    1983-01-01

    Calculation of collection efficiencies of two-dimensional airfoils for a monodispersed droplet icing cloud and a multidispersed droplet cloud is carried out. Comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne particle droplet size sampling instruments. The bias introduced by sampling from different collection volumes is predicted.

  14. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of a longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method perform better than Sobel's method; the distribution of the product method is recommended for practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination with the distribution of the product method in longitudinal mediation study designs.
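
    A single-level sketch of the simplest of the three tests discussed above, the Sobel test, together with a simulation-based power estimate. The article's multilevel longitudinal model is not reproduced here, and the path coefficients, error variances, and sample size are assumed values.

    ```python
    # Sobel test for a simple mediation model (X -> M -> Y) and its simulated power.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    def sobel_p(x, m, y):
        a_fit = sm.OLS(m, sm.add_constant(x)).fit()                          # X -> M
        b_fit = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()    # M -> Y given X
        a, sa = a_fit.params[1], a_fit.bse[1]
        b, sb = b_fit.params[1], b_fit.bse[1]
        z = a * b / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
        return 2 * (1 - norm.cdf(abs(z)))

    def sobel_power(n, a=0.3, b=0.3, n_sim=1000, alpha=0.05):
        hits = 0
        for _ in range(n_sim):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + 0.1 * x + rng.normal(size=n)
            hits += sobel_p(x, m, y) < alpha
        return hits / n_sim

    print(f"Sobel-test power at n = 100: {sobel_power(100):.2f}")
    ```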

  15. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study

    PubMed Central

    2013-01-01

    Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reported mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. Ratio of odds ratio (ROR) was calculated for each meta-analysis and then RORs were combined using a meta-analytic approach. ROR<1 indicated larger beneficial effect in small trials. Small and large trials were compared in methodological qualities including sequence generating, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs <1, and other meta-analyses did not reach a statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generating, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality in small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257

  16. Design of the value of imaging in enhancing the wellness of your heart (VIEW) trial and the impact of uncertainty on power.

    PubMed

    Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M

    2012-04-01

    Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power.

  17. Design of the Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) Trial and the Impact of Uncertainty on Power

    PubMed Central

    Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.

    2014-01-01

    Background Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specification of assumptions for sample size calculations, and (5) the impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events) and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates depend on the form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials, as standard power calculations may tend to overestimate power. PMID:22333998

  18. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, the amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and found to be in good agreement.

  19. High energy ball milling study of Fe{sub 2}MnSn Heusler alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Vivek Kumar, E-mail: vivek.jain129@gmail.com; Lakshmi, N.; Jain, Vishal

    The structural and magnetic properties of as-melted and high-energy ball-milled alloy samples have been studied by X-ray diffraction, DC magnetization and electronic structure calculations by means of density functional theory. The observed properties are compared to those of the bulk sample. There is a marked enhancement of saturation magnetization and coercivity in the nano-sized samples compared with the bulk, which is explained in terms of structural disorder and size effects.

  20. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  1. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
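
    This record's own adjustment is derived for second-order PQL mixed logistic regression and is not reproduced here. As a rough point of comparison only, the sketch below uses a widely cited simpler approximation that inflates the usual design effect by the coefficient of variation (CV) of cluster sizes; the DEFF formula, numbers and the resulting inflation are illustrative assumptions, not the paper's formulas.

    # Design effect with unequal cluster sizes (simple CV-based approximation):
    # DEFF = 1 + ((CV^2 + 1) * m_bar - 1) * ICC, compared with the equal-size case.
    def design_effect(m_bar, icc, cv=0.0):
        return 1 + ((cv**2 + 1) * m_bar - 1) * icc

    m_bar, icc, cv = 20, 0.05, 0.5   # mean cluster size, ICC, CV of cluster sizes (hypothetical)

    deff_equal = design_effect(m_bar, icc)
    deff_unequal = design_effect(m_bar, icc, cv)
    inflation = deff_unequal / deff_equal

    print(f"DEFF equal = {deff_equal:.3f}, DEFF unequal = {deff_unequal:.3f}")
    print(f"Roughly {100 * (inflation - 1):.0f}% more clusters to offset cluster-size variation")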

  2. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
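
    A minimal sketch of where such a correction factor enters a routine four-point probe calculation: the ideal infinite-sheet formula is multiplied by a geometry-dependent factor f to account for finite size, thickness and probe placement. The voltage, current, thickness and the value of f below are hypothetical; in practice f would come from tables or analytical expressions such as those provided in the paper.

    # Thin-film sheet resistance from a four-point probe measurement with a
    # combined correction factor f (all numeric values hypothetical).
    import math

    def sheet_resistance(voltage, current, correction=1.0):
        # Ideal infinite-sheet formula (pi/ln 2) * V/I, scaled by the correction factor.
        return (math.pi / math.log(2)) * (voltage / current) * correction

    V, I = 2.3e-3, 1.0e-3        # measured voltage (V) and sourced current (A)
    t = 200e-9                   # film thickness (m)
    f = 0.92                     # hypothetical combined correction factor

    R_s = sheet_resistance(V, I, f)      # ohms per square
    rho = R_s * t                        # resistivity (ohm*m)
    print(f"R_s = {R_s:.2f} ohm/sq, rho = {rho:.3e} ohm*m")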

  3. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  4. Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites

    NASA Astrophysics Data System (ADS)

    Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali

    2007-06-01

    Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten and aluminum powders show significant differences depending on the size of the metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases the density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation because of the complexity of the microstructure, the relatively large deformations, and their ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes, even at higher porosity, in comparison with samples with larger grain sizes and higher density.

  5. Calculation for tensile strength and fracture toughness of granite with three kinds of grain sizes using three-point-bending test

    PubMed Central

    Yu, Miao; Wei, Chenhui; Niu, Leilei; Li, Shaohua; Yu, Yongjun

    2018-01-01

    Tensile strength and fracture toughness, important parameters of rock for engineering applications, are difficult to measure. Thus, this paper selected three kinds of granite samples (grain sizes = 1.01 mm, 2.12 mm and 3 mm), used combined physical experiments and numerical simulations (RFPA-DIP version) to conduct three-point-bending (3-p-b) tests with different notches, and introduced an acoustic emission monitoring system to analyze the fracture mechanism around the notch tips. To study the effects of grain size on the tensile strength and toughness of rock samples, a modified fracture model was established linking the fictitious crack to the grain size, so that the microstructure of the specimens and fictitious crack growth can be considered together. A fractal method was introduced to represent the microstructure of the three kinds of granite and used to determine the length of the fictitious crack. This provides a simple and novel method to calculate the tensile strength and fracture toughness directly. Finally, the theoretical model was verified by comparison with the numerical experiments through calculation of the nominal strength σn and maximum loads Pmax. PMID:29596422
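
    For orientation only, a minimal sketch of the textbook nominal (flexural) strength of a three-point-bend beam from its maximum load; the specimen dimensions and load are hypothetical, and the paper's modified model linking the fictitious crack length to grain size is not reproduced here.

    # Nominal flexural strength of an unnotched 3-p-b beam: sigma_n = 3*P*S / (2*b*h^2).
    def nominal_strength(p_max, span, width, height):
        """Flexural strength (Pa) from maximum load and beam geometry."""
        return 3 * p_max * span / (2 * width * height**2)

    P_max = 1.8e3                  # maximum load (N), hypothetical
    S, b, h = 0.12, 0.03, 0.03     # span, width, height (m), hypothetical

    print(f"sigma_n = {nominal_strength(P_max, S, b, h) / 1e6:.2f} MPa")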

  6. Calculation for tensile strength and fracture toughness of granite with three kinds of grain sizes using three-point-bending test.

    PubMed

    Yu, Miao; Wei, Chenhui; Niu, Leilei; Li, Shaohua; Yu, Yongjun

    2018-01-01

    Tensile strength and fracture toughness, important parameters of rock for engineering applications, are difficult to measure. Thus, this paper selected three kinds of granite samples (grain sizes = 1.01 mm, 2.12 mm and 3 mm), used combined physical experiments and numerical simulations (RFPA-DIP version) to conduct three-point-bending (3-p-b) tests with different notches, and introduced an acoustic emission monitoring system to analyze the fracture mechanism around the notch tips. To study the effects of grain size on the tensile strength and toughness of rock samples, a modified fracture model was established linking the fictitious crack to the grain size, so that the microstructure of the specimens and fictitious crack growth can be considered together. A fractal method was introduced to represent the microstructure of the three kinds of granite and used to determine the length of the fictitious crack. This provides a simple and novel method to calculate the tensile strength and fracture toughness directly. Finally, the theoretical model was verified by comparison with the numerical experiments through calculation of the nominal strength σn and maximum loads Pmax.

  7. Rock magnetic properties estimated from coercivity - blocking temperature diagram: application to recent volcanic rocks

    NASA Astrophysics Data System (ADS)

    Terada, T.; Sato, M.; Mochizuki, N.; Yamamoto, Y.; Tsunakawa, H.

    2013-12-01

    Magnetic properties of ferromagnetic minerals generally depend on their chemical composition, crystal structure, size, and shape. In the usual paleomagnetic study, we use a bulk sample, which is an assemblage of magnetic minerals showing broad distributions of various magnetic properties. Microscopic and Curie-point observations of the bulk sample enable us to identify the constituent magnetic minerals, while other measurements, for example stepwise thermal and/or alternating field demagnetization (ThD, AFD), make it possible to estimate the size, shape and domain state of the constituent magnetic grains. However, estimation based on stepwise demagnetization has the limitation that magnetic grains with the same coercivity Hc (or blocking temperature Tb) are identified as a single population even though they could have different sizes and shapes. Dunlop and West (1969) carried out mapping of grain size and coercivity (Hc) using pTRM. However, their mapping method is considered to be applicable mainly to natural rocks containing only SD grains, since the grain sizes are estimated on the basis of single domain theory (Neel, 1949). In addition, it is impossible to check for thermal alteration due to laboratory heating in their experiment. In the present study we propose a new experimental method that makes it possible to estimate the distribution of size and shape of magnetic minerals in a bulk sample. The present method is composed of simple procedures: (1) imparting ARM to a bulk sample, (2) ThD at a certain temperature, (3) stepwise AFD of the remaining ARM, (4) repeating steps (1)-(3) with ThD at increasing temperatures up to the Curie temperature of the sample. After completion of the whole procedure, ARM spectra are calculated and mapped on the Hc-Tb plane (hereafter called the Hc-Tb diagram). We analyze the Hc-Tb diagrams as follows: (1) For uniaxial SD populations, a theoretical curve for a given grain size (or shape anisotropy) is drawn on the Hc-Tb diagram. The curves are calculated using single domain theory, since the coercivity and blocking temperature of uniaxial SD grains can be expressed as a function of size and shape. (2) The boundary between SD and MD grains is calculated and drawn on the Hc-Tb diagram according to the theory of Butler and Banerjee (1975). (3) The theoretical predictions from (1) and (2) are compared with the obtained ARM spectra to estimate the quantitative distribution of size, shape and domain state of magnetic grains in the sample. This mapping method has been applied to three samples: a Hawaiian basaltic lava extruded in 1995, the Ueno basaltic lava formed during the Matuyama chron, and an Oshima basaltic lava extruded in 1986. We will discuss the physical states of the magnetic grains (size, shape, domain state, etc.) and their possible origins.

  8. Ballistic and Diffusive Thermal Conductivity of Graphene

    NASA Astrophysics Data System (ADS)

    Saito, Riichiro; Masashi, Mizuno; Dresselhaus, Mildred S.

    2018-02-01

    This paper is a contribution to the Physical Review Applied collection in memory of Mildred S. Dresselhaus. The phonon-related thermal conductivity of graphene is calculated as a function of temperature and sample size, with the crossover between ballistic and diffusive thermal conductivity occurring at around 100 K. The diffusive thermal conductivity is evaluated by calculating the phonon mean free path for each phonon mode, taking into account phonon anharmonicity and phonon scattering by 13C isotopes. We show that phonon-phonon scattering of the out-of-plane acoustic phonon by the anharmonic potential is essential for the largest thermal conductivity. Using the calculated results, we can design the optimum sample size that gives the largest thermal conductivity at a given temperature for thermal conduction device applications.

  9. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
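
    A minimal sketch of the equivalent two-sample idea described in this abstract, as we read it: the two groups' log-odds differ by the slope times twice the SD of the covariate, centred so that the overall expected event probability matches the design value, after which a standard two-proportion power formula is applied. The slope, SD, overall response probability and sample size are hypothetical, and this is not the authors' exact code.

    # Approximate power for logistic regression via an equivalent two-sample problem.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def power_equivalent_two_sample(beta1, sd_x, p_overall, n_total, alpha=0.05):
        delta = 2.0 * beta1 * sd_x                       # log-odds difference between the two groups
        expit = lambda z: 1.0 / (1.0 + np.exp(-z))
        # Centre c so that the mean of the two group probabilities equals p_overall.
        c = brentq(lambda c: 0.5 * (expit(c - delta / 2) + expit(c + delta / 2)) - p_overall,
                   -20, 20)
        p1, p2 = expit(c - delta / 2), expit(c + delta / 2)
        n = n_total / 2
        se = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
        return norm.cdf(abs(p1 - p2) / se - norm.ppf(1 - alpha / 2))

    print(power_equivalent_two_sample(beta1=0.4, sd_x=1.0, p_overall=0.3, n_total=300))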

  10. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
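
    A minimal sketch of one plausible reading of the quantity described above, computed from a dissimilarity matrix: a pseudo variance from the sum of squared interpoint dissimilarities, divided by n to give a standard-error-like measure. The authors provide R functions for the actual method; the Bray-Curtis example data and the exact denominators here are assumptions for illustration.

    # Pseudo multivariate SE (MultSE-style) from pairwise dissimilarities.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    abundances = rng.poisson(5, size=(20, 12)).astype(float)   # 20 samples x 12 taxa (hypothetical)

    d = pdist(abundances, metric="braycurtis")   # condensed vector of pairwise dissimilarities
    n = abundances.shape[0]

    V = np.sum(d**2) / (n * (n - 1))   # pseudo variance based on sums of squared dissimilarities
    mult_se = np.sqrt(V / n)
    print(f"Pseudo multivariate SE = {mult_se:.4f} for n = {n}")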

  11. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters remains an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. To date, a guideline based on a p-value of less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results obtained from various subsamples and statistical analyses and compared the results with the parameters of three different populations. Eight types of statistical analysis were considered, with eight subsamples for each analysis. The statistics were found to be consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  12. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  13. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.

  14. Sample size for estimating mean and coefficient of variation in species of crotalarias.

    PubMed

    Toebe, Marcos; Machado, Letícia N; Tartaglia, Francieli L; Carvalho, Juliana O DE; Bandeira, Cirineu T; Cargnelutti Filho, Alberto

    2018-04-16

    The objective of this study was to determine the sample size necessary to estimate the mean and coefficient of variation in four species of crotalarias (C. juncea, C. spectabilis, C. breviflora and C. ochroleuca). An experiment was carried out for each species during the 2014/15 season. At harvest, 1,000 pods of each species were randomly collected. In each pod the following were measured: mass of the pod with and without seeds, length, width and height of the pod, number and mass of seeds per pod, and hundred-seed mass. Measures of central tendency, variability and distribution were calculated, and normality was verified. The sample size necessary to estimate the mean and coefficient of variation with amplitudes of the 95% confidence interval (ACI95%) of 2%, 4%, ..., 20% was determined by resampling with replacement. The sample size varies among species and characters, and a larger sample size is necessary to estimate the mean than to estimate the coefficient of variation.
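
    A minimal sketch of the resampling-with-replacement approach described above: for each candidate sample size, bootstrap the mean and record the amplitude of the 95% confidence interval as a percentage of the mean, then take the smallest size that meets a target amplitude. The simulated pod-mass data and the 10% target are hypothetical stand-ins, not the crotalaria measurements.

    # Sample size by bootstrap: smallest n whose 95% CI amplitude meets the target.
    import numpy as np

    rng = np.random.default_rng(42)
    pod_mass = rng.gamma(shape=9.0, scale=0.05, size=1000)   # hypothetical stand-in for 1,000 pods

    def ci95_amplitude_pct(data, n, n_boot=2000):
        means = np.array([rng.choice(data, size=n, replace=True).mean() for _ in range(n_boot)])
        lo, hi = np.percentile(means, [2.5, 97.5])
        return 100 * (hi - lo) / data.mean()

    target = 10.0    # target CI amplitude: 10% of the mean
    for n in range(10, 501, 10):
        if ci95_amplitude_pct(pod_mass, n) <= target:
            print(f"Approximate sample size for ACI95% <= {target}%: n = {n}")
            break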

  15. Improving the quality of biomarker discovery research: the right samples and enough of them.

    PubMed

    Pepe, Margaret S; Li, Christopher I; Feng, Ziding

    2015-06-01

    Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.

  16. Catching ghosts with a coarse net: use and abuse of spatial sampling data in detecting synchronization

    PubMed Central

    2017-01-01

    Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589

  17. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis and to increase the precision, efficiency and economy of the snail survey. A quadrat experimental field of 50 m × 50 m was selected in the Chayegang marshland near Henghu Farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratification variable is an efficient approach, with lower cost and higher precision, for the snail survey.

  18. Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites

    NASA Astrophysics Data System (ADS)

    Herbold, E. B.; Cai, J.; Benson, D. J.; Nesterenko, V. F.

    2007-12-01

    Recent investigations of the dynamic compressive strength of cold isostatically pressed composites of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) powders show significant differences depending on the size of metallic particles. The addition of W increases the density and changes the overall strength of the sample depending on the size of W particles. To investigate relatively large deformations, multi-material Eulerian and arbitrary Lagrangian-Eulerian methods, which have the ability to efficiently handle the formation of free surfaces, were used. The calculations indicate that the increased sample strength with fine metallic particles is due to the dynamic formation of force chains. This phenomenon occurs for samples with a higher porosity of the PTFE matrix compared to samples with larger particle size of W and a higher density PTFE matrix.

  19. Addressing the "Replication Crisis": Using Original Studies to Design Replication Studies with Appropriate Statistical Power.

    PubMed

    Anderson, Samantha F; Maxwell, Scott E

    2017-01-01

    Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
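
    A minimal sketch of the gap between intended and actual power described above: a replication is sized for 80% power using the original study's observed effect size, and the power actually achieved under the true effect is then computed. The true effect, original sample size and the normal-approximation formulas are hypothetical simplifications, and publication bias is not modelled.

    # Intended vs actual replication power when planning on an observed effect size.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    true_d, n_orig, alpha, target_power = 0.30, 25, 0.05, 0.80
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(target_power)

    def power_two_sample(d, n_per_group):
        # Normal-approximation power for a two-sided two-sample comparison.
        return norm.cdf(abs(d) * np.sqrt(n_per_group / 2) - z_a)

    actual_powers = []
    for _ in range(10_000):
        # Observed effect size in the original study (approximate sampling distribution).
        d_hat = rng.normal(true_d, np.sqrt(2 / n_orig))
        if d_hat <= 0:
            continue   # simplification: non-positive estimates are unlikely to prompt a replication
        n_plan = 2 * ((z_a + z_b) / d_hat) ** 2          # per-group n planned from d_hat
        actual_powers.append(power_two_sample(true_d, n_plan))

    print(f"Intended power: {target_power:.2f}; mean actual power: {np.mean(actual_powers):.2f}")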

  20. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
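
    For context, a minimal sketch of the parallel-CRT side of this comparison only; the stepped wedge design effects presented in the paper are not reproduced here. The sketch inflates an individually randomised sample size by the familiar design effect 1 + (m - 1) x ICC, and inverts that relation to obtain the cluster size required for a fixed number of clusters; the numeric inputs are hypothetical.

    # Parallel-CRT design effect and required cluster size for a given number of clusters.
    import math

    def crt_total_n(n_individual, cluster_size, icc):
        return n_individual * (1 + (cluster_size - 1) * icc)

    def required_cluster_size(n_individual, n_clusters, icc):
        # Solve k*m = n_i * (1 + (m-1)*icc) for m; feasible only if k > n_i*icc.
        denom = n_clusters - n_individual * icc
        if denom <= 0:
            raise ValueError("Too few clusters for this ICC: no finite cluster size achieves the power.")
        return math.ceil(n_individual * (1 - icc) / denom)

    n_ind, icc = 600, 0.02
    print(crt_total_n(n_ind, cluster_size=30, icc=icc))          # total N needed with 30 per cluster
    print(required_cluster_size(n_ind, n_clusters=24, icc=icc))  # cluster size needed with 24 clusters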

  1. System-size convergence of point defect properties: The case of the silicon vacancy

    NASA Astrophysics Data System (ADS)

    Corsetti, Fabiano; Mostofi, Arash A.

    2011-07-01

    We present a comprehensive study of the vacancy in bulk silicon in all its charge states from 2+ to 2-, using a supercell approach within plane-wave density-functional theory, and systematically quantify the various contributions to the well-known finite-size errors associated with calculating formation energies and stable charge state transition levels of isolated defects with periodic boundary conditions. Furthermore, we find that transition levels converge faster with respect to supercell size when only the Γ-point is sampled in the Brillouin zone, as opposed to dense k-point sampling. This arises from the fact that the defect level at the Γ-point quickly converges to a fixed value that correctly describes the bonding at the defect center. Our calculated transition levels with 1000-atom supercells and Γ-point-only sampling are in good agreement with available experimental results. We also demonstrate two simple and accurate approaches for calculating the valence band offsets that are required for computing formation energies of charged defects, one based on a potential averaging scheme and the other using maximally-localized Wannier functions (MLWFs). Finally, we show that MLWFs provide a clear description of the nature of the electronic bonding at the defect center that verifies the canonical Watkins model.

  2. Sample Size Calculations for Micro-randomized Trials in mHealth

    PubMed Central

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831

  3. Non-Born-Oppenheimer self-consistent field calculations with cubic scaling

    NASA Astrophysics Data System (ADS)

    Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés

    2012-05-01

    An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) and a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations conducted on small molecules showed that the energy and geometry errors introduced by the use of the ADFT and LHP approximations are small and comparable to those obtained by the use of electronic ADFT. In addition, sample calculations performed on (HF)n chains showed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium-sized molecules the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects in large systems that would otherwise be impractical.

  4. Using simulation to aid trial design: Ring-vaccination trials.

    PubMed

    Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc

    2017-03-01

    The 2014-6 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.

  5. Lunar soils grain size catalog

    NASA Technical Reports Server (NTRS)

    Graf, John C.

    1993-01-01

    This catalog compiles every available grain size distribution for Apollo surface soils, trench samples, cores, and Luna 24 soils. Original laboratory data are tabled, and cumulative weight distribution curves and histograms are plotted. Standard statistical parameters are calculated using the method of moments. Photos and location comments describe the sample environment and geological setting. This catalog can help researchers describe the geotechnical conditions and site variability of the lunar surface essential to the design of a lunar base.
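
    A minimal sketch of the generic method-of-moments statistics for a grain-size distribution in phi units, as typically computed from weight-percent frequencies and class midpoints; the bin data below are hypothetical and are not taken from the catalog.

    # Method-of-moments grain-size statistics (mean, sorting, skewness, kurtosis).
    import numpy as np

    phi_midpoints = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])   # class midpoints (phi)
    weight_pct    = np.array([5.0, 15.0, 30.0, 28.0, 15.0, 7.0])

    f = weight_pct / weight_pct.sum()
    mean_phi = np.sum(f * phi_midpoints)
    sorting  = np.sqrt(np.sum(f * (phi_midpoints - mean_phi) ** 2))
    skewness = np.sum(f * (phi_midpoints - mean_phi) ** 3) / sorting**3
    kurtosis = np.sum(f * (phi_midpoints - mean_phi) ** 4) / sorting**4

    print(f"mean = {mean_phi:.2f} phi, sorting = {sorting:.2f} phi, "
          f"skewness = {skewness:.2f}, kurtosis = {kurtosis:.2f}")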

  6. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.

  7. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.

  8. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
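
    A minimal sketch of a Wald-type interval that treats the AUC as a single proportion, using only the estimated AUC and the total sample size, with an optional continuity correction of 1/(2n); this follows our reading of the abstract, and the paper's exact modification may differ in detail.

    # Wald-style confidence interval for the AUC treated as a proportion.
    import math
    from scipy.stats import norm

    def wald_auc_ci(auc, n, alpha=0.05, continuity=False):
        z = norm.ppf(1 - alpha / 2)
        half = z * math.sqrt(auc * (1 - auc) / n)
        if continuity:
            half += 1 / (2 * n)                      # continuity correction for small samples
        return max(0.0, auc - half), min(1.0, auc + half)

    print(wald_auc_ci(0.85, n=60))                      # larger sample
    print(wald_auc_ci(0.85, n=30, continuity=True))     # small sample: use the continuity correction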

  9. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.

  10. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level, alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha(*) = alpha+epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, demonstrating the statistical advantages of implementing the adjustment to FET (using alpha(*) instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd.
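
    A minimal sketch of how the size of Fisher's exact test can be computed numerically for a balanced two-sample binomial trial: for each value of the common success probability, sum the null probabilities of all outcomes whose two-sided FET p-value falls at or below alpha, and take the maximum over a grid. The paper's algorithm covers n up to 125 per group; a small n is used here to keep the enumeration quick, and the implementation is illustrative rather than the authors'.

    # Actual size of Fisher's exact test at nominal alpha, n per group.
    import numpy as np
    from scipy.stats import binom, fisher_exact

    def fet_size(n, alpha=0.05, p_grid=np.linspace(0.01, 0.99, 99)):
        # Precompute which outcomes (x successes vs y successes) are rejected.
        reject = np.zeros((n + 1, n + 1))
        for x in range(n + 1):
            for y in range(n + 1):
                _, p_val = fisher_exact([[x, n - x], [y, n - y]], alternative="two-sided")
                reject[x, y] = 1.0 if p_val <= alpha else 0.0
        sizes = []
        for p in p_grid:
            pmf = binom.pmf(np.arange(n + 1), n, p)
            sizes.append(float(pmf @ reject @ pmf))   # rejection probability under the null at this p
        return max(sizes)

    print(f"Actual size of FET at nominal alpha = 0.05, n = 20 per group: {fet_size(20):.4f}")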

  11. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that, at a low defect rate, the confidence to detect out-of-specification units decreases and must be compensated for by an increase in sample size to enhance the confidence of the estimate. Based on the level of knowledge acquired during PPQ and the additional knowledge required to understand the process, the sample size for CPV was calculated using Bayesian statistics to achieve a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
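
    A minimal sketch of one common form of the success run relation, n = ln(1 - C) / ln(R), which with 95% confidence reproduces the sample sizes quoted above for the three reliability levels; the paper's full Bayesian derivation and its CPV adjustment are not reproduced here.

    # Success run sample size: zero failures, reliability R, confidence C.
    import math

    def success_run_n(reliability, confidence=0.95):
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    for r in (0.99, 0.95, 0.90):   # high-, medium-, low-risk reliability levels
        print(f"reliability {r:.2f} -> n = {success_run_n(r)}")   # prints 299, 59, 29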

  12. [An investigation of the statistical power of the effect size in randomized controlled trials for the treatment of patients with type 2 diabetes mellitus using Chinese medicine].

    PubMed

    Ma, Li-Xin; Liu, Jian-Ping

    2012-01-01

    To investigate whether the statistical power for the effect size was based on an adequate sample size in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. A limitation of an intervention course ≥ 3 months was set in order to identify information on outcome assessment and sample size. Data collection forms were designed according to the checklist of the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcomes were reported with a sample size > 150 per trial in 9% and 12% of the RCTs, respectively. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36% of the RCTs, respectively, with a sample size > 150. For HbA1c, only 36% of the RCTs had more than 80% power; for FPG, only 27% of the studies had more than 80% power. The sample sizes used for statistical analysis were distressingly low, and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.

  13. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
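
    To make the point concrete, a small synthetic sketch (entirely made-up data) showing how a trivial standardized effect can still reach conventional statistical significance once the sample is large enough:

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        a = rng.normal(0.00, 1.0, 50_000)   # "control" group
        b = rng.normal(0.03, 1.0, 50_000)   # "treatment" group with a trivial shift

        t_stat, p = ttest_ind(a, b)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
        cohens_d = (b.mean() - a.mean()) / pooled_sd
        print(f"p = {p:.2g}, Cohen's d = {cohens_d:.3f}")   # p is likely tiny, d remains trivial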

  14. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands, which can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
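
    The quoted 13% and 70% increases can be reproduced approximately from the normal-approximation ratio of required sample sizes under a Bonferroni correction, n2/n1 = ((z_{1-a/(2*m2)} + z_power) / (z_{1-a/(2*m1)} + z_power))^2; the paper's exact framework may differ slightly, so treat this as a sketch:

        from scipy.stats import norm

        def relative_n(m1, m2, alpha=0.05, power=0.80):
            """Ratio of sample sizes needed to keep the same power when the number of
            Bonferroni-corrected two-sided tests grows from m1 to m2 (normal approximation)."""
            z_beta = norm.ppf(power)
            z1 = norm.ppf(1.0 - alpha / (2.0 * m1))
            z2 = norm.ppf(1.0 - alpha / (2.0 * m2))
            return ((z2 + z_beta) / (z1 + z_beta)) ** 2

        print(round(relative_n(1e6, 1e7), 2))   # ~1.13: ~13% more samples for ten million vs one million tests
        print(round(relative_n(1, 10), 2))      # ~1.70: ~70% more samples for 10 tests vs a single test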

  15. Numerical calculations of spectral turnover and synchrotron self-absorption in CSS and GPS radio sources

    NASA Astrophysics Data System (ADS)

    Jeyakumar, S.

    2016-06-01

    The dependence of the turnover frequency on the linear size is presented for a sample of Giga-hertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.

  16. TableSim--A program for analysis of small-sample categorical data.

    Treesearch

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.

  17. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g., an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
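
    The quoted sample sizes of 24 (±15% allowable error) and 37 (±12%) follow from an iterative one-sample, two-tailed t-based calculation with a COV of 25%; a sketch under those assumptions, not the authors' code:

        from scipy.stats import t

        def n_for_ed50(cov=0.25, allowable_error=0.15, alpha=0.05, power=0.80):
            """Smallest n for which a one-sample, two-tailed t-based estimate of the mean ED50
            achieves the stated power for an allowable error of +/- allowable_error,
            given a coefficient of variation of the ED50."""
            for n in range(3, 1000):
                df = n - 1
                required = ((t.ppf(1.0 - alpha / 2.0, df) + t.ppf(power, df))
                            * cov / allowable_error) ** 2
                if required <= n:
                    return n
            raise ValueError("no n below 1000 satisfies the requirement")

        print(n_for_ed50())                          # 24, for a +/-15% allowable error
        print(n_for_ed50(allowable_error=0.12))      # 37, for a +/-12% allowable error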

  18. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  19. A Quantitative Test of the Applicability of Independent Scattering to High Albedo Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1993-01-01

    To test the hypothesis that the independent scattering calculation widely used to model radiative transfer in atmospheres and clouds will give a useful approximation to the intensity and linear polarization of visible light scattered from an optically thick surface of transparent particles, laboratory measurements are compared to the independent scattering calculation for a surface of spherical particles with known optical constants and size distribution. Because the shape, size distribution, and optical constants of the particles are known, the independent scattering calculation is completely determined and the only remaining unknown is the net effect of the close packing of the particles in the laboratory sample surface...

  20. Single and simultaneous binary mergers in Wright-Fisher genealogies.

    PubMed

    Melfi, Andrew; Viswanath, Divakar

    2018-05-01

    The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ϵ), ϵ>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. Single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ϵ), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.

    PubMed

    Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather

    2011-06-09

    Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size for use in a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increase by, from baseline rates of 90%, 70%, 50% and 30% (in a hypothetical study), before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rates presented in the questionnaire, participants wanted recruitment rates to increase by 6.9% to 28.9% before they would consider using the intervention. This paper has shown that, in situations where effect size estimations cannot be obtained from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results of the survey were successfully used in sample size calculations for a PhD research study protocol.

  2. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  3. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    PubMed

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. To describe the sample size and number of outcome measures of veterinary RCTs either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. Number of outcome measures, number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and groups compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, justify sample sizes and the reporting of multiple outcome measures was a common feature in all of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.

  4. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.

  5. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components of urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial for implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.

  6. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...

  7. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...

  8. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...

  9. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES THE ADMINISTRATION ON CHILDREN, YOUTH AND FAMILIES, FOSTER CARE MAINTENANCE PAYMENTS, ADOPTION ASSISTANCE, AND CHILD AND FAMILY SERVICES... applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...
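
    The regulation text in these appendix records is truncated, but the standard finite population correction they refer to can be sketched generically as follows (an illustration of the usual FPC adjustment, not the NYTD-specific formula):

        import math

        def fpc_adjusted_sample_size(n0, population_size):
            """Shrink an infinite-population sample size n0 when sampling without
            replacement from a finite population of the given size."""
            return math.ceil(n0 / (1.0 + (n0 - 1.0) / population_size))

        # e.g. an infinite-population requirement of 385 youth drawn from a cohort of 2,000
        print(fpc_adjusted_sample_size(385, 2000))   # 323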

  10. Evaluation of Sampling Recommendations From the Influenza Virologic Surveillance Right Size Roadmap for Idaho.

    PubMed

    Rosenthal, Mariana; Anderson, Katey; Tengelsen, Leslie; Carter, Kris; Hahn, Christine; Ball, Christopher

    2017-08-24

    The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. The aim of this study was to compare Roadmap sampling recommendations with Idaho's influenza virologic surveillance to determine implementation feasibility. We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho's influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients' tested specimens to census estimates by age, sex, and health district residence. Among outpatients surveilled, Idaho's mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. ©Mariana Rosenthal, Katey Anderson, Leslie Tengelsen, Kris Carter, Christine Hahn, Christopher Ball. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 24.08.2017.

  11. Evaluation of Sampling Recommendations From the Influenza Virologic Surveillance Right Size Roadmap for Idaho

    PubMed Central

    2017-01-01

    Background The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. Objective The aim of this study was to compare Roadmap sampling recommendations with Idaho’s influenza virologic surveillance to determine implementation feasibility. Methods We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho’s influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients’ tested specimens to census estimates by age, sex, and health district residence. Results Among outpatients surveilled, Idaho’s mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Conclusions Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. PMID:28838883
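
    The roadmap's figure of 1496 influenza-positive specimens per week for detecting a rare or novel virus at 0.2% prevalence is close to what a standard detection-probability calculation gives; a sketch assuming 95% detection confidence (the roadmap calculator may use slightly different conventions):

        import math

        def detection_sample_size(prevalence, confidence=0.95):
            """Smallest n such that at least one positive is seen with the given
            confidence when the true prevalence is `prevalence`."""
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

        print(detection_sample_size(0.002))   # 1497, close to the 1496 quoted above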

  12. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    PubMed

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this method is important for health workforce planners to know if they want to apply it to target groups that are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Because of the form of the CI equations, precision continued to increase beyond this point, but the gain from the same number of additional GPs became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly with the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon the circumstances, such as the target group and the budget available.
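
    The trade-off between the number of GPs and the number of measurements per GP can be illustrated with a simple two-level variance decomposition; the variance components below are hypothetical, not the study's fitted values:

        import math

        def se_weekly_hours(n_gps, n_measurements, var_between=25.0, var_within=100.0):
            """Standard error of the estimated mean weekly hours when within-GP measurement
            noise averages out over repeated observations (hypothetical variance components)."""
            return math.sqrt(var_between / n_gps + var_within / (n_gps * n_measurements))

        # e.g. one response per 3-h slot over a week (56 slots) vs one response per hour (168)
        for n_gps, m in [(300, 56), (100, 168)]:
            print(n_gps, m, round(se_weekly_hours(n_gps, m), 3))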

  13. The role of intramolecular nonbonded interaction and angle sampling in single-step free energy perturbation

    NASA Astrophysics Data System (ADS)

    Chiang, Ying-Chih; Pang, Yui Tik; Wang, Yi

    2016-12-01

    Single-step free energy perturbation (sFEP) has often been proposed as an efficient tool for a quick free energy scan due to its straightforward protocol and the ability to recycle an existing molecular dynamics trajectory for free energy calculations. Although sFEP is expected to fail when the sampling of a system is inefficient, it is often expected to hold for an alchemical transformation between ligands with a moderate difference in their sizes, e.g., transforming a benzene into an ethylbenzene. Yet, exceptions were observed in calculations for anisole and methylaniline, which have physical sizes similar to that of ethylbenzene. In this study, we show that such exceptions arise from inefficient sampling of an unexpectedly rigid degree of freedom, namely, the bond angle θ. The distributions of θ differ dramatically between the two end states of a sFEP calculation, i.e., the conformation of the ligand changes significantly during the alchemical transformation process. Our investigation also reveals the interrelation between the ligand conformation and the intramolecular nonbonded interactions. This knowledge suggests a best combination of the ghost ligand potential and the dual topology setting, which improves the accuracy of a single-reference sFEP calculation by bringing its error down from around 5 kBT to about 1 kBT.

  14. Interventions to Improve Medication Adherence in Hypertensive Patients: Systematic Review and Meta-analysis.

    PubMed

    Conn, Vicki S; Ruppar, Todd M; Chase, Jo-Ana D; Enriquez, Maithe; Cooper, Pamela S

    2015-12-01

    This systematic review applied meta-analytic procedures to synthesize medication adherence interventions that focus on adults with hypertension. Comprehensive searching located trials with medication adherence behavior outcomes. Study sample, design, intervention characteristics, and outcomes were coded. Random-effects models were used in calculating standardized mean difference effect sizes. Moderator analyses were conducted using meta-analytic analogues of ANOVA and regression to explore associations between effect sizes and sample, design, and intervention characteristics. Effect sizes were calculated for 112 eligible treatment-vs.-control group outcome comparisons of 34,272 subjects. The overall standardized mean difference effect size between treatment and control subjects was 0.300. Exploratory moderator analyses revealed interventions were most effective among female, older, and moderate- or high-income participants. The most promising intervention components were those linking adherence behavior with habits, giving adherence feedback to patients, self-monitoring of blood pressure, using pill boxes and other special packaging, and motivational interviewing. The most effective interventions employed multiple components and were delivered over many days. Future research should strive for minimizing risks of bias common in this literature, especially avoiding self-report adherence measures.

  15. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    NASA Astrophysics Data System (ADS)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius r_eff were compared with the size-averaged single-scattering properties. We found that the particles of r_eff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  16. Field application of a multi-frequency acoustic instrument to monitor sediment for silt erosion study in Pelton turbine in Himalayan region, India

    NASA Astrophysics Data System (ADS)

    Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.

    2016-11-01

    High sediment loads passing through hydropower plants erode the hydraulic components, resulting in loss of efficiency, interruptions in power production, and downtime for repair/maintenance, especially in Himalayan regions. The size and concentration of sediment play a major role in silt erosion. The traditional process of collecting samples manually for laboratory analysis cannot meet the need for monitoring temporal variation in sediment properties. In this study, a multi-frequency acoustic instrument was installed at the desilting chamber to monitor the size and concentration of sediment entering the turbine. The sediment size and concentration entering the turbine were also measured in manual samples collected twice daily. The manually collected samples were analysed in the laboratory with a laser diffraction instrument for size and concentration, and additionally by drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which was combined with the drying-method results to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to the drying and filtering methods. However, at the current stage of development, no good match was found between the mean grain size from the acoustic method and that from the laser diffraction method in this first field application. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the results. As the instrument is able to capture the concentration, and in the future most likely a more accurate mean grain size of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.

  17. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, so as to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  18. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to about several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano, parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software packages (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50-60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods • Obtained results were subject to statistical treatment • Results obtained with Bragg-Brentano and parallel beam geometries were compared • Influence of conditions of XRD pattern acquisition on results was estimated • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.

  19. Size-exclusion chromatography of perfluorosulfonated ionomers.

    PubMed

    Mourey, T H; Slater, L A; Galipo, R C; Koestner, R J

    2011-08-26

    A size-exclusion chromatography (SEC) method in N,N-dimethylformamide containing 0.1 M LiNO(3) is shown to be suitable for the determination of molar mass distributions of three classes of perfluorosulfonated ionomers, including Nafion(®). Autoclaving sample preparation is optimized to prepare molecular solutions free of aggregates, and a solvent exchange method concentrates the autoclaved samples to enable the use of molar-mass-sensitive detection. Calibration curves obtained from light scattering and viscometry detection suggest minor variation in the specific refractive index increment across the molecular size distributions, which introduces inaccuracies in the calculation of local absolute molar masses and intrinsic viscosities. Conformation plots that combine apparent molar masses from light scattering detection with apparent intrinsic viscosities from viscometry detection partially compensate for the variations in refractive index increment. The conformation plots are consistent with compact polymer conformations, and they provide Mark-Houwink-Sakurada constants that can be used to calculate molar mass distributions without molar-mass-sensitive detection. Unperturbed dimensions and characteristic ratios calculated from viscosity-molar mass relationships indicate unusually free rotation of the perfluoroalkane backbones and may suggest limitations to applying two-parameter excluded volume theories for these ionomers. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
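
    The worked example above (lot of 400, 1% nonconforming, 99% confidence, acceptance number zero) can be checked with a short hypergeometric calculation; this is an independent illustration, not the HYPERSAMP spreadsheet logic:

        from scipy.stats import hypergeom

        def min_sample_size_c0(lot_size, n_defective, consumer_risk):
            """Smallest n such that the probability of drawing zero nonconforming units
            from the lot is at most consumer_risk (accept-on-zero plan)."""
            for n in range(1, lot_size + 1):
                if hypergeom.pmf(0, lot_size, n_defective, n) <= consumer_risk:
                    return n
            return lot_size

        print(min_sample_size_c0(400, 4, 0.01))   # 273, matching the example above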

  1. Sample size calculation for studies with grouped survival data.

    PubMed

    Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros

    2018-06-10

    Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.

  2. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.

  3. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
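
    A small simulation in the same spirit, running the Shapiro-Wilk and D'Agostino-Pearson tests at n = 30 on Gaussian and lognormal samples; the rejection rates will not reproduce the paper's exact figures, which depend on its particular simulated populations:

        import numpy as np
        from scipy.stats import shapiro, normaltest

        rng = np.random.default_rng(42)
        n, n_sim, alpha = 30, 1000, 0.05
        reject = {"shapiro_gauss": 0, "dagostino_gauss": 0,
                  "shapiro_lognorm": 0, "dagostino_lognorm": 0}

        for _ in range(n_sim):
            gauss = rng.normal(size=n)
            lognorm = rng.lognormal(sigma=0.5, size=n)
            reject["shapiro_gauss"] += shapiro(gauss).pvalue < alpha
            reject["dagostino_gauss"] += normaltest(gauss).pvalue < alpha
            reject["shapiro_lognorm"] += shapiro(lognorm).pvalue < alpha
            reject["dagostino_lognorm"] += normaltest(lognorm).pvalue < alpha

        for key, count in reject.items():   # rejection rates: near alpha for the Gaussian
            print(key, count / n_sim)       # samples, higher for the lognormal samples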

  4. First Principles Study of Nanodiamond Optical and Electronic Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raty, J; Galli, G

    2004-10-21

    Nanometer-sized diamond has been found in meteorites, proto-planetary nebulae and interstellar dusts, as well as in residues of detonation and in diamond films. Remarkably, the size distribution of diamond nanoparticles appears to be peaked around 2-5 nm, and to be largely independent of preparation conditions. Using ab-initio calculations, we have shown that in this size range nanodiamond has a fullerene-like surface and, unlike silicon and germanium, exhibits very weak quantum confinement effects. We called these carbon nanoparticles bucky-diamonds: their atomic structure, predicted by simulations, is consistent with many experimental findings. In addition, we carried out calculations of the stability of nanodiamond which provided a unifying explanation of its size distribution in extra-terrestrial samples, and in ultra-crystalline diamond films.

  5. A New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is a persistent focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are compared with predictions from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by a box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.

  6. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Abstract Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off which excludes from consideration differentiation below the level of diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and in many cases attributes scores to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. The distance between any two populations is calculated using Euclidean summation of the non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet that allows this and the other tests of differentiation and rank studied in this paper to be rapidly applied has been programmed and is being made available. PMID:29780266
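
    A rough sketch of the scoring procedure described above (unpooled effect sizes gated by statistical significance, then Euclidean summation); the exact small-sample control via t-distributions used in the author's spreadsheet is simplified here to a Welch t-test gateway, so treat this as an approximation:

        import numpy as np
        from scipy.stats import ttest_ind

        def zeroed_effect_size(x, y, alpha=0.05):
            """Unpooled standardized difference between two samples, scored as zero when
            the Welch t-test is not significant at alpha."""
            _, p = ttest_ind(x, y, equal_var=False)
            if p >= alpha:
                return 0.0
            sd_unpooled = np.sqrt((np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0)
            return abs(np.mean(x) - np.mean(y)) / sd_unpooled

        def population_distance(pop_a, pop_b, alpha=0.05):
            """Euclidean summation of non-zeroed effect scores across shared variables.
            pop_a, pop_b: dicts mapping variable name -> 1-D array of measurements."""
            scores = [zeroed_effect_size(pop_a[v], pop_b[v], alpha) for v in pop_a]
            return float(np.sqrt(np.sum(np.square(scores))))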

  7. Micrometer-scale particle sizing by laser diffraction: critical impact of the imaginary component of refractive index.

    PubMed

    Beekman, Alice; Shan, Daxian; Ali, Alana; Dai, Weiguo; Ward-Smith, Stephen; Goldenberg, Merrill

    2005-04-01

    This study evaluated the effect of the imaginary component of the refractive index on laser diffraction particle size data for pharmaceutical samples. Excipient particles 1-5 μm in diameter (irregular morphology) were measured by laser diffraction. Optical parameters were obtained and verified based on comparison of calculated vs. actual particle volume fraction. Inappropriate imaginary components of the refractive index can lead to inaccurate results, including false peaks in the size distribution. For laser diffraction measurements, obtaining appropriate or "effective" imaginary components of the refractive index was not always straightforward. When the recommended criteria, such as the concentration match and the fit of the scattering data, gave similar results for very different calculated size distributions, a supplemental technique, microscopy with image analysis, was used to decide between the alternatives. Use of effective optical parameters produced a good match between laser diffraction data and microscopy/image analysis data. The imaginary component of the refractive index can have a major impact on particle size results calculated from laser diffraction data. When performed properly, laser diffraction and microscopy with image analysis can yield comparable results.

  8. XRD analysis of undoped and Fe doped TiO{sub 2} nanoparticles by Williamson Hall method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharti, Bandna; Barman, P. B.; Kumar, Rajesh, E-mail: rajesh.kumar@juit.ac.in

    2015-08-28

    Undoped and Fe-doped titanium dioxide (TiO2) nanoparticles were synthesized by the sol-gel method at room temperature. The synthesized samples were annealed at 500°C. For structural analysis, the prepared samples were characterized by X-ray diffraction (XRD). The crystallite sizes of the TiO2 and Fe-doped TiO2 nanoparticles calculated by Scherrer's formula were found to be 15 nm and 11 nm, respectively. A reduction in the crystallite size of TiO2 with Fe doping was observed. The anatase phase of the Fe-doped TiO2 nanoparticles was also confirmed by X-ray diffraction. Lattice strain and crystallite size were also calculated using the Williamson-Hall method. The Williamson-Hall plot indicates the presence of compressive strain for TiO2 and tensile strain for Fe-TiO2 nanoparticles annealed at 500°C.
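
    For reference, the two size estimates mentioned above follow the standard Scherrer and Williamson-Hall relations; the sketch below uses hypothetical peak positions and widths, not the paper's measurements:

        import numpy as np

        WAVELENGTH = 0.15406   # Cu K-alpha wavelength in nm
        K = 0.9                # Scherrer shape factor (a common assumption)

        # hypothetical peak list: (2-theta in degrees, FWHM in degrees), instrument-corrected
        peaks = [(25.3, 0.55), (37.9, 0.60), (48.0, 0.65), (55.1, 0.70)]
        theta = np.radians([p[0] / 2.0 for p in peaks])
        beta = np.radians([p[1] for p in peaks])

        # Scherrer: D = K * lambda / (beta * cos(theta)), one estimate per peak
        print("Scherrer size per peak (nm):", np.round(K * WAVELENGTH / (beta * np.cos(theta)), 1))

        # Williamson-Hall: beta * cos(theta) = K * lambda / D + 4 * strain * sin(theta)
        slope, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
        print("W-H crystallite size (nm):", round(K * WAVELENGTH / intercept, 1))
        print("W-H lattice strain:", round(slope, 5))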

  9. An internal pilot study for a randomized trial aimed at evaluating the effectiveness of iron interventions in children with non-anemic iron deficiency: the OptEC trial.

    PubMed

    Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C

    2015-07-14

    The OptEC trial aims to evaluate the effectiveness of oral iron in young children with non-anemic iron deficiency (NAID). The initial sample size calculated for the OptEC trial ranged from 112 to 198 subjects. Given the uncertainty regarding the parameters used to calculate the sample size, an internal pilot study was conducted. The objectives of this internal pilot study were to obtain reliable estimates of parameters (standard deviation and design factor) to recalculate the sample size and to assess the adherence rate and reasons for non-adherence in children enrolled in the pilot study. The first 30 subjects enrolled into the OptEC trial constituted the internal pilot study. The primary outcome of the OptEC trial is the Early Learning Composite (ELC). For estimation of the SD of the ELC, descriptive statistics of the 4-month follow-up ELC scores were assessed within each intervention group. The observed SD within each group was then pooled to obtain an estimated SD (S2) of the ELC. The correlation (ρ) between the ELC measured at baseline and follow-up was assessed. Recalculation of the sample size was performed using the analysis of covariance (ANCOVA) method, which uses the design factor (1 − ρ²). The adherence rate was calculated using a parent-reported rate of missed doses of the study intervention. The new estimate of the SD of the ELC was found to be 17.40 (S2). The design factor was (1 − ρ²) = 0.21. Using a significance level of 5%, power of 80%, S2 = 17.40 and an effect estimate (Δ) ranging from 6 to 8 points, the new sample size based on the ANCOVA method ranged from 32 to 56 subjects (16 to 28 per group). Adherence ranged between 14% and 100%, with 44% of the children having an adherence rate ≥ 86%. Information generated from our internal pilot study was used to update the design of the full and definitive trial, including recalculation of sample size, determination of the adequacy of adherence, and application of strategies to improve adherence. ClinicalTrials.gov Identifier: NCT01481766 (date of registration: November 22, 2011).
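
    The recalculation above follows a standard ANCOVA-based formula, n per group ≈ 2(z₁₋α/₂ + z_power)² · SD² · (1 − ρ²) / Δ². The minimal sketch below is not the trial's own code, but with the reported pilot estimates it reproduces the 16-28 per group range for Δ = 6-8 points.

```python
# Sketch of the ANCOVA sample size recalculation using the reported pilot estimates.
import math
from scipy.stats import norm

def ancova_n_per_group(sd, design_factor, delta, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z ** 2) * (sd ** 2) * design_factor / delta ** 2)

sd, design_factor = 17.40, 0.21            # pilot estimates reported above
for delta in (6, 7, 8):
    print(delta, ancova_n_per_group(sd, design_factor, delta))   # 28, 21, 16 per group
```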

  10. Measurement of the bed material of gravel-bed rivers

    USGS Publications Warehouse

    Milhous, R.T.; ,

    2002-01-01

    The measurement of the physical properties of a gravel-bed river is important in the calculation of sediment transport and physical habitat values for aquatic animals. These properties are not always easy to measure. One recent report on flushing of fines from the Klamath River did not contain information on one location because the grain size distribution of the armour could not be measured on a dry river bar. The grain size distribution could have been measured using a barrel sampler and converting the measurements to the same as would have been measured if a dry bar existed at the site. In another recent paper the porosity was calculated from an average value relation from the literature. The results of that paper may be sensitive to the actual value of porosity. Using the bulk density sampling technique based on a water displacement process presented in this paper, the porosity could have been calculated from the measured bulk density. The principal topics of this paper are the measurement of the size distribution of the armour and the measurement of the porosity of the substrate. The 'standard' method of sampling the armour is to do a Wolman-type count of the armour on a dry section of the river bed. When a dry bar does not exist, the armour in an area of the wet streambed is sampled and the measurements are transformed analytically to the same type of results that would have been obtained from the standard Wolman procedure. A comparison of the results for the San Miguel River in Colorado shows significant differences in the median size of the armour. The method used to determine the porosity is not 'high-tech', and there is a need to improve knowledge of the porosity because of its importance in the aquatic ecosystem. The technique is to measure the in-situ volume of a substrate sample by measuring the volume of a frame over the substrate and then repeating the volume measurement after the sample is obtained from within the frame. The difference in the volumes is the volume of the sample.

  11. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  13. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
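
    One standard answer to "allocate to minimize the variance of the test statistic" is Neyman allocation, sketched below for a difference in proportions. This is a generic illustration under assumed response rates, not necessarily the allocation rule derived in the paper.

```python
# Neyman allocation sketch: assign a fraction of patients to arm 1 proportional
# to sqrt(p1*(1-p1)), which minimizes Var(p1_hat - p2_hat) for a fixed total n.
import math

def neyman_allocation(p1, p2):
    """Fraction of patients assigned to arm 1 (assumed response rates p1, p2)."""
    s1, s2 = math.sqrt(p1 * (1 - p1)), math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

print(round(neyman_allocation(0.50, 0.20), 3))   # ~0.556: more patients to the noisier arm
```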

  14. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.

  15. Comparative optical studies of ZnO and ZnO-TiO2 - Metal oxide nanoparticle

    NASA Astrophysics Data System (ADS)

    Vijayalakshmi, R. Vanathi; Asvini, V.; Kumar, P. Praveen; Ravichandran, K.

    2018-05-01

    A comparative study was carried out to show the enhancement in optical activity of a bimetal oxide nanoparticle (ZnO-TiO2) over the metal oxide nanoparticle (ZnO), which can preferably be used for optical applications. The samples were prepared by a wet chemical method, and the crystalline structure of the samples (hexagonal primitive for ZnO and tetragonal bcc for ZnO-TiO2) was confirmed by XRD measurements. The average grain sizes of ZnO (19.89 nm) and ZnO-TiO2 (49.89 nm) were calculated by the Debye-Scherrer formula. The structure and particle size of the samples were analyzed by FESEM images. The direct band gap energies of ZnO (3.9 eV) and ZnO-TiO2 (4.68 eV) were calculated by the Kubelka-Munk function, from which it is clear that the band gap energy of the bimetal oxide increases to a desired level compared with the pure form. The photoluminescence study shows that the emitted wavelength of the samples lies in the visible region.

  16. Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.B.; Cushing, K.M.; Johnson, J.W.

    1982-05-01

    The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes the methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.

  17. Magnesium and Silicon Isotopes in HASP Glasses from Apollo 16 Lunar Soil 61241

    NASA Technical Reports Server (NTRS)

    Herzog, G. F.; Delaney, J. S.; Lindsay, F.; Alexander, C. M. O'D; Chakrabarti, R.; Jacobsen, S. B.; Whattam, S.; Korotev, R.; Zeigler, R. A.

    2012-01-01

    The high-Al (>28 wt %), silica-poor (<45 wt %) (HASP) feldspathic glasses of Apollo 16 are widely regarded as the evaporative residues of impacts in the lunar regolith [1-3]. By virtue of their small size, apparent homogeneity, and high inferred formation temperatures, the HASP glasses appear to be good samples in which to study fractionation processes that may accompany open system evaporation. Calculations suggest that HASP glasses with present-day Al2O3 concentrations of up to 40 wt% may have lost 19 wt% of their original masses, calculated as the oxides of iron and silicon, via evaporation [4]. We report Mg and Si isotope abundances in 10 HASP glasses and 2 impact-glass spherules from a 64-105 μm grain-size fraction taken from Apollo 16 soil sample 61241.

  18. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).

  19. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
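
    The rule of thumb described above is additive: add to the dropout-free sample size the number of subjects expected to drop out of a sample of that original size. A minimal sketch with illustrative numbers:

```python
# Rule-of-thumb dropout adjustment: n_adjusted = n + expected dropouts from a sample of size n.
import math

def adjust_for_dropouts(n_per_group, dropout_rate):
    return n_per_group + math.ceil(n_per_group * dropout_rate)

print(adjust_for_dropouts(60, 0.20))   # 60 + 12 = 72 per group (illustrative)
```

    Note that this additive adjustment, n(1 + d), is slightly less aggressive than the common inflation n / (1 − d), which would give 75 per group in the same example.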

  20. Intraherd correlation coefficients and design effects for bovine viral diarrhoea, infectious bovine rhinotracheitis, leptospirosis and neosporosis in cow-calf system herds in North-eastern Mexico.

    PubMed

    Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J

    2010-09-01

    Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases could be of interest in sample size calculation and to provide the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for the bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis diseases were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and D was greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates than for a simple random sampling design. The report of ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
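
    The design effects above follow the usual relation D = 1 + (m − 1)·ICC, where m is the average cluster (herd) size; with m ≈ 813/48 ≈ 16.9 this approximately reproduces the reported values. The sketch below shows that calculation and how D inflates a simple-random-sampling sample size (the n_SRS = 100 is illustrative).

```python
# Design effect from the ICC and the average cluster size, and the resulting
# inflation of a simple random sampling (SRS) sample size.
def design_effect(icc, mean_cluster_size):
    return 1 + (mean_cluster_size - 1) * icc

m = 813 / 48                                  # average animals per herd in the study
for disease, icc in [("BVD", 0.31), ("IBR", 0.18),
                     ("leptospirosis", 0.22), ("neosporosis", 0.11)]:
    d = design_effect(icc, m)
    print(f"{disease}: D = {d:.2f}; n_SRS = 100 -> n_cluster = {round(100 * d)}")
```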

  1. Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method

    ERIC Educational Resources Information Center

    Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen

    2017-01-01

    The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
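
    For reference, the estimate that such a classroom activity evaluates is the Lincoln-Petersen estimator (shown below with the Chapman small-sample correction). The capture counts are illustrative, not from the article.

```python
# Lincoln-Petersen capture-mark-release-recapture estimate of population size,
# plus the Chapman bias-corrected version; illustrative counts.
def lincoln_petersen(marked_first, caught_second, recaptured):
    return marked_first * caught_second / recaptured

def chapman(marked_first, caught_second, recaptured):
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

print(lincoln_petersen(50, 40, 10))   # 200.0
print(chapman(50, 40, 10))            # ~189.1
```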

  2. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
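
    As a rough illustration of the quantities discussed above, the sketch below fits a logit dose-lethality model to hypothetical data, reads off each group's LD50 as −intercept/slope, and forms the DRF as the ratio of LD50 values. The doses and death counts are invented, and the confidence-interval methods (and the published spreadsheets and SAS/R code) are not reproduced here.

```python
# Hypothetical sketch: LD50 via a logit fit and DRF as a ratio of LD50s.
import numpy as np
import statsmodels.api as sm

def ld50(doses_gy, deaths, n_per_dose):
    """Fit deaths/n ~ logit(dose) and return LD50 = -intercept/slope."""
    endog = np.column_stack([deaths, n_per_dose - deaths])   # successes, failures
    exog = sm.add_constant(np.asarray(doses_gy, dtype=float))
    fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
    b0, b1 = fit.params
    return -b0 / b1

doses = [6.0, 7.0, 8.0, 9.0, 10.0]               # radiation doses in Gy (hypothetical)
control_deaths = np.array([1, 4, 10, 16, 19])    # out of 20 animals per dose
treated_deaths = np.array([0, 1, 5, 11, 17])     # countermeasure group

drf = ld50(doses, treated_deaths, 20) / ld50(doses, control_deaths, 20)
print("DRF =", round(drf, 2))                    # DRF > 1 suggests radioprotection
```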

  3. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and clip-harvest data from two different size clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e. estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
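
    The "estimate the mean to within ±10% with 95% confidence" criterion corresponds to the standard relative-precision sample size n = (z · CV / E)². A minimal sketch with an assumed coefficient of variation (not a NEON value):

```python
# Sample size to estimate a mean to within a relative margin E with given confidence.
import math
from scipy.stats import norm

def n_for_relative_precision(cv, rel_error=0.10, confidence=0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

print(n_for_relative_precision(cv=0.35))   # ~48 plots for an assumed CV of 35%
```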

  4. Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays

    NASA Astrophysics Data System (ADS)

    Beecken, B. P.; Fossum, E. R.

    1996-07-01

    Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.

  5. An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance.

    PubMed

    Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A

    2016-01-01

    Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.

  6. Comparison of anticipated and actual control group outcomes in randomised trials in paediatric oncology provides evidence that historically controlled studies are biased in favour of the novel treatment.

    PubMed

    Moroz, Veronica; Wilson, Jayne S; Kearns, Pamela; Wheatley, Keith

    2014-12-10

    Historically controlled studies are commonly undertaken in paediatric oncology, despite their potential biases. Our aim was to compare the outcome of the control group in randomised controlled trials (RCTs) in paediatric oncology with those anticipated in the sample size calculations in the protocols. Our rationale was that, had these RCTs been performed as historical control studies instead, the available outcome data used to calculate the sample size in the RCT would have been used as the historical control outcome data. A systematic search was undertaken for published paediatric oncology RCTs using the Cochrane Central Register of Controlled Trials (CENTRAL) database from its inception up to July 2013. Data on sample size assumptions and observed outcomes (time-to-event and proportions) were extracted to calculate differences between randomised and historical control outcomes, and a one-sample t-test was employed to assess whether the difference between anticipated and observed control groups differed from zero. Forty-eight randomised questions were included. The median year of publication was 2005, and the range was from 1976 to 2010. There were 31 superiority and 11 equivalence/noninferiority randomised questions with time-to-event outcomes. The median absolute difference between observed and anticipated control outcomes was 5.0% (range: -23 to +34), and the mean difference was 3.8% (95% CI: +0.57 to +7.0; P = 0.022). Because the observed control group (that is, standard treatment arm) in RCTs performed better than anticipated, we found that historically controlled studies that used similar assumptions for the standard treatment were likely to overestimate the benefit of new treatments, potentially leading to children with cancer being given ineffective therapy that may have additional toxicity.
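
    The statistical step described above is a one-sample t-test on the observed-minus-anticipated differences. A minimal sketch with invented differences (not the extracted trial data):

```python
# One-sample t-test of whether the mean observed-minus-anticipated control-group
# difference departs from zero; the differences below are illustrative.
import numpy as np
from scipy import stats

diffs = np.array([5.0, -3.0, 8.0, 2.0, -1.0, 12.0, 4.0, 0.0, 6.0, -2.0])  # % points
t, p = stats.ttest_1samp(diffs, popmean=0.0)
print(f"mean difference = {diffs.mean():.1f}%, t = {t:.2f}, p = {p:.3f}")
```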

  7. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation by using an interpolation technique for reconstructed image points can be reduced. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large size hologram of NK x NK sample points is synthesized by K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  8. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently blinded sample size reestimation procedures for trials with count data were proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than the one of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Longitudinal white matter change in frontotemporal dementia subtypes and sporadic late onset Alzheimer's disease.

    PubMed

    Elahi, Fanny M; Marx, Gabe; Cobigo, Yann; Staffaroni, Adam M; Kornak, John; Tosun, Duygu; Boxer, Adam L; Kramer, Joel H; Miller, Bruce L; Rosen, Howard J

    2017-01-01

    Degradation of white matter microstructure has been demonstrated in frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD). In preparation for clinical trials, ongoing studies are investigating the utility of longitudinal brain imaging for quantification of disease progression. To date only one study has examined sample size calculations based on longitudinal changes in white matter integrity in FTLD. To quantify longitudinal changes in white matter microstructural integrity in the three canonical subtypes of frontotemporal dementia (FTD) and AD using diffusion tensor imaging (DTI). 60 patients with clinical diagnoses of FTD, including 27 with behavioral variant frontotemporal dementia (bvFTD), 14 with non-fluent variant primary progressive aphasia (nfvPPA), and 19 with semantic variant PPA (svPPA), as well as 19 patients with AD and 69 healthy controls were studied. We used a voxel-wise approach to calculate annual rate of change in fractional anisotropy (FA) and mean diffusivity (MD) in each group using two time points approximately one year apart. Mean rates of change in FA and MD in 48 atlas-based regions-of-interest, as well as global measures of cognitive function were used to calculate sample sizes for clinical trials (80% power, alpha of 5%). All FTD groups showed statistically significant baseline and longitudinal white matter degeneration, with predominant involvement of frontal tracts in the bvFTD group, frontal and temporal tracts in the PPA groups and posterior tracts in the AD group. Longitudinal change in MD yielded a larger number of regions with sample sizes below 100 participants per therapeutic arm in comparison with FA. SvPPA had the smallest sample size based on change in MD in the fornix (n = 41 participants per study arm to detect a 40% effect of drug), and nfvPPA and AD had their smallest sample sizes based on rate of change in MD within the left superior longitudinal fasciculus (n = 49 for nfvPPA, and n = 23 for AD). BvFTD generally showed the largest sample size estimates (minimum n = 140 based on MD in the corpus callosum). The corpus callosum appeared to be the best region for a potential study that would include all FTD subtypes. Change in global measure of functional status (CDR box score) yielded the smallest sample size for bvFTD (n = 71), but clinical measures were inferior to white matter change for the other groups. All three of the canonical subtypes of FTD are associated with significant change in white matter integrity over one year. These changes are consistent enough that drug effects in future clinical trials could be detected with relatively small numbers of participants. While there are some differences in regions of change across groups, the genu of the corpus callosum is a region that could be used to track progression in studies that include all subtypes.
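
    The per-arm sample sizes quoted above come from a standard two-sample calculation where the detectable difference is 40% of the annual rate of change. The sketch below uses that generic formula with invented mean/SD values for annual MD change; it is not the study's estimates or code.

```python
# Per-arm sample size to detect a 40% slowing of the annual rate of change:
# n = 2 * (z_(1-a/2) + z_power)^2 * (sd / (0.40 * mean_change))^2.
import math
from scipy.stats import norm

def n_per_arm(mean_change, sd_change, effect=0.40, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    delta = effect * mean_change
    return math.ceil(2 * (z ** 2) * (sd_change / delta) ** 2)

print(n_per_arm(mean_change=0.020, sd_change=0.012))   # hypothetical MD units/year
```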

  10. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913

  11. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients with REs and CVs ≤10%. Among all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients with REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.

  12. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Structural and Electronic Properties of Isolated Nanodiamonds: A Theoretical Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raty, J; Galli, G

    2004-09-09

    Nanometer sized diamond has been found in meteorites, proto-planetary nebulae and interstellar dusts, as well as in residues of detonation and in diamond films. Remarkably, the size distribution of diamond nanoparticles appears to be peaked around 2-5 nm, and to be largely independent of preparation conditions. Using ab-initio calculations, we have shown that in this size range nanodiamond has a fullerene-like surface and, unlike silicon and germanium, exhibits very weak quantum confinement effects. We called these carbon nanoparticles bucky-diamonds: their atomic structure, predicted by simulations, is consistent with many experimental findings. In addition, we carried out calculations of the stability of nanodiamond which provided a unifying explanation of its size distribution in extra-terrestrial samples, and in ultra-crystalline diamond films. Here we present a summary of our theoretical results and we briefly outline work in progress on doping of nanodiamond with nitrogen.

  14. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and levels of biochemical markers in clinical cardioprotection trials and how scan day affect sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing distribution of these variables, 100 000 clinical trials were simulated for calculation of sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If average CMR scan day between treatment and control arms differed by 1 day, sample size needs to be increased by 54% (77 vs 50) to avoid scan day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
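
    The general simulation approach described above (repeatedly simulating two-arm trials from the observed outcome distribution and finding the smallest per-arm n that reaches the target power) can be sketched as follows. The distribution parameters and the plain two-sample t-test are assumptions for illustration, not the trial data or the published analysis.

```python
# Simulation-based power curve for a 25% treatment effect on a continuous outcome.
import numpy as np
from scipy import stats

def simulated_power(n, mean, sd, effect=0.25, alpha=0.05, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    control = rng.normal(mean, sd, size=(n_sim, n))
    treated = rng.normal(mean * (1 - effect), sd, size=(n_sim, n))
    _, p = stats.ttest_ind(treated, control, axis=1)
    return np.mean(p < alpha)          # fraction of simulated trials rejecting H0

for n in (30, 50, 70, 90):
    print(n, round(simulated_power(n, mean=0.40, sd=0.15), 2))   # MSI-like scale (assumed)
```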

  15. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  16. Phononic thermal conductivity in silicene: the role of vacancy defects and boundary scattering

    NASA Astrophysics Data System (ADS)

    Barati, M.; Vazifehshenas, T.; Salavati-fard, T.; Farmanbar, M.

    2018-04-01

    We calculate the thermal conductivity of free-standing silicene using the phonon Boltzmann transport equation within the relaxation time approximation. In this calculation, we investigate the effects of sample size and different scattering mechanisms such as phonon–phonon, phonon-boundary, phonon-isotope and phonon-vacancy defect. We obtain some similar results to earlier works using a different model and provide a more detailed analysis of the phonon conduction behavior and various mode contributions. We show that the dominant contribution to the thermal conductivity of silicene, which originates from the in-plane acoustic branches, is about 70% at room temperature and this contribution becomes larger by considering vacancy defects. Our results indicate that while the thermal conductivity of silicene is significantly suppressed by the vacancy defects, the effect of isotopes on the phononic transport is small. Our calculations demonstrate that by removing only one of every 400 silicon atoms, a substantial reduction of about 58% in thermal conductivity is achieved. Furthermore, we find that the phonon-boundary scattering is important in defectless and small-size silicene samples, especially at low temperatures.

  17. Microstress, strain, band gap tuning and photocatalytic properties of thermally annealed and Cu-doped ZnO nanoparticles

    NASA Astrophysics Data System (ADS)

    Prasad, Neena; V. M. M, Saipavitra; Swaminathan, Hariharan; Thangaraj, Pandiyarajan; Ramalinga Viswanathan, Mangalaraja; Balasubramanian, Karthikeyan

    2016-06-01

    ZnO nanoparticles and Cu-doped ZnO nanoparticles were prepared by the co-precipitation method, and a portion of the pure ZnO nanoparticles was annealed at 750 °C for 3, 6, and 9 h. X-ray diffraction studies were carried out and the lattice parameters, unit cell volume, interplanar spacing, and Young's modulus were calculated for all the samples; the crystallite size was also found using the Scherrer method. X-ray peak broadening analysis was used to estimate the crystallite sizes and the strain using the Williamson-Hall (W-H) method and the size-strain plot (SSP) method. Stress and the energy density were calculated using the W-H method assuming different models such as the uniform deformation model, uniform strain deformation model, and uniform deformation energy density model, as well as the SSP method. Optical absorption properties of the samples were understood from their UV-visible spectra. Photocatalytic activities of ZnO and 5% Cu-doped ZnO were observed through the degradation of methylene blue dye in aqueous medium under the irradiation of a 20-W compact fluorescent lamp for an hour.

  18. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
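
    The conservatism of the O'Brien-Fleming shape can be seen from a Lan-DeMets spending function of that type, α(t) = 2 − 2Φ(z₁₋α/₂ / √t): very little alpha is spent at early information fractions. This is a generic illustration of that family of spending functions, not the paper's specific design; the information fractions are arbitrary.

```python
# Lan-DeMets O'Brien-Fleming-type alpha spending: cumulative alpha spent by
# information fraction t, alpha(t) = 2 - 2*Phi(z_(1-alpha/2) / sqrt(t)).
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    return 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))

for t in (0.25, 0.50, 0.75, 1.00):
    print(f"information fraction {t:.2f}: cumulative alpha spent = {obf_spending(t):.4f}")
```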

  19. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive power function and discuss sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose power up to 20%, depending on the value of the dispersion parameter.

  20. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Preparation and Characterization of Niobium Doped Lead-Telluride Glass Ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathish, M.; Eraiah, B.; Anavekar, R. V.

    2011-07-15

    Niobium-lead-telluride glass ceramics of composition xNb{sub 2}O{sub 5}-(20-x)PbO-80TeO{sub 2} (where x = 0.1 mol% to 0.5 mol%) were prepared by the conventional melt quenching method. The prepared glass samples were initially amorphous in nature; after annealing at 400 °C, all samples crystallized. This was confirmed by X-ray diffraction and scanning electron microscopy. The particle size of these glass ceramics has been calculated by using the Debye-Scherrer formula and is in the order of 15 nm to 60 nm. The scanning electron microscopy (SEM) photograph shows the presence of needle-like crystals in these samples.

  2. Freeway travel speed calculation model based on ETC transaction data.

    PubMed

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    Real-time traffic flow conditions on freeways are gradually becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different levels of sample sizes. In order to ensure a sufficient sample size, ETC data from different entry-exit toll plaza pairs that contain more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliable weight θ for the sample vehicle speed were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated that the average relative error was about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for improving the level of freeway operation monitoring and freeway management, as well as for providing useful information for freeway travelers.

  3. Multiscale Pore Throat Network Reconstruction of Tight Porous Media Constrained by Mercury Intrusion Capillary Pressure and Nuclear Magnetic Resonance Measurements

    NASA Astrophysics Data System (ADS)

    Xu, R.; Prodanovic, M.

    2017-12-01

    Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remains challenging. Having a representative pore throat network, however, is valuable for calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to a wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements by percolation theory to derive pore-body size distribution, following the work by Daigle et al. (2015). However, in their work, the actual pore-throat sizes and the distribution of coordination numbers are not well-defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with a tunable pore-throat size and coordination number distribution, which we further determine by matching the capillary pressure vs. saturation curve from MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single phase fluid flow condition. Results show that the permeability we get is in reasonable agreement with the Coreval experimental measurements. The pore throat network we get can be used to further calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable insights into the production optimization and enhanced oil recovery design.

  4. Synthesis characterization and luminescence studies of gamma irradiated nanocrystalline yttrium oxide

    NASA Astrophysics Data System (ADS)

    Shivaramu, N. J.; Lakshminarasappa, B. N.; Nagabhushana, K. R.; Singh, Fouran

    2016-02-01

    Nanocrystalline Y2O3 is synthesized by a solution combustion technique using urea and glycine as fuels. The X-ray diffraction (XRD) pattern of the as-prepared sample shows an amorphous nature, while the annealed samples show a cubic phase. The average crystallite size is calculated using Scherrer's formula and is found to be in the range 14-30 nm for samples synthesized using urea and 15-20 nm for samples synthesized using glycine. Field emission scanning electron microscopy (FE-SEM) images of Y2O3 samples annealed at 1173 K show well separated spherical particles with an average particle size in the range 28-35 nm. Fourier transform infrared (FTIR) and Raman spectroscopy reveal stretching of the Y-O bond. Electron spin resonance (ESR) shows V- centers, O2- and Y2+ defects. A broad photoluminescence (PL) emission peaking at 386 nm is observed when the sample is excited at 252 nm. Thermoluminescence (TL) properties of γ-irradiated Y2O3 nanopowder are studied at a heating rate of 5 K s-1. The samples prepared using urea show a prominent, well resolved peak at 383 K and a weak one at 570 K. The TL glow peak intensity (Im1) at 383 K increases with γ-dose up to 6.0 kGy and then decreases with further increase in dose. The glycine-derived Y2O3, however, shows a prominent TL glow with peaks at 396 K and 590 K. Of the two fuels, urea-derived Y2O3 shows simpler and better resolved TL glow curves, which might be due to the fuel and hence a particle size effect. The kinetic parameters are calculated by Chen's glow curve peak shape method and the results are discussed in detail.
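
    Since the abstract leans on Scherrer's formula, a minimal sketch of that calculation is given below; the peak position, peak width, shape factor K = 0.9 and Cu K-alpha wavelength are illustrative assumptions, not values taken from the study.

        # Scherrer estimate D = K * lambda / (beta * cos(theta)), beta in radians.
        import math

        wavelength_nm = 0.15406   # assumed Cu K-alpha radiation
        K = 0.9                   # common shape factor
        two_theta_deg = 29.1      # hypothetical peak position
        fwhm_deg = 0.55           # hypothetical instrument-corrected peak width

        theta = math.radians(two_theta_deg / 2.0)
        beta = math.radians(fwhm_deg)
        D_nm = K * wavelength_nm / (beta * math.cos(theta))
        print(f"crystallite size ~ {D_nm:.1f} nm")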

  5. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, but not the effective size of individual ice habits.

  6. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
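
    For orientation only, the closed-form approximation below (not the authors' piecewise exponential survival model or their simulation study) shows how barn-level detection probability grows with the number of aggregate samples, assuming simple random sampling, a given pen-level prevalence, an assay sensitivity, and perfect specificity.

        # P(detect) ~ 1 - (1 - prevalence * sensitivity)**n for n randomly
        # sampled pens; all parameter values below are illustrative.
        def detection_probability(n_pens, prevalence, sensitivity):
            return 1.0 - (1.0 - prevalence * sensitivity) ** n_pens

        for n in (3, 6, 9, 12):
            p = detection_probability(n, prevalence=0.2, sensitivity=0.95)
            print(f"n = {n:2d}: P(detect) = {p:.2f}")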

  7. Computer program for the calculation of grain size statistics by the method of moments

    USGS Publications Warehouse

    Sawyer, Michael B.

    1977-01-01

    A computer program is presented for a Hewlett-Packard Model 9830A desk-top calculator (1) which calculates statistics using weight or point count data from a grain-size analysis. The program uses the method of moments in contrast to the more commonly used but less inclusive graphic method of Folk and Ward (1957). The merits of the program are: (1) it is rapid; (2) it can accept data in either grouped or ungrouped format; (3) it allows direct comparison with grain-size data in the literature that have been calculated by the method of moments; (4) it utilizes all of the original data rather than percentiles from the cumulative curve as in the approximation technique used by the graphic method; (5) it is written in the computer language BASIC, which is easily modified and adapted to a wide variety of computers; and (6) when used in the HP-9830A, it does not require punching of data cards. The method of moments should be used only if the entire sample has been measured and the worker defines the measured grain-size range. (1) Use of brand names in this paper does not imply endorsement of these products by the U.S. Geological Survey.
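
    The moment statistics the program computes can be sketched in a few lines; the grouped phi-class midpoints and weight percentages below are invented for illustration, and the sketch is not a transcription of the HP-9830A BASIC program.

        # Method of moments for grouped grain-size data (phi scale).
        import math

        phi_mid = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]      # class midpoints (phi)
        weights = [ 2.0, 10.0, 30.0, 35.0, 18.0, 5.0]  # weight percent per class

        total = sum(weights)
        mean = sum(w * m for w, m in zip(weights, phi_mid)) / total
        sd = math.sqrt(sum(w * (m - mean) ** 2 for w, m in zip(weights, phi_mid)) / total)
        skew = sum(w * (m - mean) ** 3 for w, m in zip(weights, phi_mid)) / (total * sd ** 3)
        kurt = sum(w * (m - mean) ** 4 for w, m in zip(weights, phi_mid)) / (total * sd ** 4)
        print(f"mean = {mean:.2f} phi, sorting = {sd:.2f}, "
              f"skewness = {skew:.2f}, kurtosis = {kurt:.2f}")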

  8. Microstructural, optical and electrical transport properties of Cd-doped SnO2 nanoparticles

    NASA Astrophysics Data System (ADS)

    Ahmad, Naseem; Khan, Shakeel; Mohsin Nizam Ansari, Mohd

    2018-03-01

    We have investigated the structural, optical and dielectric properties of Cd-doped SnO2 nanoparticles synthesized via a convenient precipitation route. The structural properties were studied by the X-ray diffraction method (XRD) and Fourier transform infrared (FTIR) spectroscopy. The as-synthesized powder samples were examined for their morphology and average particle size by transmission electron microscopy (TEM). The optical properties were studied by diffuse reflectance spectroscopy. Dielectric properties such as the complex dielectric constant and ac conductivity were investigated with an LCR meter. The average crystallite size calculated from XRD and the average particle size obtained from TEM were found to be consistent and below 50 nm for all samples. The optical band gap of the as-synthesized powder samples obtained from the absorption study was found to be in the range of 3.76 to 3.97 eV. The grain boundary parameters such as Rgb, Cgb and τ were evaluated using impedance spectroscopy.

  9. Impact of crystalline defects and size on X-ray line broadening: A phenomenological approach for tetragonal SnO2 nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhammed Shafi, P.; Chandra Bose, A., E-mail: acbose@nitt.edu

    2015-05-15

    Nanocrystalline tin oxide (SnO2) powders with different grain sizes were prepared by a chemical precipitation method. The reaction was carried out by varying the period of hydrolysis, and the as-prepared samples were annealed at different temperatures. The samples were characterized using an X-ray powder diffractometer and transmission electron microscopy. The microstrain and crystallite size were calculated for all the samples using Williamson-Hall (W-H) models, namely the isotropic strain model (ISM), the anisotropic strain model (ASM) and the uniform deformation energy density model (UDEDM). The morphology and particle size were determined using TEM micrographs. The direction-dependent Young's modulus was expressed as an equation relating the elastic compliances (s_ij) and the Miller indices of the lattice plane (hkl) for the tetragonal crystal system, and the equation for elastic compliance in terms of stiffness constants was also derived. The changes in crystallite size and microstrain due to lattice defects were observed while varying the hydrolysis time and the annealing temperature. The dependence of crystallite size on lattice strain was studied. The results were correlated with available studies on electrical properties using impedance spectroscopy.
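
    A minimal sketch of the isotropic-strain Williamson-Hall fit (beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta)) is shown below; the peak positions and widths are invented, and the anisotropic and energy-density variants used in the paper are not reproduced.

        # Isotropic Williamson-Hall: intercept -> size, slope -> microstrain.
        import numpy as np

        wavelength_nm, K = 0.15406, 0.9                            # assumed Cu K-alpha
        two_theta_deg = np.array([26.6, 33.9, 37.9, 51.8, 54.8])   # hypothetical peaks
        fwhm_deg = np.array([0.42, 0.45, 0.47, 0.55, 0.57])        # hypothetical widths

        theta = np.radians(two_theta_deg / 2.0)
        beta = np.radians(fwhm_deg)
        slope, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
        print(f"crystallite size ~ {K * wavelength_nm / intercept:.1f} nm, "
              f"microstrain ~ {slope:.2e}")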

  10. Sediment quantity and quality in three impoundments in Massachusetts

    USGS Publications Warehouse

    Zimmerman, Marc James; Breault, Robert F.

    2003-01-01

    As part of a study with an overriding goal of providing information that would assist State and Federal agencies in developing screening protocols for managing sediments impounded behind dams that are potential candidates for removal, the U.S. Geological Survey determined sediment quantity and quality at three locations: one on the French River and two on Yokum Brook, a tributary to the west branch of the Westfield River. Data collected with a global positioning system and a geographic information system, together with sediment-thickness data, aided in the creation of sediment maps and the calculation of sediment volumes at Perryville Pond on the French River in Webster, Massachusetts, and at the Silk Mill and Ballou Dams on Yokum Brook in Becket, Massachusetts. From these data the following sediment volumes were determined: Perryville Pond, 71,000 cubic yards; Silk Mill, 1,600 cubic yards; and Ballou, 800 cubic yards. Sediment characteristics were assessed in terms of grain size and concentrations of potentially hazardous organic compounds and metals. Assessment of the approaches and methods used at study sites indicated that ground-penetrating radar produced data that were extremely difficult and time-consuming to interpret for the three study sites. Because of these difficulties, a steel probe was ultimately used to determine sediment depth and extent for inclusion in the sediment maps. Use of these methods showed that, where sampling sites were accessible, a machine-driven coring device would be preferable to the physically exhausting, manual sediment-coring methods used in this investigation. Enzyme-linked immunosorbent assays were an effective tool for screening large numbers of samples for a range of organic contaminant compounds. An example calculation of the number of samples needed to characterize mean concentrations of contaminants indicated that the number of samples collected for most analytes was adequate; however, additional analyses for lead, copper, silver, arsenic, total petroleum hydrocarbons, and chlordane are needed to meet the criteria determined from the calculations. Particle-size analysis did not reveal a clear spatial distribution pattern at Perryville Pond. On average, less than 65 percent of each sample was greater in size than very fine sand. The sample with the highest percentage of clay-sized particles (24.3 percent) was collected just upstream from the dam and generally had the highest concentrations of contaminants determined here. In contrast, more than 90 percent of the sediment samples in the Becket impoundments had grain sizes larger than very fine sand; as determined by direct observation, rocks, cobbles, and boulders constituted a substantial amount of the material impounded at Becket. In general, the highest percentages of the finest particles, clays, occurred in association with the highest concentrations of contaminants. Enzyme-linked immunosorbent assays of the Perryville samples showed the widespread presence of petroleum hydrocarbons (16 out of 26 samples), polycyclic aromatic hydrocarbons (23 out of 26 samples), and chlordane (18 out of 26 samples); polychlorinated biphenyls were detected in five samples from four locations. Neither petroleum hydrocarbons nor polychlorinated biphenyls were detected at Becket, and chlordane was detected in only one sample. All 14 Becket samples contained polycyclic aromatic hydrocarbons. Replicate quality-control analyses revealed consistent results between paired samples.
Samples from throughout Perryville Pond contained a number of metals at potentially toxic concentrations. These metals included arsenic, cadmium, copper, lead, nickel, and zinc. At Becket, no metals were found in elevated concentrations. In general, most of the concentrations of organic compounds and metals detected in Perryville Pond exceeded standards for benthic organisms, but only rarely exceeded standards for human contact. The most highly contaminated samples were

  11. LQAS usefulness in an emergency department.

    PubMed

    de la Orden, Susana Granado; Rodríguez-Rieiro, Cristina; Sánchez-Gómez, Amaya; García, Ana Chacón; Hernández-Fernández, Tomás; Revilla, Angel Abad; Escribano, Dolores Vigil; Pérez, Paz Rodríguez

    2008-01-01

    This paper aims to explore lot quality assurance sampling (LQAS) applicability and usefulness in the evaluation of quality indicators in a hospital emergency department (ED) and to determine the degree of compliance with quality standards according to this sampling method. Descriptive observational research in the Hospital General Universitario Gregorio Marañón (HGUGM) emergency department (ED). Patients older than 15 years, diagnosed with dyspnoea, chest pain, urinary tract colic or bronchial asthma attending the HGUGM ED from December 2005 to May 2006, and patients admitted during 2005 with exacerbation of chronic obstructive pulmonary disease or acute meningitis were included in the study. Sample sizes were calculated using LQAS. Different quality indicators, one for each process, were selected. The upper (acceptable quality level (AQL)) and lower thresholds (rejectable quality level (RQL)) were established considering risk alpha = 5 per cent and beta = 20 per cent, and the minimum number of observations required was calculated. It was impossible to reach the necessary sample size for bronchial asthma and urinary tract colic patients. For chest pain, acute exacerbation of chronic obstructive pulmonary disease, and acute meningitis, quality problems were detected. The lot was accepted only for the dyspnoea indicator. The study demonstrates the usefulness of LQAS for detecting quality problems in the management of health processes in one hospital's ED. The LQAS could complement traditional sampling methods.
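
    As a hedged illustration of the kind of two-point LQAS plan the abstract alludes to (not necessarily the exact procedure the authors followed), the sketch below searches for the smallest sample size n and acceptance number c whose producer's risk at the AQL stays below alpha and whose consumer's risk at the RQL stays below beta; the AQL/RQL values are invented.

        # Classical two-point LQAS/acceptance-sampling plan search.
        from math import comb

        def binom_cdf(k, n, p):
            return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

        def lqas_plan(p_aql, p_rql, alpha=0.05, beta=0.20, n_max=500):
            for n in range(1, n_max + 1):
                for c in range(n + 1):
                    producer_risk = 1.0 - binom_cdf(c, n, p_aql)  # rejecting a good lot
                    consumer_risk = binom_cdf(c, n, p_rql)        # accepting a bad lot
                    if producer_risk <= alpha and consumer_risk <= beta:
                        return n, c
            return None

        # e.g. AQL = 10% non-compliant records, RQL = 30% non-compliant records
        print(lqas_plan(0.10, 0.30))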

  12. The size effect to O2- -Ce4+ charge transfer emission and band gap structure of Sr2CeO4.

    PubMed

    Wang, Wenjun; Pan, Yu; Zhang, Wenying; Liu, Xiaoguang; Li, Ling

    2018-04-24

    Sr2CeO4 phosphors with different crystalline sizes were synthesized by the sol-gel method or the solid-state reaction. Their crystalline size, luminescence intensity of O2- -Ce4+ charge transfer and energy gaps were obtained through characterization by X-ray diffraction, photoluminescence spectra, as well as UV-visible diffuse reflectance measurements. An inverse relationship between photoluminescence (PL) spectra and crystalline size was observed when the heating temperature was from 1000°C to 1300°C. In addition, the band energy calculated for all samples showed that a reaction temperature of 1200°C for the solid-state method and 1100°C for the sol-gel method gave the largest values, which corresponded with the smallest crystalline size. Correlation between PL intensity and crystalline size showed an inverse relationship. Band structure, density of states and partial density of states of the crystal were calculated to analyze the mechanism using the Cambridge Sequential Total Energy Package (CASTEP) module integrated with Materials Studio software. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Is Political Activism on Social Media an initiator of Psychological Stress?

    PubMed

    Hisam, Aliya; Safoor, Iqra; Khurshid, Nawal; Aslam, Aakash; Zaid, Farhan; Muzaffar, Ayesha

    2017-01-01

    To determine the association of psychological stress with political activism on social networking sites (SNS) in adults, and to assess the association of psychological stress and political activism with age, gender and occupational status. A descriptive cross-sectional study of 8 months (August 2014 to March 2015) was conducted on young adults aged 20-40 years from different universities in Rawalpindi, Pakistan. A closed-ended standardized questionnaire (the Cohen Perceived Stress Scale-10) was distributed via non-probability convenience sampling to a total sample of 237 participants. The sample size was calculated using the WHO sample size calculator and the data were analyzed in STATA version 12. The mean age of participants was 21.06±1.425 years. Of the 237 participants, 150 (63.3%) were males and 87 (36.7%) females. Regarding their occupation, 123 (51.9%) were military cadets, 8 (3.4%) were consultants, 47 (19.8%) medical officers, 3 (1.3%) PG students and 56 (23.6%) MBBS students. A significant association of occupation was established with both political activism and psychological stress (p=0.4 and p=0.002, respectively). Among the 237 individuals, 91 (38.4%) were stressed and 146 (61.6%) were not. In the whole sample, 23 (9.7%) were political activists on SNS. Of these 23 politically active individuals, 15 (65.2%) were stressed and 8 (34.7%) were not. A significant association between stress and political activism was established (p=0.005). Political activism via social networking sites plays a significant role in adults' mental health in terms of stress across different occupations.
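
    The abstract cites the WHO sample size calculator without giving its inputs; for orientation, the sketch below applies the standard single-proportion formula n = z^2*p*(1-p)/d^2 with purely illustrative values.

        # Sample size for estimating a single proportion with absolute margin d.
        import math

        def sample_size_proportion(p_expected, margin, z=1.96):  # 95% confidence
            return math.ceil(z**2 * p_expected * (1.0 - p_expected) / margin**2)

        # illustrative inputs: expected prevalence 50%, margin +/- 6.5%
        print(sample_size_proportion(0.5, 0.065))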

  14. Investigating textural controls on Archie's porosity exponent using process-based, pore-scale modelling

    NASA Astrophysics Data System (ADS)

    Niu, Q.; Zhang, C.

    2017-12-01

    Archie's law is an important empirical relationship linking the electrical resistivity of geological materials to their porosity. It has been found experimentally that the porosity exponent m in Archie's law for sedimentary rocks might be related to the degree of cementation, and m is therefore termed the "cementation factor" in much of the literature. Although it has been known for many years, there is a lack of well-accepted physical interpretations of the porosity exponent. Some theoretical and experimental evidence has also shown that m may be controlled by the particle and/or pore shape. In this study, we conduct pore-scale modeling of the porosity exponent that incorporates different geological processes. The evolution of m for eight synthetic samples with different particle sizes and shapes is calculated during two geological processes, i.e., compaction and cementation. The numerical results show that in dilute conditions, m is controlled by the particle shape. As the samples deviate from dilute conditions, m increases gradually due to the strong interaction between particles. When the samples are at static equilibrium, m is noticeably larger than its value at dilute conditions. The numerical simulation results also show that both geological compaction and cementation induce a significant increase in m. In addition, the geometric characteristics of these samples (e.g., pore space/throat size, and their distributions) during compaction and cementation are also calculated. Preliminary analysis shows a unique correlation between the broadness of the pore-size distribution and the porosity exponent for all eight samples. However, such a correlation is not found between m and other geometric characteristics.
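
    For readers unfamiliar with the quantity being modelled, the sketch below shows how m is obtained from Archie's first law, F = R0/Rw = phi^(-m); the resistivity and porosity values are illustrative only and unrelated to the synthetic samples in the study.

        # Porosity exponent from a measured formation factor: m = -ln(F)/ln(phi).
        import math

        def porosity_exponent(r_saturated, r_brine, porosity):
            F = r_saturated / r_brine          # formation factor
            return -math.log(F) / math.log(porosity)

        # illustrative numbers: F = 50/0.5 = 100 at 10% porosity -> m = 2.0
        print(porosity_exponent(r_saturated=50.0, r_brine=0.5, porosity=0.10))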

  15. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
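
    The multinomial-weighting idea is easy to sketch; the NumPy version below (the paper's implementations are in R) bootstraps Pearson's correlation by generating a matrix of multinomial weights and computing all weighted sample moments with matrix products. The data and the number of replications are illustrative.

        # Vectorized multinomial bootstrap of Pearson's correlation coefficient.
        import numpy as np

        rng = np.random.default_rng(0)
        n, B = 50, 2000
        x = rng.normal(size=n)
        y = 0.6 * x + rng.normal(size=n)

        # B x n weight matrix: each row is a multinomial count vector divided by n
        W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n

        # weighted first and second moments for every replication at once
        mx, my = W @ x, W @ y
        mxy, mxx, myy = W @ (x * y), W @ (x * x), W @ (y * y)
        r_boot = (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

        print("bootstrap SE of r:", r_boot.std(ddof=1))
        print("95% percentile CI:", np.percentile(r_boot, [2.5, 97.5]))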

  16. Accelerating potential of mean force calculations for lipid membrane permeation: System size, reaction coordinate, solute-solute distance, and cutoffs

    NASA Astrophysics Data System (ADS)

    Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.

    2016-09-01

    Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by a factor of ~40, without reducing the accuracy of the calculated PMF.

  17. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.

  18. Estimating the size of the MSM populations for 38 European countries by calculating the survey-surveillance discrepancies (SSD) between self-reported new HIV diagnoses from the European MSM internet survey (EMIS) and surveillance-reported HIV diagnoses among MSM in 2009

    PubMed Central

    2013-01-01

    Background Comparison of rates of newly diagnosed HIV infections among MSM across countries is challenging for a variety of reasons, including the unknown size of MSM populations. In this paper we propose a method of triangulating surveillance data with data collected in a pan-European MSM Internet Survey (EMIS) to estimate the sizes of the national MSM populations and the rates at which HIV is being diagnosed amongst them by calculating survey-surveillance discrepancies (SSD) as a measure of selection biases of survey participants. Methods In 2010, the first EMIS collected self-reported data on HIV diagnoses among more than 180,000 MSM in 38 countries of Europe. These data were compared with data from national HIV surveillance systems to explore possible sampling and reporting biases in the two approaches. The Survey-Surveillance Discrepancy (SSD) represents the ratio of survey members diagnosed in 2009 (HIVsvy) to total survey members (Nsvy), divided by the ratio of surveillance reports of diagnoses in 2009 (HIVpop) to the estimated total MSM population (Npop). As differences in household internet access may be a key component of survey selection biases, we analysed the relationship between household internet access and SSD in countries conducting consecutive MSM internet surveys at different time points with increasing levels of internet access. The empirically defined SSD was used to calculate the respective MSM population sizes (Npop), using the formula Npop = HIVpop*Nsvy*SSD/HIVsvy. Results Survey-surveillance discrepancies for consecutive MSM internet surveys between 2003 and 2010 with different levels of household internet access were best described by a power function, with high SSD at low internet access, declining to a level of around 2 with broad access. The lowest SSD was calculated for the Netherlands with 1.8, the highest for Moldova with 9.0. Taking the best available estimate for surveillance reports of HIV diagnoses among MSM in 2009 (HIVpop), the relative MSM population sizes were between 0.03% and 5.6% of the adult male population aged 15-64. The correlation between recently diagnosed (2009) HIV in EMIS participants and HIV diagnosed among MSM in 2009 as reported in the national surveillance systems was very high (R^2 = 0.88) when using the calculated MSM population size. Conclusions Npop and HIVpop were unreliably low for several countries. We discuss and identify possible measurement errors for countries with calculated MSM population sizes above 3% and below 1% of the adult male population. In most cases the number of new HIV diagnoses in MSM in the surveillance system appears too low. In some cases, measurement errors may be due to small EMIS sample sizes. It must be assumed that the SSD is modified by country-specific factors. Comparison of community-based survey data with surveillance data suggests only minor sampling biases in the former that, except for a few countries, do not seriously distort inter-country comparability, despite large variations in participation rates across countries. Internet surveys are useful complements to national surveillance systems, highlighting deficiencies and allowing estimates of the range of newly diagnosed infections among MSM in countries where surveillance systems fail to accurately provide such data. PMID:24088198
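
    A worked example of the paper's estimator, with invented inputs that do not correspond to any particular country, is sketched below.

        # Npop = HIVpop * Nsvy * SSD / HIVsvy, all inputs illustrative.
        HIVpop = 400    # surveillance-reported HIV diagnoses among MSM in 2009
        Nsvy = 5000     # EMIS participants from the country
        HIVsvy = 50     # EMIS participants reporting a 2009 diagnosis
        SSD = 2.0       # survey-surveillance discrepancy at broad internet access

        Npop = HIVpop * Nsvy * SSD / HIVsvy
        print(f"estimated MSM population size: {Npop:,.0f}")   # 80,000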

  19. Cosmic ray exposure ages of iron meteorites, complex irradiation and the constancy of cosmic ray flux in the past

    NASA Technical Reports Server (NTRS)

    Marti, K.; Lavielle, B.; Regnier, S.

    1984-01-01

    While previous calculations of potassium ages assumed a constant cosmic ray flux and a single-stage (no change in size) exposure of iron meteorites, the present calculations relaxed these constancy assumptions, and the results reveal multistage irradiations for some 25% of the meteorites studied, implying multiple breakup in space. The distribution of exposure ages suggests several major collisions (based on chemical composition and structure), although the calibration of age scales is not yet complete. It is concluded that shielding-corrected (corrections which depend on size and position of the sample) production rates are consistent for the age bracket of 300 to 900 million years. These production rates differ in a systematic way from those calculated for present-day fluxes of cosmic rays (such as obtained for the last few million years).

  20. Rapid Convergence of Energy and Free Energy Profiles with Quantum Mechanical Size in Quantum Mechanical-Molecular Mechanical Simulations of Proton Transfer in DNA.

    PubMed

    Das, Susanta; Nam, Kwangho; Major, Dan Thomas

    2018-03-13

    In recent years, a number of quantum mechanical-molecular mechanical (QM/MM) enzyme studies have investigated the dependence of reaction energetics on the size of the QM region using energy and free energy calculations. In this study, we revisit the question of QM region size dependence in QM/MM simulations within the context of energy and free energy calculations using a proton transfer in a DNA base pair as a test case. In the simulations, the QM region was treated with a dispersion-corrected AM1/d-PhoT Hamiltonian, which was developed to accurately describe phosphoryl and proton transfer reactions, in conjunction with an electrostatic embedding scheme using the particle-mesh Ewald summation method. With this rigorous QM/MM potential, we performed rather extensive QM/MM sampling, and found that the free energy reaction profiles converge rapidly with respect to the QM region size within ca. ±1 kcal/mol. This finding suggests that the strategy of QM/MM simulations with reasonably sized and selected QM regions, which has been employed for over four decades, is a valid approach for modeling complex biomolecular systems. We point to possible causes for the sensitivity of the energy and free energy calculations to the size of the QM region, and potential implications.

  1. Estimating the Size of the Methamphetamine-Using Population in New York City Using Network Sampling Techniques.

    PubMed

    Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric

    2012-12-01

    As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' lists of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures, the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8,235 to 23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but they likely require a larger sample size than the 132 persons interviewed here to attain acceptable confidence levels for the estimate.
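
    The estimate rests on capture-recapture logic; as a minimal, hedged illustration (the study's network- and "telefunken"-based matching is considerably more involved), the two-sample Chapman estimator is sketched below with invented recapture counts.

        # Chapman's bias-corrected two-sample capture-recapture estimator.
        def chapman_estimate(n1, n2, m):
            """n1, n2: sizes of the two samples; m: individuals found in both."""
            return (n1 + 1) * (n2 + 1) / (m + 1) - 1

        # 132 respondents were interviewed; the contact-sample size and the
        # number of matches below are purely illustrative.
        print(round(chapman_estimate(n1=132, n2=500, m=5)))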

  2. A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.

    PubMed

    Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M

    2017-01-01

    A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with (n = 10) and without (n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.
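
    The paper does not report the formula or inputs behind its sample size calculation; as a hedged sketch, the standard normal-approximation formula for a two-group comparison of means is shown below with illustrative values.

        # Per-group n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sd / delta)^2
        from statistics import NormalDist
        from math import ceil

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            z_a = NormalDist().inv_cdf(1 - alpha / 2)
            z_b = NormalDist().inv_cdf(power)
            return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

        # e.g. detecting a standardized effect of 0.5 SD -> ~63 per group
        print(n_per_group(delta=0.5, sd=1.0))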

  3. A comparative study of the physical properties of Cu-Zn ferrites annealed under different atmospheres and temperatures: Magnetic enhancement of Cu0.5Zn0.5Fe2O4 nanoparticles by a reducing atmosphere

    NASA Astrophysics Data System (ADS)

    Gholizadeh, Ahmad

    2018-04-01

    In the present work, the influence of different sintering atmospheres and temperatures on the physical properties of Cu0.5Zn0.5Fe2O4 nanoparticles, including the redistribution of Zn2+ and Fe3+ ions, the oxidation of Fe atoms in the lattice, crystallite sizes, IR bands, saturation magnetization and magnetic core sizes, has been investigated. The fitting of the XRD patterns using the FullProf program, together with FT-IR measurements, shows the formation of a cubic structure with no impurity phase present in any of the samples. The unit cell parameter of the samples sintered in air and inert atmospheres tends to decrease with sintering temperature, but increases for the samples sintered under a carbon monoxide atmosphere. The magnetization curves versus applied magnetic field indicate different behaviour for the samples sintered at 700 °C with respect to the samples sintered at 300 °C. The saturation magnetization increases with the sintering temperature and reaches a maximum of 61.68 emu/g in the sample sintered under a reducing atmosphere at 600 °C. The magnetic particle size distributions of the samples have been calculated by fitting the M-H curves with a size-distributed Langevin function. The results obtained from the XRD and FTIR measurements suggest that the magnetic core size has the dominant effect on the variation of the saturation magnetization of the samples.

  4. Theory of Positron Annihilation in Helium-Filled Bubbles in Plutonium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterne, P A; Pask, J E

    2003-02-13

    Positron annihilation lifetime spectroscopy is a sensitive probe of vacancies and voids in materials. This non-destructive measurement technique can identify the presence of specific defects in materials at the part-per-million level. Recent experiments by Asoka-Kumar et al. have identified two lifetime components in aged plutonium samples--a dominant lifetime component of around 182 ps and a longer lifetime component of around 350-400 ps. This second component appears to increase with the age of the sample, and accounts for only about 5 percent of the total intensity in 35-year-old plutonium samples. First-principles calculations of positron lifetimes are now used extensively to guide the interpretation of positron lifetime data. At Livermore, we have developed a first-principles finite-element-based method for calculating positron lifetimes for defects in metals. This method is capable of treating system cell sizes of several thousand atoms, allowing us to model defects in plutonium ranging in size from a mono-vacancy to helium-filled bubbles of over 1 nm in diameter. In order to identify the defects that account for the observed lifetime values, we have performed positron lifetime calculations for a set of vacancies, vacancy clusters, and helium-filled vacancy clusters in delta-plutonium. The calculations produced values of 143 ps for defect-free delta-Pu and 255 ps for a mono-vacancy in Pu, both of which are inconsistent with the dominant experimental lifetime component of 182 ps. Larger vacancy clusters have even longer lifetimes. The observed positron lifetime is significantly shorter than the calculated lifetimes for mono-vacancies and larger vacancy clusters, indicating that open vacancy clusters are not the dominant defect in the aged plutonium samples. When helium atoms are introduced into the vacancy cluster, the positron lifetime is reduced due to the increased density of electrons available for annihilation. For a mono-vacancy in Pu containing one helium atom, the calculated lifetime is 190 ps, while a di-vacancy containing two helium atoms has a positron lifetime of 205 ps. In general, increasing the helium density in a vacancy cluster or He-filled bubble reduces the positron lifetime, so that the same lifetime value can arise from a range of vacancy cluster sizes with different helium densities. In order to understand the variation of positron lifetime with vacancy cluster size and helium density in the defect, we have performed over 60 positron lifetime calculations with vacancy cluster sizes ranging from 1 to 55 vacancies and helium densities ranging from zero to five helium atoms per vacancy. The results indicate that the experimental lifetime of 182 ps is consistent with the theoretical value of 190 ps for a mono-vacancy with a single helium atom, but that slightly better agreement is obtained for larger clusters of 6 or more vacancies containing 2-3 helium atoms per vacancy. For larger vacancy clusters with diameters of about 3-5 nm or more, the annihilation with helium electrons dominates the positron annihilation rate; the observed lifetime of 180 ps is then consistent with a helium concentration in the range of 3 to 3.5 He/vacancy, setting an upper bound on the helium concentration in the vacancy clusters. In practice, the single lifetime component is most probably associated with a family of helium-filled bubbles rather than with a specific unique defect size.
The longer 350-400 ps lifetime component is consistent with a relatively narrow range of defect sizes and He concentration. At zero He concentration, the lifetime values are matched by small vacancy clusters containing 6-12 vacancies. With increasing vacancy cluster size, a small amount of He is required to keep the lifetime in the 350-400 ps range, until the value saturates for larger helium bubbles of more than 50 vacancies (bubble diameter > 1.3 nm) at a helium concentration close to 1 He/vacancy. These results, taken together with the experimental data, indicate that the features observed in TEM data by Schwartz et al. are not voids, but are in fact helium-filled bubbles with a helium pressure of around 2-3 helium atoms per vacancy, depending on the bubble size. This is consistent with the conclusions of recently developed models of He-bubble growth in aged plutonium.

  5. The effects of substrate size, surface area, and density on coat thickness of multi-particulate dosage forms.

    PubMed

    Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B

    2005-01-01

    Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The presence of the drug lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting size of core and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after addition of 0.5% w/w talc to a pellet sample.

  6. mHealth Series: mHealth project in Zhao County, rural China – Description of objectives, field site and methods

    PubMed Central

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Wu, Qiong; Chen, Li; Majeed, Azeem; Rudan, Igor; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background We set up a collaboration between researchers in China and the UK that aimed to explore the use of mHealth in China. This is the first paper in a series of papers on a large mHealth project part of this collaboration. This paper included the aims and objectives of the mHealth project, our field site, and the detailed methods of two studies. Field site The field site for this mHealth project was Zhao County, which lies 280 km south of Beijing in Hebei Province, China. Methods We described the methodology of two studies: (i) a mixed methods study exploring factors influencing sample size calculations for mHealth-based health surveys and (ii) a cross-over study determining validity of an mHealth text messaging data collection tool. The first study used mixed methods, both quantitative and qualitative, including: (i) two surveys with caregivers of young children, (ii) interviews with caregivers, village doctors and participants of the cross-over study, and (iii) researchers' views. We combined data from caregivers, village doctors and researchers to provide an in-depth understanding of factors influencing sample size calculations for mHealth-based health surveys. The second study, a cross-over study, used a randomised cross-over study design to compare the traditional face-to-face survey method to the new text messaging survey method. We assessed data equivalence (intrarater agreement), the amount of information in responses, reasons for giving different responses, the response rate, characteristics of non-responders, and the error rate. Conclusions This paper described the objectives, field site and methods of a large mHealth project part of a collaboration between researchers in China and the UK. The mixed methods study evaluating factors that influence sample size calculations could help future studies with estimating reliable sample sizes. The cross-over study comparing face-to-face and text message survey data collection could help future studies with developing their mHealth tools. PMID:24363919

  7. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review.

    PubMed

    Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A

    2014-05-01

    Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.

  8. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  9. Laboratory and exterior decay of wood plastic composite boards: voids analysis and computed tomography

    Treesearch

    Grace Sun; Rebecca E. Ibach; Meghan Faillace; Marek Gnatowski; Jessie A. Glaeser; John Haight

    2016-01-01

    After exposure in the field and laboratory soil block culture testing, the void content of wood–plastic composite (WPC) decking boards was compared to unexposed samples. A void volume analysis was conducted based on calculations of sample density and from micro-computed tomography (microCT) data. It was found that reference WPC contains voids of different sizes from...

  10. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

    This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and the US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller on sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The anthropometric characteristics of Korean male helicopter pilots were compared with those of Korean male civilians and US Army male personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.
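
    The precision-driven step of that process can be illustrated with the standard formula n = (z*sigma/E)^2 for estimating the mean of a single dimension; the standard deviation and allowable error below are invented, not the survey's actual targets.

        # Sample size to estimate a mean within +/- E at a given confidence.
        from statistics import NormalDist
        from math import ceil

        def n_for_mean(sigma, allowable_error, confidence=0.95):
            z = NormalDist().inv_cdf(0.5 + confidence / 2)
            return ceil((z * sigma / allowable_error) ** 2)

        # e.g. stature with sigma ~ 55 mm estimated to within +/- 12 mm
        print(n_for_mean(sigma=55.0, allowable_error=12.0))   # -> 81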

  11. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  12. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 x 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  13. Constructing first-principles phase diagrams of amorphous LixSi using machine-learning-assisted sampling with an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Artrith, Nongnuch; Urban, Alexander; Ceder, Gerbrand

    2018-06-01

    The atomistic modeling of amorphous materials requires structure sizes and sampling statistics that are challenging to achieve with first-principles methods. Here, we propose a methodology to speed up the sampling of amorphous and disordered materials using a combination of a genetic algorithm and a specialized machine-learning potential based on artificial neural networks (ANNs). We show, for the example of the amorphous Li-Si alloy, that around 1000 first-principles calculations are sufficient for the ANN-potential assisted sampling of low-energy atomic configurations in the entire amorphous LixSi phase space. The obtained phase diagram is validated by comparison with the results from an extensive sampling of LixSi configurations using molecular dynamics simulations and a general ANN potential trained on ~45,000 first-principles calculations. This demonstrates the utility of the approach for the first-principles modeling of amorphous materials.
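    The sketch below is a deliberately simplified, hypothetical illustration of surrogate-assisted genetic-algorithm sampling (it is not the authors' workflow and contains no ANN training or first-principles calls): candidate configurations are ranked by a cheap placeholder energy function standing in for the machine-learning potential, and the population evolves through selection, crossover, and mutation.

```python
# Toy sketch of surrogate-assisted genetic-algorithm sampling (not the authors' code).
# Candidate structures are encoded as coordinate vectors and ranked by a cheap surrogate
# energy standing in for an ANN potential; only selection, crossover and mutation are shown.
import numpy as np

rng = np.random.default_rng(1)
N_ATOMS, POP, GENERATIONS = 8, 20, 50

def surrogate_energy(x):
    # Placeholder for an ANN potential: a simple Lennard-Jones-like pair energy.
    pos = x.reshape(N_ATOMS, 3)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d = d[np.triu_indices(N_ATOMS, k=1)]
    return np.sum((1.0 / d) ** 12 - 2.0 * (1.0 / d) ** 6)

population = [rng.uniform(0, 3, N_ATOMS * 3) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=surrogate_energy)
    parents = population[: POP // 2]                  # selection: keep the fitter half
    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        mask = rng.random(N_ATOMS * 3) < 0.5          # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(0, 0.05, child.shape)     # mutation
        children.append(child)
    population = parents + children

population.sort(key=surrogate_energy)
print("lowest surrogate energy:", surrogate_energy(population[0]))
```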

  14. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is limited by the spatial sampling step used to calculate Fresnel-type convolution integrals. The sampling step is determined explicitly by the size of the last Fresnel zone in the beam aperture; in other words, the spatial sampling is set by the precision of the convolution calculation and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in real space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme, and the sampling step can differ in different directions because of source anisotropy. The approach was used to simulate images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
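    A minimal numerical illustration of why the conventional sampling step becomes punishing for large fields of view (illustrative numbers, not taken from the paper): from the Fresnel zone radii r_n = sqrt(n*lambda*z), the width of the outermost zone at the aperture edge is approximately lambda*z/(2a), and this width sets the sampling step.

```python
# Minimal sketch (illustrative numbers): the conventional sampling step is set by the
# width of the last Fresnel zone at the edge of the beam aperture. From r_n = sqrt(n*lam*z),
# the zone width at radius a is dr ~ lam*z / (2*a), which shrinks as the aperture grows,
# driving up the number of sampling points for large fields of view.
import numpy as np

lam = 1.24e-10   # wavelength, m (~10 keV X-rays)
z = 1.0          # propagation distance, m
a = 1.0e-3       # aperture half-width, m

n_zones = a**2 / (lam * z)       # number of Fresnel zones across the aperture
dr_last = lam * z / (2.0 * a)    # width of the outermost zone -> required sampling step
n_samples = int(np.ceil(2 * a / dr_last))

print(f"{n_zones:.0f} Fresnel zones, step = {dr_last * 1e9:.1f} nm, ~{n_samples} points across the aperture")
```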

  15. Growth in Head Size during Infancy: Implications for Sound Localization.

    ERIC Educational Resources Information Center

    Clifton, Rachel K.; And Others

    1988-01-01

    Compared head circumference and interaural distance in infants between birth and 22 weeks of age and in a small sample of preschool children and adults. Calculated changes in interaural time differences according to age. Found a large shift in distance. (SKC)

  16. Strain localization and fabric development in polycrystalline anorthite + melt by water diffusion in an axial deformation experiment

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun-ichi; Muto, Jun; Nagahama, Hiroyuki

    2018-01-01

    We performed two axial deformation experiments on synthetic polycrystalline anorthite samples with a grain size of 3 μm and 5 vol% Si-Al-rich glass at 900 °C, a confining pressure of 1.0 GPa, and a strain rate of 10-4.8 s-1. One sample was deformed as-is (dry); in the other sample, two half-cut samples (two cores) with 0.15 wt% water at the boundary were put together in the apparatus. The mechanical data for both samples were essentially identical with a yield strength of 700 MPa and strain weakening of 500 MPa by 20% strain. The dry sample appears to have been deformed by distributed fracturing. Meanwhile, the water-added sample shows plastic strain localization in addition to fracturing and reaction products composed of zoisite grains and SiO2 materials along the boundary between the two sample cores. Infrared spectra of the water-added sample showed dominant water bands of zoisite. The maximum water content was 1500 wt ppm H2O at the two-core boundary, which is the same as the added amount. The water contents gradually decreased from the boundaries to the sample interior, and the gradient fitted well with the solution of the one-dimensional diffusion equation. The determined diffusion coefficient was 7.4 × 10-13 m2/s, which agrees with previous data for the grain boundary diffusion of water. The anorthite grains in the water-added sample showed no crystallographic preferred orientation. Textural observations and water diffusion indicate that water promotes the plastic deformation of polycrystalline anorthite by grain-size-sensitive creep as well as simultaneous reactions. We calculated the strain rate evolution controlled by water diffusion in feldspar aggregates surrounded by a water source. We assumed water diffusion in a dry rock mass with variable sizes. Diffused water weakens a rock mass with time under compressive stress. The calculated strain rate decreased from 10-10 to 10-15 s-1 with an increase in the rock mass size to which water is supplied from < 1 m to 1 km and an increase in the time of water diffusion from < 1 to 10,000 years. This indicates a decrease in the strain rate in a rock mass with increasing deformation via water diffusion.
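    The diffusion-coefficient estimate described above can be illustrated with the standard constant-source solution of the one-dimensional diffusion equation, C(x, t) = C0 * erfc(x / (2*sqrt(D*t))). The sketch below fits that solution to a synthetic water-content profile; the numbers are illustrative and are not the authors' measurements.

```python
# Minimal sketch (synthetic data, not the authors' measurements): fitting the 1-D
# constant-source diffusion solution C(x, t) = C0 * erfc(x / (2*sqrt(D*t))) to a
# water-content profile to recover the diffusion coefficient D.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

t = 3600.0  # assumed duration of diffusion, s (illustrative)

def profile(x, c0, D):
    return c0 * erfc(x / (2.0 * np.sqrt(D * t)))

# Synthetic "measured" profile: C0 = 1500 wt ppm H2O, D = 7e-13 m^2/s, plus noise.
x = np.linspace(0.0, 3e-4, 30)  # distance from the core boundary, m
rng = np.random.default_rng(2)
c_obs = profile(x, 1500.0, 7e-13) + rng.normal(0.0, 30.0, x.size)

(c0_fit, d_fit), _ = curve_fit(profile, x, c_obs, p0=[1000.0, 1e-12])
print(f"C0 = {c0_fit:.0f} wt ppm, D = {d_fit:.2e} m^2/s")
```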

  17. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations.

    PubMed

    Kunz, Cornelia U; Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2017-03-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. © 2016 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
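    The contrast between a blinded pooled estimate and the unblinded within-group estimate of a correlation can be made concrete with a small simulation. The sketch below is illustrative only (bivariate normal data, two equally sized groups) and is not the set of estimators studied in the paper; it shows how a mean difference between groups inflates the naive pooled correlation.

```python
# Minimal simulation sketch (not the paper's code): comparing the naive "blinded"
# correlation, computed by pooling all subjects without group labels, with the
# unblinded within-group estimate, when two measurements per subject are correlated.
import numpy as np

rng = np.random.default_rng(3)
rho, n_per_group, delta = 0.5, 50, 1.0  # true correlation, group size, shift of both means in group B
cov = np.array([[1.0, rho], [rho, 1.0]])

def one_trial(mean_shift):
    a = rng.multivariate_normal([0.0, 0.0], cov, n_per_group)
    b = rng.multivariate_normal([mean_shift, mean_shift], cov, n_per_group)
    pooled = np.vstack([a, b])
    blinded = np.corrcoef(pooled.T)[0, 1]                # ignores group labels
    centred = np.vstack([a - a.mean(0), b - b.mean(0)])  # unblinded: centre each group
    unblinded = np.corrcoef(centred.T)[0, 1]
    return blinded, unblinded

for shift, label in [(0.0, "equal means"), (delta, "different means")]:
    est = np.array([one_trial(shift) for _ in range(2000)])
    print(f"{label}: blinded mean = {est[:, 0].mean():.3f}, unblinded mean = {est[:, 1].mean():.3f}")
```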

  18. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations

    PubMed Central

    Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2016-01-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. PMID:27886393

  19. Methodological reporting of randomized clinical trials in respiratory research in 2010.

    PubMed

    Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce

    2013-09-01

    Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias, due to a lack of adequately reported randomization, and therefore the reporting should be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research in high ranking clinical journals, published in 2010. We assessed the methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top ranking clinical respiratory journals and 5 top ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.

  20. Indicators of quality of antenatal care: a pilot study.

    PubMed

    Vause, S; Maresh, M

    1999-03-01

    To pilot a list of indicators of quality of antenatal care across a range of maternity care settings. For each indicator to determine what is achieved in current clinical practice, to facilitate the setting of audit standards and calculation of appropriate sample sizes for audit. A multicentre retrospective observational study. Nine maternity units in the United Kingdom. 20,771 women with a singleton pregnancy, who were delivered between 1 August 1994 and 31 July 1995. Nine of the eleven suggested indicators were successfully piloted. Two indicators require further development. In seven of the nine hospitals external cephalic version was not commonly performed. There were wide variations in the proportions of women screened for asymptomatic bacteriuria. Screening of women from ethnic minorities for haemoglobinopathy was more likely in hospitals with a large proportion of non-caucasian women. A large number of Rhesus negative women did not have a Rhesus antibody check performed after 28 weeks of gestation and did not receive anti-D immunoglobulin after a potentially sensitising event during pregnancy. As a result of the study appropriate sample sizes for future audit could be calculated. Measuring the extent to which evidence-based interventions are used in routine clinical practice provides a more detailed picture of the strengths and weaknesses in an antenatal service than traditional outcomes such as perinatal mortality rates. Awareness of an appropriate sample size should prevent waste of time and resources on inconclusive audits.
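    Sample size planning for future audits of such indicators often reduces to estimating a proportion to a stated absolute precision. The sketch below uses the standard formula n = z^2 * p * (1 - p) / d^2 with hypothetical inputs; it is not taken from the paper.

```python
# Minimal sketch (standard formula, hypothetical inputs): sample size needed to estimate a
# proportion p with absolute precision d at a given confidence level, as used when planning
# audits of care indicators.
import math
from scipy.stats import norm

def sample_size_for_proportion(p, d, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2.0)
    return math.ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# Hypothetical example: an indicator currently achieved for ~70% of women,
# to be estimated to within +/- 5 percentage points.
print(sample_size_for_proportion(p=0.70, d=0.05))  # ~323 records
```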

  1. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
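    For reference, the method-of-moments (Matheron) empirical semivariogram discussed above averages half the squared differences of all point pairs within each lag bin, which is exactly why heavy outliers destabilize it. The sketch below is illustrative only, using random coordinates and log-normal values rather than the study's throughfall fields.

```python
# Minimal sketch (illustrative, not the study's code): the Matheron method-of-moments
# empirical semivariogram, gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs whose
# separation falls in the lag bin around h.
import numpy as np

def empirical_variogram(coords, values, lag_edges):
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each pair once
    dist, sqdiff = dist[iu], sqdiff[iu]
    gamma = [sqdiff[(dist >= lo) & (dist < hi)].mean()
             for lo, hi in zip(lag_edges[:-1], lag_edges[1:])]
    return np.array(gamma)

rng = np.random.default_rng(4)
coords = rng.uniform(0, 50, (150, 2))                  # 150 collectors on a 50 m plot (illustrative)
values = rng.lognormal(mean=0.0, sigma=0.5, size=150)  # skewed, throughfall-like values
print(empirical_variogram(coords, values, lag_edges=np.arange(0, 30, 5)))
```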

  2. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.

  3. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
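    The logic of checking a stratified design by simulation can be sketched as follows. The strata sizes and density-category proportions below are made up for illustration (they are not the NCSP data); the simulation repeatedly draws a proportionally allocated sample of 4,000 and records the largest absolute error in the estimated category proportions.

```python
# Minimal simulation sketch (made-up proportions, not the NCSP data): repeated stratified
# random sampling with proportional allocation to check how closely a total sample of
# 4,000 reproduces the population distribution of a four-category density variable.
import numpy as np

rng = np.random.default_rng(5)
strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_362}  # illustrative sizes
density_probs = {"metropolitan": [0.08, 0.35, 0.42, 0.15],              # hypothetical category
                 "urban":        [0.10, 0.38, 0.40, 0.12],              # proportions per stratum
                 "rural":        [0.13, 0.42, 0.36, 0.09]}
total_n, n_total_pop = 4000, sum(strata.values())

pop_dist = sum(np.array(density_probs[s]) * n for s, n in strata.items()) / n_total_pop

errors = []
for _ in range(1000):
    est = np.zeros(4)
    for s, n_pop in strata.items():
        n_s = round(total_n * n_pop / n_total_pop)       # proportional allocation
        counts = rng.multinomial(n_s, density_probs[s])
        est += counts * (n_pop / n_s)                    # expand to population scale
    errors.append(np.abs(est / n_total_pop - pop_dist).max())

print("median maximum absolute error in category proportions:", np.median(errors))
```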

  4. Considerations for successful cosmogenic 3He dating in accessory phases

    NASA Astrophysics Data System (ADS)

    Amidon, W. H.; Farley, K. A.; Rood, D. H.

    2008-12-01

    We have been working to develop cosmogenic 3He dating of phases other than the commonly dated olivine and pyroxene, especially apatite and zircon. Recent work by Dunai et al. underscores that cosmogenic 3He dating is complicated by 3He production via 6Li(n,α)3H → 3He. The reacting thermal neutrons can be produced from three distinct sources: nucleogenic processes (3Henuc), muon interactions (3Hemu), and high-energy "cosmogenic" neutrons (3Hecn). Accurate cosmogenic 3He dating requires determination of the relative fractions of Li-derived and spallation-derived 3He. An important complication for the fine-grained phases we are investigating is that both spallation and the 6Li reaction eject high-energy particles, with consequences for redistribution of 3He among phases in a rock. Although shielded samples can be used to estimate 3Henuc, they do not contain the 3Hecn component produced in the near surface. To calculate this component, we propose a procedure in which the bulk rock chemistry, helium closure age, 3He concentration, grain size and Li content of the target mineral are measured in a shielded sample. The average Li content of the adjacent minerals can then be calculated, which in turn allows calculation of the 3Hecn component in surface-exposed samples of the same lithology. If identical grain sizes are used in the shielded and surface-exposed samples, then "effective" Li can be calculated directly from the shielded sample, and it may not be necessary to measure Li at all. To help validate our theoretical understanding of Li-3He production, and to constrain the geologic contexts in which cosmogenic 3He dating with zircon and apatite is likely to be successful, results are presented from four different field locations. For example, results from ~18 Ky old moraines in the Sierra Nevada show that the combination of low Li contents and high closure ages (>50 My) creates a small 3Hecn component (2%) but a large 3Henuc component (40-70%) for zircon and apatite. In contrast, the combination of high Li contents and a young closure age (0.6 My) in rhyolite from the Coso volcanic field leads to a large 3Hecn component (30%) and small 3Henuc component (5%) in zircon. Analysis of samples from a variety of lithologies shows that zircon and apatite tend to be low in Li (1-10 ppm), but are vulnerable to implantation of 3He from adjacent minerals due to their small grain size, especially from minerals like biotite and hornblende. This point is well illustrated by data from both the Sierra Nevada and Coso examples, in which there is a strong correlation between grain size and 3He concentration for zircons due to implantation. In contrast, very large zircons (150>125 um width) obtained from shielded samples of the Shoshone Falls rhyolite (SW Idaho) do not contain a significant implanted component. Thus, successful 3He dating of accessory phases requires low Li content (<10 ppm) in the target mineral and either 1) low Li in adjacent minerals, or 2) the use of large grain sizes (>100 um). In high-Li cases, the fraction of 3Henuc is minimized in samples with young helium closure ages or longer duration of exposure. However, because the 3Hecn/3Hespall ratio is fixed for a given Li content, longer exposure will not reduce the fraction of 3Hecn.

  5. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a given size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wb) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of the particles was calculated using two methods: (1) Tyler sieving and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface area estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, the specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.
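    The three comminution laws tested above can be fitted to energy-size data by least squares. The sketch below uses hypothetical specific-energy values (not the paper's measurements) solely to show the form of the Rittinger, Kick, and Bond equations and how a best-fitting constant is obtained for each.

```python
# Minimal sketch (hypothetical data, not the paper's measurements): fitting the three
# classical comminution laws to specific-energy vs particle-size data by least squares.
#   Rittinger: E = C_R * (1/x2 - 1/x1)
#   Kick:      E = C_K * ln(x1 / x2)
#   Bond:      E = C_B * (1/sqrt(x2) - 1/sqrt(x1))
# where x1 is the feed size and x2 the product size (same length units).
import numpy as np

x1 = np.array([25.4, 10.0, 6.4])  # feed screen sizes, mm (successive grinding)
x2 = np.array([10.0, 6.4, 3.2])   # product screen sizes, mm
E = np.array([3.0, 8.0, 20.0])    # hypothetical specific energies, kWh/t

features = {
    "Rittinger": 1.0 / x2 - 1.0 / x1,
    "Kick": np.log(x1 / x2),
    "Bond": 1.0 / np.sqrt(x2) - 1.0 / np.sqrt(x1),
}
for name, f in features.items():
    c = (f @ E) / (f @ f)              # least-squares slope through the origin
    rss = np.sum((E - c * f) ** 2)
    print(f"{name:9s} constant = {c:7.2f}, residual SS = {rss:6.2f}")
```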

  6. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE PAGES

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao; ...

    2016-01-05

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a given size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wb) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of the particles was calculated using two methods: (1) Tyler sieving and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface area estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, the specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.

  7. Researchers’ Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203
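    The mismatch between intuition and the arithmetic of power is easy to make concrete. Using the standard normal approximation for a two-sample comparison, the required sample size per group is roughly 2*(z_{1-alpha/2} + z_{1-beta})^2 / d^2; the sketch below (illustrative, using Cohen's conventional effect sizes) shows that detecting a small effect at 80% power needs close to 400 subjects per group.

```python
# Minimal sketch (normal approximation): required sample size per group for a two-sample
# comparison with standardized effect size d, n ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2.
# For a "small" effect (d = 0.2) at 80% power this gives roughly 390 subjects per group.
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):  # Cohen's small, medium, large effect sizes
    print(f"d = {d}: n per group ~ {n_per_group(d)}")
```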

  8. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.

  9. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    Summary This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and the associations between quality and a statistically significant weight loss outcome, the strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the utilization of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775

  10. Determination of lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer quantum dots via spectral analysis of optical signature of the Aharonov-Bohm excitons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Haojie; Dhomkar, Siddharth; Roy, Bidisha

    2014-10-28

    For submonolayer quantum dot (QD) based photonic devices, size and density of QDs are critical parameters, the probing of which requires indirect methods. We report the determination of the lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer QDs, based on spectral analysis of the optical signature of Aharonov-Bohm (AB) excitons, complemented by photoluminescence studies, secondary-ion mass spectroscopy, and numerical calculations. Numerical calculations are employed to determine the AB transition magnetic field as a function of the type-II QD radius. The study of four samples grown with different tellurium fluxes shows that the lateral size of the QDs increases by just 50%, even though the tellurium concentration increases 25-fold. Detailed spectral analysis of the emission of the AB exciton shows that the QD radii take on only certain values due to vertical correlation and the stacked nature of the QDs.

  11. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.

  12. MUDMASTER: A Program for Calculating Crystallite Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    USGS Publications Warehouse

    Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size can not be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) solves these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.

  13. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the usage of MCNP code for calculation of Gamma Knife radiation field parameters with a homogenous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on EGS4 and PENELOPE code as well as the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources in the same time. Within each beam, it considers the technical construction of the source, the source holder, collimator system, the spherical phantom, and surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' works except the case of 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influences the calculated results. In general, all the results are dependent on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken about the fluctuations within the plateau, which can influence the normalization, and accuracy in determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement as well as integral doses calculated in small calculation matrix volumes. However, deviations in integral doses up to 50% can be observed for large volumes such as for the total skull volume. The differences observed in treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. With caution about the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.

  14. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  15. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
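    The connection between calculated solvation free energies and the reported partition coefficients is the standard transfer relation log10 P = (dG_solv(water) - dG_solv(cyclohexane)) / (RT * ln 10). The sketch below applies it to a hypothetical solute; the free-energy values are illustrative, not SAMPL5 results.

```python
# Minimal sketch (standard thermodynamic relation, illustrative numbers): the
# cyclohexane/water partition coefficient from solvation free energies,
#   log10 P = (dG_solv(water) - dG_solv(cyclohexane)) / (RT * ln 10),
# i.e. a solute more favourably solvated in cyclohexane has a positive log P.
import math

R = 1.987204e-3  # kcal / (mol K)
T = 298.15       # K

def log_p(dg_water, dg_cyclohexane):
    """Solvation free energies in kcal/mol."""
    return (dg_water - dg_cyclohexane) / (R * T * math.log(10))

# Hypothetical solute: dG_solv = -6.0 kcal/mol in water, -7.5 kcal/mol in cyclohexane.
print(f"log P = {log_p(-6.0, -7.5):.2f}")
```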

  16. 78 FR 74175 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-10

    ... precision requirements or power calculations that justify the proposed sample size, the expected response...: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  17. Free-Energy Fluctuations and Chaos in the Sherrington-Kirkpatrick Model

    NASA Astrophysics Data System (ADS)

    Aspelmeier, T.

    2008-03-01

    The sample-to-sample fluctuations ΔF_N of the free energy in the Sherrington-Kirkpatrick model are shown rigorously to be related to bond chaos. Via this connection, the fluctuations become analytically accessible by replica methods. The replica calculation for bond chaos shows that the exponent μ governing the growth of the fluctuations with system size N, ΔF_N ~ N^μ, is bounded by μ ≤ 1/4.

  18. Convex hull approach for determining rock representative elementary volume for multiple petrophysical parameters using pore-scale imaging and Lattice-Boltzmann modelling

    NASA Astrophysics Data System (ADS)

    Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.

    2017-06-01

    In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
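    The convex-hull idea can be sketched in a few lines: for each sub-sample size, collect the (porosity, log10 permeability) pairs from many sub-volumes and measure the area of their 2D convex hull, which should shrink as the sub-sample size approaches the REV. The values below are synthetic placeholders, not the imaged-rock results; note that scipy's ConvexHull reports the 2D area in its volume attribute.

```python
# Minimal sketch (synthetic values, not the imaged rocks): quantifying REV convergence as
# the area of the convex hull of (porosity, log10 permeability) points computed from
# sub-samples of increasing size.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)

def hull_area(subsample_size, n_subsamples=30):
    # Synthetic stand-in: scatter of the two parameters decreases with sub-sample size.
    spread = 1.0 / np.sqrt(subsample_size)
    phi = 0.20 + spread * rng.normal(0, 0.02, n_subsamples)   # porosity
    logk = 2.5 + spread * rng.normal(0, 0.15, n_subsamples)   # log10 permeability (mD)
    return ConvexHull(np.column_stack([phi, logk])).volume    # .volume is the area in 2D

for size in (50, 100, 200, 400):  # sub-sample edge length, voxels (illustrative)
    print(f"{size}^3 voxels: hull area = {hull_area(size):.5f}")
```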

  19. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

    Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is desired and fluctuations of the size around the pre-chosen nominal level are acceptable, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  20. Far Field Modeling Methods For Characterizing Surface Detonations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A.

    2015-10-08

    Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations indicating that, in many terrains and atmospheric conditions, the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.

  1. Calculation and experimental determination of the geometric parameters of the coatings by laser cladding

    NASA Astrophysics Data System (ADS)

    Birukov, V. P.; Fichkov, A. A.

    2017-12-01

    In the present work, experiments on laser cladding of Fe-B-Cr-6-2 powder onto samples of steel 20 were performed. Metallographic studies were carried out on the geometric parameters of the deposited layers and the depth of the heat-affected zone (HAZ). Using the method of a full factorial experiment (FFE), mathematical dependences of the geometrical dimensions of the deposited layers on the processing modes were obtained. The deviation of the calculated values from the experimental data does not exceed 3%.

  2. The study of the effect of aluminum powders dispersion on the oxidation and kinetic characteristics

    NASA Astrophysics Data System (ADS)

    Gorbenko, T. I.; Gorbenko, M. V.; Orlova, M. P.; Volkov, S. A.

    2017-11-01

    Differential-scanning calorimetry (DSC) and thermogravimetric analysis (TG) were used to study micro-sized aluminum powder ASD-4 and nano-sized powder Alex. The dependence of the oxidation process on the dispersion of the sample particles is shown. The influence of thermogravimetric conditions on the thermal regime of the process was considered, and its kinetic parameters were determined. Calculations of the activation energy and the pre-exponential factor were carried out.

  3. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
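    The optimal information size referred to above is conventionally taken as the sample size an adequately powered single trial would need. The sketch below computes it with the standard two-proportion formula for a given control group risk and relative risk reduction; the inputs are illustrative, not the simulation settings of the paper.

```python
# Minimal sketch (standard two-proportion formula): the optimal information size of a
# meta-analysis taken as the total sample size a single adequately powered trial would need
# to detect a relative risk reduction RRR from a control group risk p_control.
import math
from scipy.stats import norm

def optimal_information_size(p_control, rrr, alpha=0.05, power=0.90):
    p_treat = p_control * (1.0 - rrr)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_group = z ** 2 * (p_control * (1 - p_control) + p_treat * (1 - p_treat)) / (p_control - p_treat) ** 2
    return 2 * math.ceil(n_per_group)

# Illustrative example: 10% control group risk, 10% relative risk reduction.
print(optimal_information_size(p_control=0.10, rrr=0.10))  # total patients required
```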

  4. Geochemistry of sediments in the Northern and Central Adriatic Sea

    NASA Astrophysics Data System (ADS)

    De Lazzari, A.; Rampazzo, G.; Pavoni, B.

    2004-03-01

    Major, minor and trace elements, loss on ignition, specific surface area, quantities of calcite and dolomite, qualitative mineralogical composition, grain-size distribution and organic micropollutants (PAH, PCB, DDT) were determined on surficial marine sediments sampled during the 1990 ASCOP (Adriatic Scientific Cooperative Program) cruise. Mineralogical composition and carbonate content of the samples were found to be comparable with data previously reported in the literature, whereas the geochemical composition and distribution of major, minor and trace elements for samples in international waters and in the central basin have never been reported before. The large amount of information contained in the variables of different origin has been processed by means of a comprehensive approach which establishes the relations among the components through the mathematical-statistical calculation of principal components (factors). These account for the major part of the data variance, losing only marginal parts of the information, and are independent of the units of measure. The sample descriptors concerning natural components and contamination load are discussed by means of a statistical model based on an R-mode factor analysis calculating four significant factors which explain 86.8% of the total variance and represent important relationships between grain size, mineralogy, geochemistry and organic micropollutants. A description and an interpretation of the factor compositions are given on the basis of pollution inputs, basin geology and hydrodynamics. The areal distribution of the factors showed that it is the fine grain-size fraction, with oxides and hydroxides of colloidal origin, which is the main means of transport and thus the principal link between chemical, physical and granulometric elements in the Adriatic.

  5. Fabrication of NdCeCuO and effects of binding agents on the growth, micro-structural and electrical properties

    NASA Astrophysics Data System (ADS)

    Altin, S.; Aksan, M. A.; Turkoglu, S.; Yakinci, M. E.

    2011-12-01

    NdCeCuO superconducting samples were fabricated using ethyl alcohol, acetone and ethylenediaminetetraacetic acid (EDTA) as binding agents. To evaporate the binding agents, the samples were heat treated at 1050 °C for 24 h and then at 950 °C for 6-48 h under an argon atmosphere to obtain the superconducting phase. The best superconducting performance was found in the sample heat treated at 1050 °C for 24 h and then at 950 °C for 12 h, fabricated using acetone as the binding agent. The Tc and T0 values were found to be ∼25 K and 23.4 K, respectively. Grain size in the fabricated samples was calculated using the Scherrer equation and SEM data. It was found that grain size strongly depends on the binding agents and heat treatment conditions. Some cracks and voids on the surfaces of the samples were observed, which influence the superconducting and electrical transport properties of the samples.
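    Grain (crystallite) size estimation from X-ray diffraction line broadening, as invoked above, follows the Scherrer relation D = K*lambda/(beta*cos(theta)). The sketch below uses illustrative peak parameters (not the paper's diffractograms) and omits the instrumental broadening correction.

```python
# Minimal sketch (standard Scherrer relation, illustrative peak values): mean crystallite
# size D = K * lambda / (beta * cos(theta)), with beta the peak full width at half maximum
# in radians (instrumental broadening correction omitted) and shape factor K ~ 0.9.
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))  # crystallite size in nm

# Hypothetical Cu K-alpha peak at 2-theta = 32.5 deg with 0.25 deg FWHM.
print(f"D ~ {scherrer_size(32.5, 0.25):.0f} nm")
```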

  6. 76 FR 61360 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-04

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Activities: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative... Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork...

  7. 77 FR 70780 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... notice. Proposed Project Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork...

  8. 76 FR 35069 - Agency Information Collection Activities; Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-15

    ... precision requirements or power calculations that justify the proposed sample size, the expected response...; Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  9. 78 FR 40729 - Agency Information Collection Activities; Proposed Collection; Comment Request: Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Qualitative Feedback on Agency Service Delivery AGENCY: Washington Headquarters Service (WHS), DOD. ACTION: 30... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  10. 76 FR 17861 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  11. 76 FR 24920 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-03

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  12. 78 FR 26033 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-03

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Collection of Qualitative Feedback on Agency Service Delivery--NEW--Epidemiology and Analysis Program Office... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under...

  13. 77 FR 27062 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Project NIOSH Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery--NEW... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  14. 77 FR 52708 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential...: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... Information Collection request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  15. Methodological reporting quality of randomized controlled trials: A survey of seven core journals of orthopaedics from Mainland China over 5 years following the CONSORT statement.

    PubMed

    Zhang, J; Chen, X; Zhu, Q; Cui, J; Cao, L; Su, J

    2016-11-01

    In recent years, the number of randomized controlled trials (RCTs) in the field of orthopaedics has been increasing in Mainland China. However, RCTs are prone to bias if they lack methodological quality. We therefore performed a survey of RCTs to assess: (1) the quality of RCTs in the field of orthopaedics in Mainland China; and (2) whether there is a difference between the core Chinese orthopaedic journals and Orthopaedics & Traumatology: Surgery & Research (OTSR). This research aimed to evaluate the methodological reporting quality, according to the CONSORT statement, of RCTs in seven key orthopaedic journals published in Mainland China over the 5 years from 2010 to 2014. All articles were hand searched in the Chongqing VIP database between 2010 and 2014. Studies were considered eligible if the words "random", "randomly", "randomization" or "randomized" were used to describe the allocation method. Trials involving animals or cadavers, trials published as abstracts or case reports, trials dealing with subgroup analyses, and trials without outcomes were excluded. In addition, eight articles selected from OTSR between 2010 and 2014 were included for comparison. The identified RCTs were analyzed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT) checklist, covering sample size calculation, allocation sequence generation, allocation concealment, blinding and handling of dropouts. A total of 222 RCTs were identified in the seven core orthopaedic journals. No trials reported an adequate sample size calculation, 74 (33.4%) reported adequate allocation sequence generation, 8 (3.7%) reported adequate allocation concealment, 18 (8.1%) reported adequate blinding and 16 (7.2%) reported handling of dropouts. In OTSR, 1 (12.5%) trial reported an adequate sample size calculation, 4 (50.0%) reported adequate allocation sequence generation, 1 (12.5%) reported adequate allocation concealment, 2 (25.0%) reported adequate blinding and 5 (62.5%) reported handling of dropouts. There were statistically significant differences in sample size calculation and handling of dropouts between papers from Mainland China and OTSR (P<0.05). The findings of this study show that the methodological reporting quality of RCTs in the seven core orthopaedic journals from Mainland China is far from satisfactory and needs further improvement to meet the standards of the CONSORT statement. Level III, case-control study. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  16. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratios; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
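    The empirical P values in point (iii) come from comparing the observed statistic with a bootstrap null distribution. The sketch below illustrates the general idea using a pooled-resampling null and a simple mean-difference statistic on made-up allele counts; it is not the power-divergence procedure used in the study.

```python
import numpy as np

def bootstrap_null_p_value(stat_fn, group_a, group_b, n_boot=10_000, seed=0):
    """Empirical P value for a two-group association statistic when small,
    sparse samples make the asymptotic distribution unreliable.

    Bootstrap samples are drawn under the null hypothesis by resampling both
    groups (with replacement) from the pooled data; the observed statistic is
    then compared with the resulting null distribution.
    """
    rng = np.random.default_rng(seed)
    observed = stat_fn(group_a, group_b)
    pooled = np.concatenate([group_a, group_b])
    null_stats = np.empty(n_boot)
    for b in range(n_boot):
        a_star = rng.choice(pooled, size=len(group_a), replace=True)
        b_star = rng.choice(pooled, size=len(group_b), replace=True)
        null_stats[b] = stat_fn(a_star, b_star)
    return (np.sum(null_stats >= observed) + 1) / (n_boot + 1)

# Hypothetical allele counts for small-body and large-body groups
stat = lambda a, b: abs(a.mean() - b.mean())
a = np.array([0, 1, 1, 2, 0, 1])
b = np.array([2, 2, 1, 2, 2, 1])
print(bootstrap_null_p_value(stat, a, b))
```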

  17. Sample entropy applied to the analysis of synthetic time series and tachograms

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.

    2017-01-01

    Entropy is a non-linear analysis method that provides an estimate of the irregularity of a system; however, there are different types of computational entropy, which were considered and tested in order to obtain an index of signal complexity that takes into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for generating fractal time series with a given value of β was used to characterize the different entropy algorithms. Most of the algorithms showed a significant variation with series length, which could be counterproductive for the study of real signals of different lengths. The method chosen was sample entropy, which is largely independent of series length. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
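    Sample entropy SampEn(m, r) is defined as -ln(A/B), where B and A count template pairs of length m and m+1 whose Chebyshev distance is below the tolerance r, excluding self-matches. The following is a minimal, unoptimized sketch of that calculation on a synthetic stand-in for an RR-interval tachogram; the parameter choices (m = 2, r = 0.2·SD) are common conventions, not values taken from the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series; r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All templates of the given length, compared pairwise (no self-matches)
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Noisy sine as a stand-in for an interbeat-interval series
rng = np.random.default_rng(0)
rr = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.standard_normal(1000)
print(round(sample_entropy(rr, m=2, r_factor=0.2), 3))
```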

  18. Particle size distribution and perchlorate levels in settled dust from urban roads, parks, and roofs in Chengdu, China.

    PubMed

    Li, Yiwen; Shen, Yang; Pi, Lu; Hu, Wenli; Chen, Mengqin; Luo, Yan; Li, Zhi; Su, Shijun; Ding, Sanglan; Gan, Zhiwei

    2016-01-01

    A total of 27 settled dust samples were collected from urban roads, parks, and roofs in Chengdu, China to investigate the particle size distribution and perchlorate levels in different size fractions. Fine particle size fractions (<250 μm) were the dominant component of the settled dust samples, with mean percentages of 80.2%, 69.5%, and 77.2% for the urban roads, roofs, and parks, respectively. Perchlorate was detected in all of the size-fractionated dust samples, with concentrations ranging from 73.0 to 6160 ng g(-1), and the median perchlorate levels increased with decreasing particle size. The perchlorate level in the finest fraction (<63 μm) was significantly higher than those in the coarser fractions. To our knowledge, this is the first report on perchlorate concentrations in different particle size fractions. The calculated perchlorate loadings revealed that perchlorate was mainly associated with finer particles (<125 μm). An exposure assessment indicated that exposure to perchlorate via settled road dust intake is safe for both children and adults in Chengdu, China. However, because perchlorate exists mainly in fine particles, it may be transferred into surface water and the atmosphere by runoff, wind erosion or traffic emission, and could thus act as an important perchlorate pollution source for the indoor environment; this merits further study.

  19. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    PubMed

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most of the UK hardwood species examined, tree mortality is best described as constant between years, size-dependent at early life stages and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size and including explanatory covariates.

  20. Impact of asymmetrical flow field-flow fractionation on protein aggregates stability.

    PubMed

    Bria, Carmen R M; Williams, S Kim Ratanathanawongs

    2016-09-23

    The impact of asymmetrical flow field-flow fractionation (AF4) on protein aggregate species is investigated with the aid of multiangle light scattering (MALS) and dynamic light scattering (DLS). The experimental parameters probed in this study include aggregate stability in different carrier liquids, shear stress (related to sample injection), sample concentration (during AF4 focusing), and sample dilution (during separation). Two anti-streptavidin (anti-SA) IgG1 samples composed of low and high molar mass (M) aggregates were subjected to different AF4 conditions. Aggregates suspended and separated in phosphate buffer were observed to dissociate almost entirely to monomer. However, aggregates in citric acid buffer were partially stable, with dissociation to 25% and 5% monomer for the low and high M samples, respectively. These results demonstrate that different carrier liquids change aggregate stability and that low M aggregates can behave differently from their larger counterparts. Increasing the duration of the AF4 focusing step produced no significant changes in the percent monomer, percent aggregates, or average M in either sample. Syringe-induced shear related to sample injection resulted in an increase in hydrodynamic diameter (dh) as measured by batch-mode DLS. Finally, calculations showed that dilution during AF4 separation is significantly lower than in size exclusion chromatography, with dilution occurring mainly at the AF4 channel outlet and not during the separation. This has important ramifications when analyzing aggregates that rapidly dissociate (<∼2 s) upon dilution, as the size calculated by AF4 theory may be more accurate than that measured by online DLS. Experimentally, the dh values determined by online DLS generally agreed with AF4 theory, except for the better-retained larger aggregates, for which DLS showed smaller sizes. These results highlight the importance of using AF4 retention theory to understand the impacts of dilution on analytes. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Synthesis characterization and luminescence studies of gamma irradiated nanocrystalline yttrium oxide.

    PubMed

    Shivaramu, N J; Lakshminarasappa, B N; Nagabhushana, K R; Singh, Fouran

    2016-02-05

    Nanocrystalline Y2O3 was synthesized by a solution combustion technique using urea and glycine as fuels. The X-ray diffraction (XRD) pattern of the as-prepared sample shows an amorphous nature, while the annealed samples show a cubic structure. The average crystallite size, calculated using Scherrer's formula, is found to be in the range 14-30 nm for samples synthesized using urea and 15-20 nm for samples synthesized using glycine. Field emission scanning electron microscopy (FE-SEM) images of the Y2O3 samples annealed at 1173 K show well separated spherical particles, and the average particle size is found to be in the range 28-35 nm. Fourier transform infrared (FTIR) and Raman spectroscopy reveal the stretching of the Y-O bond. Electron spin resonance (ESR) shows V(-) centers, O2(-) and Y(2+) defects. A broad photoluminescence (PL) emission with a peak at ~386 nm is observed when the sample is excited at 252 nm. The thermoluminescence (TL) properties of γ-irradiated Y2O3 nanopowder were studied at a heating rate of 5 K s(-1). The samples prepared using urea show a prominent and well resolved glow peak at ~383 K and a weak one at ~570 K. It is also found that the TL glow peak intensity (I(m1)) at ~383 K increases with γ-dose up to ~6.0 kGy and then decreases with further increase in dose. However, the Y2O3 prepared using glycine shows prominent TL glow peaks at 396 K and 590 K. Of the two fuels, the urea-derived Y2O3 shows simpler and better resolved TL glow curves, which might be due to the fuel and hence a particle-size effect. The kinetic parameters were calculated by Chen's glow curve peak shape method and the results are discussed in detail. Copyright © 2015. Published by Elsevier B.V.

  2. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and in how each formula is applied. These variations complicate the interpretation of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about depression and its treatment. Readability estimates from the common formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations, and recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, estimates from an individual readability formula varied by up to 6 reading grade levels. The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting of reading grade level estimates from any formula should be accompanied by information about the word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
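    The two formulas named above are simple functions of word, sentence and syllable counts, and the counting rules are exactly where software packages diverge, which is one source of the grade-level spread reported here. The sketch below shows the standard Flesch-Kincaid and SMOG grade formulas applied to illustrative counts for a hypothetical ~300-word leaflet sample; the counts themselves are assumptions, not data from the study.

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllabic_words, sentences):
    """SMOG grade level; polysyllabic words have 3 or more syllables."""
    return 1.043 * (polysyllabic_words * (30.0 / sentences)) ** 0.5 + 3.1291

# Illustrative counts for a short patient-education text sample
print(round(flesch_kincaid_grade(words=300, sentences=25, syllables=450), 1))
print(round(smog_grade(polysyllabic_words=30, sentences=25), 1))
```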

  3. Experiential Teaching Increases Medication Calculation Accuracy Among Baccalaureate Nursing Students.

    PubMed

    Hurley, Teresa V

    Safe medication administration is an international goal. Calculation errors cause patient harm despite education. The research purpose was to evaluate the effectiveness of an experiential teaching strategy to reduce errors in a sample of 78 baccalaureate nursing students at a Northeastern college. A pretest-posttest design with random assignment into equal-sized groups was used. The experiential strategy was more effective than the traditional method (t = -0.312, df = 37, p = .004, 95% CI) with a reduction in calculation errors. Evaluations of error type and teaching strategies are indicated to facilitate course and program changes.

  4. Particle size distribution of distillers dried grains with solubles (DDGS) and relationships to compositional and color properties.

    PubMed

    Liu, Keshun

    2008-11-01

    Eleven distillers dried grains with solubles (DDGS), processed from yellow corn, were collected from different ethanol processing plants in the US Midwest. The particle size distribution (PSD) by mass of each sample was determined using a series of six selected US standard sieves (Nos. 8, 12, 18, 35, 60, and 100) and a pan. The original samples and sieve-sized fractions were measured for surface color and contents of moisture, protein, oil, ash, and starch. Total carbohydrate (CHO) and total non-starch CHO were also calculated. The results show a great variation in composition and color among DDGS from different plants. Surprisingly, a few DDGS samples contained unusually high amounts of residual starch (11.1-17.6%, dry matter basis, vs. about 5% for the rest), presumably resulting from modified processing methods. Particle size of DDGS varied greatly within a sample, and PSD varied greatly among samples. The 11 samples had a mean geometric mean diameter (dgw) of particles of 0.660 mm and a mean geometric standard deviation (Sgw) of particle diameters by mass of 0.440 mm. The majority had a unimodal PSD, with a mode in the size class between 0.5 and 1.0 mm. Although PSD and color parameters had little correlation with the composition of whole DDGS samples, the distribution of nutrients as well as color attributes correlated well with PSD. In the sieved fractions, protein content and the L and a color values correlated negatively with particle size, while oil and total CHO contents correlated positively. It is highly feasible to fractionate DDGS for compositional enrichment based on particle size, and the breadth of the PSD can serve as an index of the potential for DDGS fractionation. This information should be a vital addition to the quality and baseline data for DDGS.
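    The dgw and Sgw reported above are the standard log-based summaries of a sieve analysis (as in ASAE S319-style calculations). The following is a rough sketch of that computation under those assumptions, using made-up masses and nominal openings for the sieve stack listed in the record; the pan fraction is ignored for brevity, so this is illustrative only.

```python
import numpy as np

def geometric_mean_diameter(sieve_openings_mm, mass_retained_g):
    """Geometric mean diameter (dgw) and geometric standard deviation (Sgw)
    of particles by mass from sieve data.

    sieve_openings_mm : adjacent sieve openings, largest to smallest
                        (one more entry than mass_retained_g)
    mass_retained_g   : mass retained between adjacent sieves
    """
    d = np.asarray(sieve_openings_mm, dtype=float)
    w = np.asarray(mass_retained_g, dtype=float)
    d_mid = np.sqrt(d[:-1] * d[1:])                # geometric mid-size of each interval
    log_dgw = np.sum(w * np.log(d_mid)) / w.sum()
    dgw = np.exp(log_dgw)
    s_log = np.sqrt(np.sum(w * (np.log(d_mid) - log_dgw) ** 2) / w.sum())
    sgw = 0.5 * dgw * (np.exp(s_log) - np.exp(-s_log))
    return dgw, sgw

# Nominal openings (mm) for US sieves Nos. 8-100 and hypothetical masses (g)
openings = [2.36, 1.70, 1.00, 0.50, 0.25, 0.15, 0.075]
masses = [5, 12, 30, 28, 15, 10]
print([round(v, 3) for v in geometric_mean_diameter(openings, masses)])
```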

  5. The effect of doped zinc on the structural properties of nano-crystalline (Se0.8Te0.2)100-xZnx

    NASA Astrophysics Data System (ADS)

    Kumar, Arun; Singh, Harkawal; Gill, P. S.; Goyal, Navdeep

    2016-05-01

    The effect of metallic zinc (Zn) on the structural properties of (Se0.8Te0.2)100-xZnx (x = 0, 2, 6, 8, 10) samples was analyzed by X-ray diffraction (XRD). The presence of sharp peaks in the XRD patterns confirmed the crystalline nature of the samples, which were indexed to an orthorhombic crystal structure. The XRD studies indicate that the average particle size of all the samples is about 46.29 nm, which is less than 100 nm, so the particles have a strong tendency toward agglomeration. The Williamson-Hall plot method was used to evaluate the lattice strain. The dislocation density and the number of unit cells of the samples were calculated and show an inverse relation with each other. The morphology index derived from the FWHM of the XRD data shows a direct relationship with the particle size.

  6. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Solution and Aging of MAR-M246 Nickel-Based Superalloy

    NASA Astrophysics Data System (ADS)

    Baldan, Renato; da Silva, Antonio Augusto Araújo Pinto; Nunes, Carlos Angelo; Couto, Antonio Augusto; Gabriel, Sinara Borborema; Alkmin, Luciano Braga

    2017-02-01

    Solution and aging heat treatments play a key role in the application of superalloys. The aim of this work was to evaluate the microstructure of the MAR-M246 nickel-based superalloy solutioned at 1200 and 1250 °C for 330 min and aged at 780, 880 and 980 °C for 5, 20 and 80 h. The γ' solvus, solidus and liquidus temperatures were calculated with the aid of the JMatPro software (Ni database). The as-cast and heat-treated samples were characterized by SEM/EDS and SEM-FEG. The size of the γ' precipitates in the aged samples was measured and compared with JMatPro simulations. The results show that the sample solutioned at 1250 °C for 330 min had a very homogeneous γ matrix with carbides and cubic γ' precipitates uniformly distributed. The mean γ' size of samples aged at 780 and 880 °C for 5, 20 and 80 h did not differ significantly from that of the solutioned sample. However, a significant increase in γ' particle size was observed at 980 °C, as evidenced by the large mean size of these particles after 80 h of aging.

  8. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations were written to augment the program simulation. The program codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
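    For the one-sided case, the connection to the noncentral t-distribution mentioned above has a direct form: k = t'_{γ; n−1; δ}/√n with noncentrality δ = z_P √n, where P is the population proportion to be covered and γ the confidence level. The sketch below evaluates that expression (the two-sided case needs the kind of numerical integration and root-solving the report describes and is not shown).

```python
import numpy as np
from scipy.stats import nct, norm

def one_sided_tolerance_factor(n, coverage=0.90, confidence=0.95):
    """Exact one-sided normal tolerance factor k, so that x_bar + k*s covers
    at least `coverage` of the population with probability `confidence`:
        k = t'_{confidence; n-1; delta} / sqrt(n),  delta = z_coverage * sqrt(n)
    """
    delta = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

# Small-sample k values for 90% coverage at 95% confidence
for n in (5, 10, 20, 50):
    print(n, round(one_sided_tolerance_factor(n), 3))
```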

  9. Stratospheric CCN sampling program

    NASA Technical Reports Server (NTRS)

    Rogers, C. F.

    1981-01-01

    When Mt. St. Helens produced several major eruptions in the late spring of 1980, there was strong interest in characterizing the cloud condensation nuclei (CCN) activity of the material that was injected into the troposphere and stratosphere. The scientific value of CCN measurements is twofold: CCN counts may be applied directly to calculations of the interaction (enlargement) of the aerosol at atmospherically realistic relative humidities or supersaturations; and, if the chemical constituency of the aerosol can be assumed, the number-versus-critical-supersaturation spectrum may be converted into a dry aerosol size spectrum covering a size region not readily measured by other methods. The sampling method is described along with the instrumentation used in the experiments.

  10. Crystallite size strain analysis of nanocrystalline La0.7Sr0.3MnO3 perovskite by Williamson-Hall plot method

    NASA Astrophysics Data System (ADS)

    Kumar, Dinesh; Verma, Narendra Kumar; Singh, Chandra Bhal; Singh, Akhilesh Kumar

    2018-04-01

    Nanocrystalline Sr-doped LaMnO3 (La0.7Sr0.3MnO3 = LSMO) perovskite manganites with different crystallite sizes were synthesized using the nitrate-glycine auto-combustion method. The phase purity of the manganites was checked by X-ray diffraction (XRD) measurements. The XRD patterns of the samples reveal that La0.7Sr0.3MnO3 crystallizes in a rhombohedral crystal structure with space group R-3c. The size dependence of the structural lattice parameters has been investigated with the help of Rietveld refinement. The structural parameters increase as a function of crystallite size. The crystallite size and the internal lattice strain as a function of crystallite size have been calculated using the Williamson-Hall plot method.
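    This record, like the (Se,Te)-Zn record above, relies on the Williamson-Hall construction, β cos θ = Kλ/D + 4ε sin θ, in which a straight-line fit over several diffraction peaks separates size broadening (intercept) from strain broadening (slope). The sketch below performs that fit on illustrative peak positions and widths; the numbers are not data from either study, and instrumental broadening is assumed to have been removed beforehand.

```python
import numpy as np

def williamson_hall(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Williamson-Hall analysis: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).

    Returns (crystallite size D in nm, microstrain eps) from a linear fit of
    beta*cos(theta) against 4*sin(theta) over several XRD peaks.
    """
    theta = np.radians(np.asarray(two_theta_deg) / 2)
    beta = np.radians(np.asarray(fwhm_deg))   # sample broadening only, in radians
    y = beta * np.cos(theta)
    x = 4 * np.sin(theta)
    slope, intercept = np.polyfit(x, y, 1)
    return k * wavelength_nm / intercept, slope

# Illustrative (2-theta, FWHM) peak list for a nanocrystalline perovskite
two_theta = [23.0, 32.7, 40.2, 46.9, 58.2]
fwhm =      [0.45, 0.50, 0.55, 0.60, 0.70]
size_nm, strain = williamson_hall(two_theta, fwhm)
print(f"D ≈ {size_nm:.1f} nm, strain ≈ {strain:.4f}")
```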

  11. Efficacy of a strategy for implementing a guideline for the control of cardiovascular risk in a primary healthcare setting: the SIRVA2 study a controlled, blinded community intervention trial randomised by clusters

    PubMed Central

    2011-01-01

    This work describes the methodology used to assess a strategy for implementing clinical practice guidelines (CPG) for cardiovascular risk control in a health area of Madrid. Background The results on clinical practice of introducing CPGs have been little studied in Spain. The strategy used to implement a CPG is known to influence its final use. Strategies based on the involvement of opinion leaders and that are easily executed appear to be among the most successful. Aim The main aim of the present work was to compare the effectiveness of two strategies for implementing a CPG designed to reduce cardiovascular risk in the primary healthcare setting, measured in terms of improvements in the recording of calculated cardiovascular risk or specific risk factors in patients' medical records, the control of cardiovascular risk factors, and the incidence of cardiovascular events. Methods This study involved a controlled, blinded community intervention in which the 21 health centres of the Number 2 Health Area of Madrid were randomly assigned by clusters to be involved in either a proposed CPG implementation strategy to reduce cardiovascular risk, or the normal dissemination strategy. The study subjects were patients ≥ 45 years of age whose health cards showed them to belong to the studied health area. The main variable examined was the proportion of patients whose medical histories included the calculation of their cardiovascular risk or that explicitly mentioned the presence of variables necessary for its calculation. The sample size was calculated for a comparison of proportions with alpha = 0.05 and beta = 0.20, and assuming that the intervention would lead to a 15% increase in the measured variables. Corrections were made for the design effect, assigning a sample size to each cluster proportional to the size of the population served by the corresponding health centre, and assuming losses of 20%. This demanded a final sample size of 620 patients. Data were analysed using summary measures for each cluster, both in making estimates and for hypothesis testing. Analysis of the variables was made on an intention-to-treat basis. Trial Registration ClinicalTrials.gov: NCT01270022 PMID:21504570
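    The sample size statement above (comparison of proportions, alpha = 0.05, beta = 0.20, a design-effect correction for cluster randomisation, and 20% anticipated losses) can be reproduced in outline as follows. The baseline proportion, cluster size and ICC used here are hypothetical placeholders, since the record does not report them; the sketch shows only the structure of the calculation, not the trial's actual numbers.

```python
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two independent proportions
    (normal approximation, two-sided alpha)."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    return ((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

# Illustrative values only: baseline recording proportion 0.30, target 0.45
n = n_two_proportions(0.30, 0.45)
deff = 1 + (30 - 1) * 0.02          # hypothetical cluster size 30, ICC 0.02
n_adjusted = n * deff / (1 - 0.20)  # inflate for design effect, then 20% losses
print(round(n), round(n_adjusted))
```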

  12. Calculating solar photovoltaic potential on residential rooftops in Kailua Kona, Hawaii

    NASA Astrophysics Data System (ADS)

    Carl, Caroline

    As carbon-based fossil fuels become increasingly scarce, renewable energy sources are coming to the forefront of policy discussions around the globe. As a result, the State of Hawaii has implemented aggressive goals to achieve energy independence by 2030. Renewable electricity generation using solar photovoltaic technologies plays an important role in these efforts. This study utilizes geographic information systems (GIS) and Light Detection and Ranging (LiDAR) data with statistical analysis to identify how much solar photovoltaic potential exists for residential rooftops in the town of Kailua Kona on Hawaii Island. This study helps to quantify the magnitude of possible solar photovoltaic (PV) potential for Solar World SW260 monocrystalline panels on residential rooftops within the study area. Three main areas were addressed in the execution of this research: (1) modeling solar radiation, (2) estimating available rooftop area, and (3) calculating PV potential from incoming solar radiation. High resolution LiDAR data and Esri's solar modeling tools were utilized to calculate incoming solar radiation on a sample set of digitized rooftops. Photovoltaic potential for the sample set was then calculated with the equations developed by Suri et al. (2005). Sample set rooftops were analyzed using a statistical model to identify the correlation between rooftop area and lot size. Least squares multiple linear regression analysis was performed to identify the influence of slope, elevation, rooftop area, and lot size on the modeled PV potential values. The equations built from these statistical analyses of the sample set were applied to the entire study region to calculate total rooftop area and PV potential. The statistical analysis for the total study area estimates the photovoltaic electric energy generation potential of rooftops at approximately 190,000,000 kWh annually. This is approximately 17 percent of the total electricity the utility provided to the entire island in 2012. Based on these findings, full rooftop PV installations on the 4,460 study area homes could provide enough energy to power over 31,000 homes annually. The methods developed here suggest a means to calculate rooftop area and PV potential in a region with limited available data. The use of LiDAR point data offers a major opportunity for future research in both automating rooftop inventories and calculating incoming solar radiation and PV potential for homeowners.
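    A generic way to turn modeled rooftop insolation into an annual energy estimate is to scale the insolation by usable roof area, module efficiency and a performance ratio. The sketch below uses that generic form with placeholder parameter values; it is not the Suri et al. (2005) formulation used in the study, and none of the numbers are taken from it.

```python
def annual_pv_energy_kwh(rooftop_area_m2, insolation_kwh_m2_yr,
                         usable_fraction=0.7, module_efficiency=0.16,
                         performance_ratio=0.75):
    """Rough annual PV yield estimate (kWh) from modeled rooftop insolation.

    All parameter values here are generic placeholders for illustration.
    """
    return (rooftop_area_m2 * usable_fraction * insolation_kwh_m2_yr
            * module_efficiency * performance_ratio)

# Example: a 150 m^2 roof in a high-insolation area (~1900 kWh/m^2/yr)
print(round(annual_pv_energy_kwh(150, 1900)), "kWh per year")
```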

  13. POWER AND SAMPLE SIZE CALCULATIONS FOR LINEAR HYPOTHESES ASSOCIATED WITH MIXTURES OF MANY COMPONENTS USING FIXED-RATIO RAY DESIGNS

    EPA Science Inventory

    Response surface methodology, often supported by factorial designs, is the classical experimental approach that is widely accepted for detecting and characterizing interactions among chemicals in a mixture. In an effort to reduce the experimental effort as the number of compound...

  14. 76 FR 36139 - Agency Information Collection Activities: Submission for OMB Review; Comment Request; Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-21

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Federal Emergency...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to the Office of...

  15. 76 FR 29763 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY... ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery.'' Also include...

  16. 77 FR 63798 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-17

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Clearance for the Collection of Qualitative Feedback on the Service Delivery of the Consumer Financial... title, ``Generic Clearance for the Collection of Qualitative Feedback on the Service Delivery of the...

  17. 76 FR 23536 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Qualitative Feedback on Agency Service Delivery April 22, 2011. AGENCY: Department of Agriculture (USDA... Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork Reduction Act (PRA...

  18. 76 FR 25693 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  19. 76 FR 37825 - Agency Information Collection Activities; Generic Clearance for the Collection of Qualitative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Activities; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: U...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  20. 76 FR 13020 - Agency Information Collection Activities: Comment Request; Generic Clearance for the Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Department of... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  1. 76 FR 79702 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery AGENCY: National Institute of Mental Health (NIMH), HHS... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  2. 76 FR 13977 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... of Qualitative Feedback on Agency Service Delivery AGENCY: Office of the Secretary/Office of the...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for...

  3. 77 FR 72361 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery SUMMARY: As part of a Federal Government-wide effort to... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  4. 76 FR 15027 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-18

    ... clustering), the precision requirements or power calculations that justify the proposed sample size, the...; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under...

  5. 76 FR 19826 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-08

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  6. 76 FR 10939 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-28

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... of Qualitative Feedback on Agency Service Delivery AGENCY: Federal Railroad Administration (FRA... Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork Reduction Act (PRA...

  7. 78 FR 44099 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-23

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Qualitative Feedback on Agency Service Delivery'' for approval under the Paperwork Reduction Act (PRA) (44 U.S...

  8. 75 FR 80542 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential...; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  9. 76 FR 31383 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Peace... Qualitative Feedback on Agency Service Delivery '' to OMB for approval under the Paperwork Reduction Act (PRA...

  10. 76 FR 13019 - Agency Information Collection Activities: Comment Request; Generic Clearance for the Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Department of...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  11. 76 FR 55398 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Qualitative Feedback on Agency Service Delivery AGENCY: National Institutes of Health, Eunice Kennedy Shriver...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  12. 76 FR 44938 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-27

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery: National Cancer Center (NCI) ACTION: 30-Day notice of... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  13. 76 FR 21800 - Agency Information Collection Activities: Submission for OMB Review; Comment Request; Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Activities: Submission for OMB Review; Comment Request; Generic Clearance for the Collection of Qualitative... Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service...

  14. 76 FR 20967 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: U... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under...

  15. 76 FR 38355 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-30

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... of Qualitative Feedback on Agency Service Delivery AGENCY: Architectural and Transportation Barriers...: ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to the Office of...

  16. 76 FR 22920 - Agency Information Collection Activities: Proposed Collection; Comment Request; DOL Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-25

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; DOL Generic Clearance for the Collection of Qualitative Feedback on Agency... of Qualitative Feedback on Agency Service Delivery'' to the Office of Management and Budget (OMB) for...

  17. Relation Between Pore Size and the Compressibility of a Confined Fluid

    PubMed Central

    Gor, Gennady Y.; Siderius, Daniel W.; Rasmussen, Christopher J.; Krekelberg, William P.; Shen, Vincent K.; Bernstein, Noam

    2015-01-01

    When a fluid is confined to a nanopore, its thermodynamic properties differ from the properties of a bulk fluid, so measuring such properties of the confined fluid can provide information about the pore sizes. Here we report a simple relation between the pore size and isothermal compressibility of argon confined in these pores. Compressibility is calculated from the fluctuations of the number of particles in the grand canonical ensemble using two different simulation techniques: conventional grand-canonical Monte Carlo and grand-canonical ensemble transition-matrix Monte Carlo. Our results provide a theoretical framework for extracting the information on the pore sizes of fluid-saturated samples by measuring the compressibility from ultrasonic experiments. PMID:26590541
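    The compressibility described above follows from the grand-canonical fluctuation relation κ_T = V(⟨N²⟩ − ⟨N⟩²)/(⟨N⟩² k_B T). The sketch below applies it to a synthetic particle-number record standing in for GCMC output; the volume, temperature and fluctuation magnitude are illustrative assumptions, not values from the study.

```python
import numpy as np

def isothermal_compressibility(n_samples, volume, temperature, kB=1.380649e-23):
    """Isothermal compressibility from grand-canonical particle-number
    fluctuations: kappa_T = V * (<N^2> - <N>^2) / (<N>^2 * kB * T)."""
    n = np.asarray(n_samples, dtype=float)
    return volume * n.var() / (n.mean() ** 2 * kB * temperature)

# Toy example: synthetic N record standing in for a GCMC trajectory
rng = np.random.default_rng(1)
n_record = rng.normal(loc=500.0, scale=12.0, size=100_000)
kappa = isothermal_compressibility(n_record, volume=8.0e-27, temperature=87.3)
print(f"{kappa:.3e} 1/Pa")
```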

  18. Aerosol sampling for the August 7th, and 9th, 1985 SAGE II validation experiment

    NASA Technical Reports Server (NTRS)

    Oberbeck, V. R.; Pueschel, R.; Ferry, G.; Livingston, J.; Fong, W.

    1986-01-01

    Comparisons are made between aerosol size distributions measured by instrumented aircraft and by the SAGE II sensor on the Earth Radiation Budget Satellite (ERBS) performing limb scans of the same atmospheric region. Particle radii ranging from 0.0001 to 200 microns were detected, with good agreement obtained between the size distributions measured by impactors and probes at radii over 0.15 micron. The distributions were used to calculate aerosol extinction values, which were compared with values from SAGE II scans.

  19. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
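    The power calculation described above can be sketched as a parametric bootstrap: simulate n-fish samples from a gamma distribution whose mean sits above the management threshold, apply a one-sided test against the threshold, and record the rejection rate. In the sketch below the mean-to-variance function is an arbitrary placeholder, not the empirical relationship fitted to the West Virginia data, and the test is a simple one-sample t-test rather than the study's exact procedure.

```python
import numpy as np
from scipy import stats

def power_above_threshold(true_mean, threshold, n_fish, alpha=0.05,
                          n_sim=5000, var_fn=lambda m: 0.25 * m ** 2, seed=0):
    """Parametric-bootstrap power: probability that a one-sided test detects a
    true mean Se concentration above a management threshold."""
    rng = np.random.default_rng(seed)
    var = var_fn(true_mean)
    shape = true_mean ** 2 / var        # gamma shape from mean and variance
    scale = var / true_mean             # gamma scale
    rejections = 0
    for _ in range(n_sim):
        sample = rng.gamma(shape, scale, size=n_fish)
        t, p_two = stats.ttest_1samp(sample, popmean=threshold)
        if t > 0 and p_two / 2 < alpha:  # one-sided: mean > threshold
            rejections += 1
    return rejections / n_sim

# Example: 8-fish samples, true mean 1 mg/kg above a 4 mg Se/kg threshold
print(power_above_threshold(true_mean=5.0, threshold=4.0, n_fish=8))
```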

  20. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. In designing these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at the village and block levels. Between-cluster variation was evident at each level of clustering. ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
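    The variance inflation being adjusted for is the usual design effect, DEFF = 1 + (m − 1)·ICC, which a simple sample size calculation multiplies into the unclustered n. The sketch below evaluates it for the 16-household village clusters and the range of ICCs reported in this record.

```python
def design_effect(cluster_size, icc):
    """Variance inflation (design effect) for cluster sampling:
    DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Village-level example: 16 households per cluster, ICCs from the reported range
for icc in (0.014, 0.07, 0.45):
    print(f"ICC = {icc}: DEFF = {design_effect(16, icc):.2f}")
```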

  1. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Preliminary experiments were designed to determine the influence of the sediment particle size distribution on the growth of the assayed species. The percentage of silt-sized sediment affected the growth of the selected species under the experimental conditions: the higher the percentage of silt-sized sediment, the lower the growth. However, percentages of silt-sized sediment below 10% did not affect growth. In general, C. closterium seems to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were distinguished as a function of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out in order to determine the specific cause of the differences in toxicity. After statistical analysis, the concentrations of Sn, Zn, Hg, Cu and Cr (among all the physico-chemical parameters analyzed), in order of importance, were the most important factors separating the two groups of samples (more and less toxic samples). Benthic diatoms appear to be sensitive organisms for sediment toxicity tests. Toxicity data from bioassays involving microphytobenthos should be taken into account when environmental risks are calculated.

  2. Dimension- and shape-dependent thermal transport in nano-patterned thin films investigated by scanning thermal microscopy

    NASA Astrophysics Data System (ADS)

    Ge, Yunfei; Zhang, Yuan; Weaver, Jonathan M. R.; Dobson, Phillip S.

    2017-12-01

    Scanning thermal microscopy (SThM) is a technique often used to measure the thermal conductivity of materials at the nanometre scale. The impact of nano-scale feature size and shape on the apparent thermal conductivity, as measured using SThM, has been investigated. To achieve this, our recently developed topography-free samples containing 200 and 400 nm wide gold wires (50 nm thick) with lengths of 400-2500 nm were fabricated, and their thermal resistance was measured and analysed. These data were used in the development and validation of a rigorous but simple heat transfer model that describes a nanoscopic contact to an object of finite shape and size. This model, in combination with a recently proposed thermal resistance network, was then used to calculate the SThM probe signal obtained when measuring these features. The calculated values closely matched the experimental results obtained from the topography-free sample. By using the model to analyse the dimensional dependence of thermal resistance, we demonstrate that feature size and shape have a significant impact on measured thermal properties, which can result in a misinterpretation of material thermal conductivity. In the case of a gold nanowire embedded within a silicon nitride matrix, the apparent thermal conductivity of the wire is depressed by a factor of twenty from the true value. These results clearly demonstrate the importance of knowing both the probe-sample thermal interactions and the feature dimensions and shape when using SThM to quantify material thermal properties. Finally, the new model is used to identify the heat flux sensitivity, as well as the effective contact size, of the conventional SThM system used in this study.

  3. Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review

    PubMed Central

    Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.

    2014-01-01

    Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. Please see later in the article for the Editors' Summary PMID:24824338

  4. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in an amount of about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and the data evaluation process are described and discussed. These now make it possible to analyse a considerable number of samples while assuring the quality of the results. As a means of demonstrating the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be better than the uncertainty introduced by the sampling techniques and sample variability.

  5. A Systematic Review of the Relationship between Familism and Mental Health Outcomes in Latino Population

    PubMed Central

    Valdivieso-Mora, Esmeralda; Peet, Casie L.; Garnier-Villarreal, Mauricio; Salazar-Villanea, Monica; Johnson, David K.

    2016-01-01

    Background: Familismo or familism is a cultural value frequently seen in Hispanic cultures, in which a higher emphasis is placed on the family unit in terms of respect, support, obligation, and reference. Familism has been implicated as a protective factor against mental health problems and may foster the growth and development of children. This study aims to measure the size of the relationship between familism and the mental health outcomes of depression, suicide, substance abuse, internalizing, and externalizing behaviors. Methods: Thirty-nine studies were systematically reviewed to assess the relationship between familism and mental health outcomes. Data from the studies were compiled and organized into five categories: depression, suicide, internalizing symptoms, externalizing symptoms, and substance use. The Cohen's d of each value (dependent variable in comparison to familism) was calculated. Results were weighted based on sample sizes (n) and total effect sizes were then calculated. It was hypothesized that there would be a large effect size in the relationship between familism and depression, suicide, internalizing and externalizing symptoms, and substance use in Hispanics. Results: The meta-analysis showed small effect sizes in the relationship between familism and depression, suicide, and internalizing behaviors, and no significant effects for substance abuse and externalizing behaviors. Discussion: The small effects found in this study may be explained by the presence of moderator variables between familism and mental health outcomes (e.g., communication within the family). In addition, variability in the Latino samples and in the measurements used might explain the small and non-significant effects found. PMID:27826269
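
    As a hedged illustration of the pooling step described above (per-study Cohen's d values weighted by sample size n), the sketch below uses hypothetical d values and sample sizes; formal meta-analyses more often use inverse-variance weights, which are not shown here.

      # Sample-size-weighted pooling of Cohen's d values, as described in the
      # abstract (weights = study n); d values and sample sizes are hypothetical.
      def weighted_d(ds, ns):
          return sum(d * n for d, n in zip(ds, ns)) / sum(ns)

      ds = [0.15, 0.30, 0.10]   # per-study Cohen's d
      ns = [120, 80, 200]       # per-study sample sizes
      print(round(weighted_d(ds, ns), 3))  # 0.155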

  6. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    PubMed

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) if characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the required sample size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal, but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sample size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
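
    The cutoff rule described above, the smallest number of vessels for which the standard error of the mean falls below 15% of the population mean, can be sketched as follows; the mean and standard deviation used here are hypothetical.

      # Smallest n with SD/sqrt(n) <= rel_se * mean, i.e. n >= (SD / (rel_se * mean))^2.
      import math

      def min_vessels(mean, sd, rel_se=0.15):
          return math.ceil((sd / (rel_se * mean)) ** 2)

      # Hypothetical velocity mean and SD (mm/s).
      print(min_vessels(mean=0.50, sd=0.28))  # 14 vessels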

  7. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required when classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1% ± 5.31% (mean ± SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8% ± 2.89%; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
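
    As a generic, hedged illustration of the ingredients mentioned above (this is not the authors' exact noise model), the sketch below computes the Shannon entropy of a hypothetical mRS distribution and the overall misclassification rate implied by a hypothetical inter-rater confusion matrix.

      # Shannon entropy of an ordinal outcome distribution, and the overall
      # misclassification rate implied by a confusion matrix whose rows give the
      # distribution of assigned categories for each true category.
      # Both the mRS distribution and the confusion matrix are hypothetical.
      import numpy as np

      p_mrs = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])  # mRS 0-6
      entropy_bits = -np.sum(p_mrs * np.log2(p_mrs))
      print(f"entropy = {entropy_bits:.2f} bits")

      # 80% of ratings correct, errors split between neighbouring categories.
      C = np.zeros((7, 7))
      for i in range(7):
          C[i, i] = 0.8
          for j in (i - 1, i + 1):
              if 0 <= j < 7:
                  C[i, j] = 0.2 / (2 if 0 < i < 6 else 1)
      error_rate = np.sum(p_mrs * (1 - np.diag(C)))
      print(f"overall misclassification rate = {error_rate:.1%}")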

  8. More Power to OATP1B1: An Evaluation of Sample Size in Pharmacogenetic Studies Using a Rosuvastatin PBPK Model for Intestinal, Hepatic, and Renal Transporter‐Mediated Clearances

    PubMed Central

    Burt, Howard; Abduljalil, Khaled; Neuhoff, Sibylle

    2016-01-01

    Abstract Rosuvastatin is a substrate of choice in clinical studies of organic anion‐transporting polypeptide (OATP)1B1‐ and OATP1B3‐associated drug interactions; thus, understanding the effect of OATP1B1 polymorphisms on the pharmacokinetics of rosuvastatin is crucial. Here, physiologically based pharmacokinetic (PBPK) modeling was coupled with a power calculation algorithm to evaluate the influence of sample size on the ability to detect an effect (80% power) of OATP1B1 phenotype on pharmacokinetics of rosuvastatin. Intestinal, hepatic, and renal transporters were mechanistically incorporated into a rosuvastatin PBPK model using permeability‐limited models for intestine, liver, and kidney, respectively, nested within a full PBPK model. Simulated plasma rosuvastatin concentrations in healthy volunteers were in agreement with previously reported clinical data. Power calculations were used to determine the influence of sample size on study power while accounting for OATP1B1 haplotype frequency and abundance in addition to its correlation with OATP1B3 abundance. It was determined that 10 poor‐transporter and 45 intermediate‐transporter individuals are required to achieve 80% power to discriminate the AUC0‐48h of rosuvastatin from that of the extensive‐transporter phenotype. This number was reduced to 7 poor‐transporter and 40 intermediate‐transporter individuals when the reported correlation between OATP1B1 and 1B3 abundance was taken into account. The current study represents the first example in which PBPK modeling in conjunction with power analysis has been used to investigate sample size in clinical studies of OATP1B1 polymorphisms. This approach highlights the influence of interindividual variability and correlation of transporter abundance on study power and should allow more informed decision making in pharmacogenomic study design. PMID:27385171
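
    A hedged sketch of this kind of phenotype power calculation is given below: it estimates, by simulation, the power of a two-sample t-test on log AUC to distinguish a poor-transporter group from an extensive-transporter reference group. The geometric mean ratio, variability and group sizes are hypothetical, and the PBPK model itself is not reproduced.

      # Simulation-based power for comparing log AUC between two phenotype groups.
      # All inputs (GMR, CV, group sizes) are hypothetical.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def power(n_poor, n_ext, gmr=1.7, cv=0.4, alpha=0.05, n_sim=5000):
          sd_log = np.sqrt(np.log(1 + cv**2))   # lognormal SD on the log scale
          hits = 0
          for _ in range(n_sim):
              ext = rng.normal(0.0, sd_log, n_ext)            # log AUC, reference
              poor = rng.normal(np.log(gmr), sd_log, n_poor)  # log AUC, shifted by GMR
              if stats.ttest_ind(poor, ext).pvalue < alpha:
                  hits += 1
          return hits / n_sim

      print(f"power with n=10 vs n=20: {power(10, 20):.2f}")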

  9. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.

  10. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    PubMed

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the treatment effect estimator under equal cluster sizes to that under unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas for RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size that accounts for the efficiency loss. We also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
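
    The adjustment for efficiency loss can be illustrated with a widely used approximation (not the GEE-derived relative efficiency from this paper): with unequal cluster sizes of mean m_bar and coefficient of variation cv, the design effect is often approximated as 1 + (m_bar*(1 + cv^2) - 1)*rho, and the sample size from an individually randomized design is inflated by this factor. The inputs below are hypothetical.

      # Approximate design effect for unequal cluster sizes and the corresponding
      # inflated sample size. This is a common approximation, not the paper's
      # GEE-based relative efficiency formula.
      import math

      def adjusted_n(n_individual, m_bar, cv, rho):
          deff = 1 + (m_bar * (1 + cv**2) - 1) * rho
          return math.ceil(n_individual * deff)

      # Hypothetical: 300 participants under individual randomization,
      # mean cluster size 20, cluster-size CV 0.6, ICC 0.05.
      print(adjusted_n(300, 20, 0.6, 0.05))  # roughly 2.3-fold inflation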

  11. Optical measurements for interfacial conduction and breakdown

    NASA Astrophysics Data System (ADS)

    Hebner, R. E., Jr.; Kelley, E. F.; Hagler, J. N.

    1983-01-01

    Measurements and calculations contributing to the understanding of space and surface charges in practical insulation systems are given. Calculations are presented which indicate the size of charge densities necessary to appreciably modify the electric field from what would be calculated from geometrical considerations alone. Experimental data are also presented that locate the breakdown in an electrode system with a paper sample bridging the gap between the electrodes. It is found that, with careful handling, the breakdown does not necessarily occur along the interface even if heavily contaminated oil is used. The effects of space charge in the bulk liquid are electro-optically examined in nitrobenzene and transformer oil. Several levels of contamination in transformer oil are investigated. Whereas much space charge can be observed in nitrobenzene, very little space charge, if any, can be observed in the transformer oil samples even at temperatures near 100 degrees C.

  12. Effect of Reiki therapy on pain and anxiety in adults: an in-depth literature review of randomized trials with effect size calculations.

    PubMed

    Thrane, Susan; Cohen, Susan M

    2014-12-01

    The objective of this study was to calculate the effect of Reiki therapy for pain and anxiety in randomized clinical trials. A systematic search of PubMed, ProQuest, Cochrane, PsychInfo, CINAHL, Web of Science, Global Health, and Medline databases was conducted using the search terms pain, anxiety, and Reiki. The Center for Reiki Research also was examined for articles. Studies that used randomization and a control or usual care group, used Reiki therapy in one arm of the study, were published in 2000 or later in peer-reviewed journals in English, and measured pain or anxiety were included. After removing duplicates, 49 articles were examined and 12 articles received full review. Seven studies met the inclusion criteria: four articles studied cancer patients, one examined post-surgical patients, and two analyzed community dwelling older adults. Effect sizes were calculated for all studies using Cohen's d statistic. Effect sizes for within group differences ranged from d = 0.24 for decrease in anxiety in women undergoing breast biopsy to d = 2.08 for decreased pain in community dwelling adults. The between group differences ranged from d = 0.32 for decrease of pain in a Reiki versus rest intervention for cancer patients to d = 4.5 for decrease in pain in community dwelling adults. Although the number of studies is limited, based on the size of the Cohen's d statistics calculated in this review, there is evidence to suggest that Reiki therapy may be effective for pain and anxiety. Continued research using Reiki therapy with larger sample sizes, consistently randomized groups, and standardized treatment protocols is recommended. Copyright © 2014 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  13. Effect of Reiki Therapy on Pain and Anxiety in Adults: An In-Depth Literature Review of Randomized Trials with Effect Size Calculations

    PubMed Central

    Thrane, Susan; Cohen, Susan M.

    2013-01-01

    Objective To calculate the effect of Reiki therapy for pain and anxiety in randomized clinical trials. Data Sources A systematic search of PubMed, ProQuest, Cochrane, PsychInfo, CINAHL, Web of Science, Global Health, and Medline databases was conducted using the search terms pain, anxiety, and Reiki. The Center for Reiki Research was also examined for articles. Study Selection Studies that used randomization and a control or usual care group, used Reiki therapy in one arm of the study, were published in 2000 or later in peer-reviewed journals in English, and measured pain or anxiety were included. Results After removing duplicates, 49 articles were examined and 12 articles received full review. Seven studies met the inclusion criteria: four articles studied cancer patients; one examined post-surgical patients; and two analyzed community dwelling older adults. Effect sizes were calculated for all studies using Cohen’s d statistic. Effect sizes for within group differences ranged from d=0.24 for decrease in anxiety in women undergoing breast biopsy to d=2.08 for decreased pain in community dwelling adults. The between group differences ranged from d=0.32 for decrease of pain in a Reiki versus rest intervention for cancer patients to d=4.5 for decrease in pain in community dwelling adults. Conclusions While the number of studies is limited, based on the size of the Cohen’s d statistics calculated in this review, there is evidence to suggest that Reiki therapy may be effective for pain and anxiety. Continued research using Reiki therapy with larger sample sizes, consistently randomized groups, and standardized treatment protocols is recommended. PMID:24582620

  14. Measurement of J-integral in CAD/CAM dental ceramics and composite resin by digital image correlation.

    PubMed

    Jiang, Yanxia; Akkus, Anna; Roperto, Renato; Akkus, Ozan; Li, Bo; Lang, Lisa; Teich, Sorin

    2016-09-01

    Ceramic and composite resin blocks for CAD/CAM machining of dental restorations are becoming more common. The sample sizes afforded by these blocks are smaller than ideal for stress intensity factor (SIF) based tests. The J-integral measurement calls for full-field strain measurement, making it challenging to conduct. Accordingly, the J-integral values of dental restoration materials used in CAD/CAM restorations have not been reported to date. Digital image correlation (DIC) provides full-field strain maps, making it possible to calculate the J-integral value. The aim of this study was to measure the J-integral value for CAD/CAM restorative materials. Four types of materials (sintered IPS E-MAX CAD, non-sintered IPS E-MAX CAD, Vita Mark II and Paradigm MZ100) were used to prepare beam samples for three-point bending tests. J-integrals were calculated for different integral path sizes and locations with respect to the crack tip. The J-integral at path 1 was 1.26±0.31×10^-4 MPa·m for MZ100, 0.59±0.28×10^-4 MPa·m for sintered E-MAX, 0.19±0.07×10^-4 MPa·m for VM II, and 0.21±0.05×10^-4 MPa·m for non-sintered E-MAX. There were no significant differences between different integral path sizes, except for the non-sintered E-MAX group. J-integral paths of non-sintered E-MAX located within 42% of the height of the sample provided consistent values, whereas paths outside this range resulted in lower J-integral values. Moreover, no significant difference was found among different integral path locations. The critical SIF was calculated from the J-integral (KJ) along with geometry-derived SIF values (KI). KI values were comparable with KJ and with geometry-based SIF values obtained from the literature. Therefore, the DIC-derived J-integral is a reliable way to assess the fracture toughness of small-sized specimens for dental CAD/CAM restorative materials; however, caution should be applied to the selection of the J-integral path. Copyright © 2016 Elsevier Ltd. All rights reserved.
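
    For reference, the path-independent contour integral evaluated from the DIC strain fields is the standard (Rice) J-integral; its general definition is reproduced below. This is the textbook form, not the authors' specific discretization of the DIC data.

      % Rice contour J-integral for a crack lying along the x-axis: W is the strain
      % energy density, T_i the traction vector on the contour Gamma, u_i the
      % displacement field, and s the arc length along the contour.
      J = \int_{\Gamma} \left( W \, \mathrm{d}y - T_{i} \, \frac{\partial u_{i}}{\partial x} \, \mathrm{d}s \right)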

  15. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
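
    The normalization step described above can be sketched as follows: log-transform the measured field areas, summarize on the log scale, and back-transform the mean to a geometric mean. The field areas used here are hypothetical.

      # Log-transform field areas and summarize; areas are hypothetical (hectares).
      import numpy as np

      areas_ha = np.array([2.1, 3.5, 1.2, 15.0, 4.4, 0.9, 7.8, 2.7])
      log_area = np.log(areas_ha)
      mu, sigma = log_area.mean(), log_area.std(ddof=1)
      print(f"geometric mean = {np.exp(mu):.2f} ha, log-scale SD = {sigma:.2f}")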

  16. Cu-doped Cd1-xZnxS alloy: synthesis and structural investigations

    NASA Astrophysics Data System (ADS)

    Yadav, Indu; Ahlawat, Dharamvir Singh; Ahlawat, Rachna

    2016-03-01

    Copper-doped Cd1-xZnxS (x ≤ 1) quantum dots have been synthesized using a chemical co-precipitation method. Structural investigation of the synthesized nanomaterials has been carried out by the powder XRD method. The XRD results confirmed that the as-prepared Cu-doped Cd1-xZnxS quantum dots have a hexagonal structure. The average nanocrystallite size was estimated to be in the range 2-12 nm using the Debye-Scherrer formula. The lattice constants, lattice plane, d-spacing, unit cell volume, Lorentz factor and dislocation density were also calculated from the XRD data. A change in particle size was observed with the change in Zn concentration. Furthermore, FTIR spectra of the prepared samples were recorded to identify COO- and O-H functional groups. The TEM study confirmed the same nanoparticle size range. Increased agglomeration was observed with increasing Zn concentration in the prepared samples.
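
    The Debye-Scherrer estimate mentioned above follows D = K*lambda/(beta*cos(theta)); the sketch below uses hypothetical peak parameters and assumes Cu K-alpha radiation with a shape factor of 0.9.

      # Debye-Scherrer crystallite size from an XRD peak; inputs are hypothetical.
      import math

      def scherrer(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
          beta = math.radians(fwhm_deg)            # peak FWHM in radians
          theta = math.radians(two_theta_deg / 2)  # Bragg angle
          return K * wavelength_nm / (beta * math.cos(theta))

      print(f"D = {scherrer(fwhm_deg=2.0, two_theta_deg=28.5):.1f} nm")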

  17. Molecular-Size-Separated Brown Carbon Absorption for Biomass-Burning Aerosol at Multiple Field Sites.

    PubMed

    Di Lorenzo, Robert A; Washenfelder, Rebecca A; Attwood, Alexis R; Guo, Hongyu; Xu, Lu; Ng, Nga L; Weber, Rodney J; Baumann, Karsten; Edgerton, Eric; Young, Cora J

    2017-03-21

    Biomass burning is a known source of brown carbon aerosol in the atmosphere. We collected filter samples of biomass-burning emissions at three locations in Canada and the United States with transport times of 10 h to >3 days. We analyzed the samples with size-exclusion chromatography coupled to molecular absorbance spectroscopy to determine absorbance as a function of molecular size. The majority of absorption was due to molecules >500 Da, and these contributed an increasing fraction of absorption as the biomass-burning aerosol aged. This suggests that the smallest molecular weight fraction is more susceptible to processes that lead to reduced light absorption, while larger-molecular-weight species may represent recalcitrant brown carbon. We calculate that these large-molecular-weight species are composed of more than 20 carbons with as few as two oxygens and would be classified as extremely low volatility organic compounds (ELVOCs).

  18. Sedimentology and geochemistry of mud volcanoes in the Anaximander Mountain Region from the Eastern Mediterranean Sea.

    PubMed

    Talas, Ezgi; Duman, Muhammet; Küçüksezgin, Filiz; Brennan, Michael L; Raineault, Nicole A

    2015-06-15

    Investigations were carried out on surface sediments collected from the Anaximander mud volcanoes in the Eastern Mediterranean Sea to determine their sedimentary and geochemical properties. The sediment grain size distribution and geochemical contents were determined by grain size analysis, organic carbon and carbonate contents, and element analysis. Element contents were compared to background levels in the Earth's crust. The factors that affect element distribution in sediments were calculated from the nine push-core samples taken from the surface of the mud volcanoes by the E/V Nautilus. The grain size of the samples varies from sand to sandy silt. Enrichment and contamination factor analysis showed that these analyses can also be used to evaluate deep-sea environmental and source parameters. It is concluded that the biological and cold seep effects are the main drivers of surface sediment characteristics from the Anaximander mud volcanoes. Copyright © 2015 Elsevier Ltd. All rights reserved.
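
    The enrichment and contamination factors mentioned above follow standard definitions; the sketch below assumes Al as the normalizing element and uses illustrative concentrations and crustal background values, not the study's data.

      # Standard enrichment factor (EF) and contamination factor (CF); all values
      # below are illustrative. Al is assumed as the normalizing element.
      def enrichment_factor(c_x, c_al, bg_x, bg_al):
          return (c_x / c_al) / (bg_x / bg_al)

      def contamination_factor(c_x, bg_x):
          return c_x / bg_x

      # Hypothetical sediment concentrations vs. average-crust background (mg/kg).
      print(round(enrichment_factor(c_x=45.0, c_al=70000.0, bg_x=20.0, bg_al=80000.0), 2))  # 2.57
      print(round(contamination_factor(c_x=45.0, bg_x=20.0), 2))                            # 2.25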

  19. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Anharmonic, dimensionality and size effects in phonon transport

    NASA Astrophysics Data System (ADS)

    Thomas, Iorwerth O.; Srivastava, G. P.

    2017-12-01

    We have developed and employed a numerically efficient semi-ab initio theory, based on density-functional and relaxation-time schemes, to examine anharmonic, dimensionality and size effects in phonon transport in three- and two-dimensional solids of different crystal symmetries. Our method uses third- and fourth-order terms in the crystal Hamiltonian expressed in terms of a temperature-dependent Grüneisen constant. All inputs to the numerical calculations are generated from phonon calculations based on density-functional perturbation theory. It is found that four-phonon processes make an important and measurable contribution to lattice thermal resistivity above the Debye temperature. From our numerical results for bulk Si, bulk Ge, bulk MoS2 and monolayer MoS2 we find that the sample length dependence of phonon conductivity is significantly stronger in low-dimensional solids.

  1. A Note on Monotonicity Assumptions for Exact Unconditional Tests in Binary Matched-pairs Designs

    PubMed Central

    Li, Xiaochun; Liu, Mengling; Goldberg, Judith D.

    2011-01-01

    Summary Exact unconditional tests have been widely applied to test the difference between two probabilities for 2×2 matched-pairs binary data with small sample size. In this context, Lloyd (2008, Biometrics 64, 716–723) proposed an E + M p-value, that showed better performance than the existing M p-value and C p-value. However, the analytical calculation of the E + M p-value requires that the Barnard convexity condition be satisfied; this can be challenging to prove theoretically. In this paper, by a simple reformulation, we show that a weaker condition, conditional monotonicity, is sufficient to calculate all three p-values (M, C and E + M) and their corresponding exact sizes. Moreover, this conditional monotonicity condition is applicable to non-inferiority tests. PMID:21466507

  2. High transport efficiency of nanoparticles through a total-consumption sample introduction system and its beneficial application for particle size evaluation in single-particle ICP-MS.

    PubMed

    Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki

    2017-02-01

    In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
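
    The size calculation described above ultimately converts a per-particle mass (obtained from the dissolved-standard calibration of the signal) into a spherical-equivalent diameter, d = (6m/(pi*rho))^(1/3). The sketch below uses a hypothetical per-particle mass for platinum.

      # Spherical mass-to-diameter conversion used in spICP-MS sizing; the mass is
      # hypothetical and would normally come from the solution-standard calibration.
      import math

      def diameter_nm(mass_fg, density_g_cm3):
          mass_g = mass_fg * 1e-15
          volume_cm3 = mass_g / density_g_cm3
          return (6 * volume_cm3 / math.pi) ** (1.0 / 3.0) * 1e7  # cm -> nm

      print(f"{diameter_nm(mass_fg=3.85, density_g_cm3=21.45):.0f} nm")  # ~70 nm for Pt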

  3. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM, a flexible regression model that estimates the functional form as part of the model fitting process, to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
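
    As a hedged sketch of the regression-based estimator referred to above, the code below regresses incremental net monetary benefit (INB) from simulated PSA draws on a parameter of interest and applies the usual two-strategy EVPPI expression; a simple polynomial fit stands in for the GAM, and the inputs are simulated rather than taken from the CEPAC model.

      # Regression-based EVPPI in the spirit of Strong and colleagues:
      #   EVPPI ~= mean(max(0, fitted INB)) - max(0, mean(INB)).
      # A cubic polynomial stands in for the GAM; all inputs are simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      theta = rng.normal(0.7, 0.1, 10_000)                            # parameter of interest
      inb = 5_000 * (theta - 0.72) + rng.normal(0, 800, theta.size)   # noisy INB from a mock PSA

      coeffs = np.polyfit(theta, inb, deg=3)    # smoother (stand-in for a GAM)
      fitted = np.polyval(coeffs, theta)

      evppi = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
      print(f"EVPPI ~= {evppi:.0f} (same monetary units as the INB)")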

  4. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  5. 10 CFR 431.325 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...

  6. Relationship between Spiritual Intelligence and Job Satisfaction among Female High School Teachers

    ERIC Educational Resources Information Center

    Zamani, Mahmmood Reza; Karimi, Fariba

    2015-01-01

    The present paper aims to study the relationship between spiritual intelligence and job satisfaction among female high school teachers in Isfahan. It was a descriptive-correlation research. Population included all female high school teachers of Isfahan (2015) in academic year 2013-2014. Sample size calculated was 320 teachers by Krejcie and…

  7. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  8. Cross-Cultural Validation of the Counselor Burnout Inventory in Hong Kong

    ERIC Educational Resources Information Center

    Shin, Hyojung; Yuen, Mantak; Lee, Jayoung; Lee, Sang Min

    2013-01-01

    This study investigated the cross-cultural validation of the Chinese translation of the Counselor Burnout Inventory (CBI) with a sample of school counselors in Hong Kong. Specifically, this study examined the CBI's factor structure using confirmatory factor analysis and calculated the effect size, to compare burnout scores among the counselors of…

  9. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.

  10. Quality of Reporting Nutritional Randomized Controlled Trials in Patients With Cystic Fibrosis.

    PubMed

    Daitch, Vered; Babich, Tanya; Singer, Pierre; Leibovici, Leonard

    2016-08-01

    Randomized controlled trials (RCTs) have a major role in the development of evidence-based guidelines. The aim of the present study was to critically appraise the RCTs that addressed nutritional interventions in patients with cystic fibrosis (CF). Embase, PubMed, and the Cochrane Library were systematically searched until July 2015. Methodology and reporting of nutritional RCTs were evaluated by the Consolidated Standards of Reporting Trials (CONSORT) checklist and additional dimensions relevant to patients with CF. Fifty-one RCTs were included. Full details on methods were provided in a minority of studies. The mean duration of intervention was <6 months. 56.9% of the RCTs did not define a primary outcome; 70.6% of studies did not provide details on sample size calculation; and only 31.4% reported on or separated between important subgroups. The examined RCTs were characterized by a weak methodology, a small number of patients with no sample size calculations, and a relatively short intervention, and often did not examine the outcomes that are important to the patient. Improvement over the years has been minor.

  11. Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study

    PubMed Central

    Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling

    2014-01-01

    Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707

  12. Robustness of methods for blinded sample size re-estimation with overdispersed count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-09-20

    Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation, internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM-algorithm. In this paper we investigate the EM-algorithm based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
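
    For the planning step that precedes any re-estimation, a commonly used normal-approximation formula for comparing two negative binomial event rates (unit follow-up time, equal allocation, common dispersion k) is sketched below; this is a generic approximation with hypothetical inputs, not the blinded re-estimation procedure studied in the paper.

      # Approximate per-group sample size for a negative binomial rate comparison:
      #   n = (z_{1-alpha/2} + z_{1-beta})^2 * (1/rate0 + 1/rate1 + 2k) / ln(rate1/rate0)^2
      # Inputs are hypothetical.
      import math
      from scipy.stats import norm

      def n_per_group(rate0, rate1, k, alpha=0.05, power=0.80):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          var = 1 / rate0 + 1 / rate1 + 2 * k
          return math.ceil(z**2 * var / math.log(rate1 / rate0) ** 2)

      print(n_per_group(rate0=1.2, rate1=0.9, k=0.8))  # events per patient-year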

  13. Combustion synthesis and structural analysis of nanocrystalline nickel ferrite at low temperature regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanmugavel, T.; Raj, S. Gokul; Rajarajan, G.

    2015-06-24

    Combustion synthesis of single-phase nickel ferrite was successfully achieved in the low temperature regime. The obtained powders were calcined to increase the crystallinity, and the changes in their characteristics due to calcination were investigated in detail. Citric acid was used as a chelating agent for the synthesis of the nickel ferrite. Pure single-phase nickel ferrites were obtained at this low temperature. The average crystallite sizes were determined from powder XRD measurements. Surface morphology was investigated by transmission electron microscopy (TEM). The particle size calculated from XRD was compared with the TEM results. The magnetic behaviour of the samples was analyzed using a vibrating sample magnetometer (VSM). Saturation magnetization, coercivity and retentivity were measured and the results are discussed in detail.

  14. ELIPGRID-PC: A PC program for calculating hot spot probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1994-10-01

    ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability-of-hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program.
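
    A much-simplified version of the calculation ELIPGRID automates is sketched below for a circular hot spot sampled by a square grid placed at a random offset; ELIPGRID itself handles elliptical, tilted hot spots, which this sketch does not.

      # Probability that at least one node of a square grid (spacing g, random
      # offset) falls inside a circular hot spot of radius r. The simple area-ratio
      # formula is exact only when the hot-spot diameter is smaller than the spacing.
      import math

      def hit_probability(radius, grid_spacing):
          if 2 * radius >= grid_spacing:
              raise ValueError("formula valid only for 2*radius < grid_spacing")
          return math.pi * radius**2 / grid_spacing**2

      print(f"{hit_probability(radius=5.0, grid_spacing=25.0):.1%}")  # ~12.6%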

  15. Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise

    NASA Astrophysics Data System (ADS)

    Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.

    2018-06-01

    We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend towards equipartition between gravity and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities in the sample do not follow a Larson-like linewidth-size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion (σv,3D) versus size (R) diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial values. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by the energy of an isolated homogeneous sphere. However, this excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.
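
    For reference, the homogeneous-sphere gravitational energy that the abstract says is usually assumed, and the corresponding virial parameter, take the standard textbook forms below (sigma_v here is the one-dimensional velocity dispersion); these are not the corrected calculation from the actual mass distribution performed by the authors.

      % Gravitational energy of a uniform sphere of mass M and radius R, kinetic
      % energy for a one-dimensional velocity dispersion sigma_v, and the virial
      % parameter comparing the two.
      E_{\mathrm{grav}} = -\frac{3}{5}\,\frac{G M^{2}}{R},
      \qquad
      E_{\mathrm{kin}} = \frac{3}{2}\,M \sigma_{v}^{2},
      \qquad
      \alpha_{\mathrm{vir}} = \frac{2 E_{\mathrm{kin}}}{\lvert E_{\mathrm{grav}} \rvert} = \frac{5\,\sigma_{v}^{2} R}{G M}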

  16. Erosion of an ancient mountain range, the Great Smoky Mountains, North Carolina and Tennessee

    USGS Publications Warehouse

    Matmon, A.; Bierman, P.R.; Larsen, J.; Southworth, S.; Pavich, M.; Finkel, R.; Caffee, M.

    2003-01-01

    Analysis of 10Be and 26Al in bedrock (n=10), colluvium (n=5 including grain size splits), and alluvial sediments (n=59 including grain size splits), coupled with field observations and GIS analysis, suggests that erosion rates in the Great Smoky Mountains are controlled by subsurface bedrock erosion and diffusive slope processes. The results indicate rapid alluvial transport, minimal alluvial storage, and suggest that most of the cosmogenic nuclide inventory in sediments is accumulated while they are eroding from bedrock and traveling down hill slopes. Spatially homogeneous erosion rates of 25-30 mm Ky-1 are calculated throughout the Great Smoky Mountains using measured concentrations of cosmogenic 10Be and 26Al in quartz separated from alluvial sediment. 10Be and 26Al concentrations in sediments collected from headwater tributaries that have no upstream samples (n=18) are consistent with an average erosion rate of 28 ± 8 mm Ky-1, similar to that of the outlet rivers (n=16, 24 ± 6 mm Ky-1), which carry most of the sediment out of the mountain range. Grain-size-specific analysis of 6 alluvial sediment samples shows higher nuclide concentrations in smaller grain sizes than in larger ones. The difference in concentrations arises from the large elevation distribution of the source of the smaller grains compared with the narrow and relatively low source elevation of the large grains. Large sandstone clasts disaggregate into sand-size grains rapidly during weathering and downslope transport; thus, only clasts from the lower parts of slopes reach the streams. 26Al/10Be ratios do not suggest significant burial periods for our samples. However, alluvial samples have lower 26Al/10Be ratios than bedrock and colluvial samples, a trend consistent with a longer integrated cosmic ray exposure history that includes periods of burial during down-slope transport. The results confirm some of the basic ideas embedded in Davis' geographic cycle model, such as the reduction of relief through slope processes, and of Hack's dynamic equilibrium model, such as the similarity of erosion rates across different lithologies. Comparing cosmogenic nuclide data with other measured and calculated erosion rates for the Appalachians, we conclude that rates of erosion, integrated over varying time periods from decades to a hundred million years, are similar, the result of equilibrium between erosion and isostatic uplift in the southern Appalachian Mountains.

  17. Nonantibiotic prophylaxis for recurrent urinary tract infections: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Beerepoot, M A J; Geerlings, S E; van Haarst, E P; van Charante, N Mensing; ter Riet, G

    2013-12-01

    Increasing antimicrobial resistance has stimulated interest in nonantibiotic prophylaxis of recurrent urinary tract infections. We assessed the effectiveness, tolerability and safety of nonantibiotic prophylaxis in adults with recurrent urinary tract infections. MEDLINE®, EMBASE™, the Cochrane Library and reference lists of relevant reviews were searched to April 2013 for relevant English language citations. Two reviewers selected randomized controlled trials that met the predefined criteria for population, interventions and outcomes. The difference in the proportions of patients with at least 1 urinary tract infection was calculated for individual studies, and pooled risk ratios were calculated using random and fixed effects models. Adverse event rates were also extracted. The Jadad score was used to assess risk of bias (0 to 2-high risk and 3 to 5-low risk). We identified 5,413 records and included 17 studies with data for 2,165 patients. The oral immunostimulant OM-89 decreased the rate of urinary tract infection recurrence (4 trials, sample size 891, median Jadad score 3, RR 0.61, 95% CI 0.48-0.78) and had a good safety profile. The vaginal vaccine Urovac® slightly reduced urinary tract infection recurrence (3 trials, sample size 220, Jadad score 3, RR 0.81, 95% CI 0.68-0.96) and primary immunization followed by booster immunization increased the time to reinfection. Vaginal estrogens showed a trend toward preventing urinary tract infection recurrence (2 trials, sample size 201, Jadad score 2.5, RR 0.42, 95% CI 0.16-1.10) but vaginal irritation occurred in 6% to 20% of women. Cranberries decreased urinary tract infection recurrence (2 trials, sample size 250, Jadad score 4, RR 0.53, 95% CI 0.33-0.83) as did acupuncture (2 open label trials, sample size 165, Jadad score 2, RR 0.48, 95% CI 0.29-0.79). Oral estrogens and lactobacilli prophylaxis did not decrease the rate of urinary tract infection recurrence. The evidence of the effectiveness of the oral immunostimulant OM-89 is promising. Although sometimes statistically significant, pooled findings for the other interventions should be considered tentative until corroborated by more research. Large head-to-head trials should be performed to optimally inform clinical decision making. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
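
    The pooled risk ratios reported above can be illustrated with a minimal fixed-effect, inverse-variance pooling sketch (the review also used random-effects models, which are not shown); the 2x2 counts below are hypothetical, not the review's data.

      # Fixed-effect inverse-variance pooling of log risk ratios; trial counts are
      # hypothetical. Each trial is (events_trt, n_trt, events_ctl, n_ctl).
      import math

      def pooled_rr(trials):
          num = den = 0.0
          for a, n1, c, n2 in trials:
              log_rr = math.log((a / n1) / (c / n2))
              var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log risk ratio
              w = 1 / var
              num += w * log_rr
              den += w
          pooled, se = num / den, math.sqrt(1 / den)
          return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

      rr, ci = pooled_rr([(30, 100, 45, 100), (22, 80, 35, 82)])
      print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")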

  18. Improved magnetic and electrical properties of Cu doped Fe-Ni invar alloys synthesized by chemical reduction technique

    NASA Astrophysics Data System (ADS)

    Ahmad, Sajjad; Ziya, Amer Bashir; Ashiq, Muhammad Naeem; Ibrahim, Ather; Atiq, Shabbar; Ahmad, Naseeb; Shakeel, Muhammad; Khan, Muhammad Azhar

    2016-12-01

    Fe-Ni-Cu invar alloys of various compositions (Fe65Ni35-xCux, x=0, 0.2, 0.6, 1, 1.4 and 1.8) were synthesized via a chemical reduction route. These alloys were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM) techniques. The XRD analysis revealed the formation of a face-centered cubic (fcc) structure. The lattice parameter and crystallite size of the investigated alloys were calculated, and the line broadening indicated nanometre-scale crystallite sizes in the alloy powder. The particle size estimated from SEM decreases with the incorporation of Cu and was found to be in the range of 24-40 nm. The addition of Cu to these alloys appreciably enhances the saturation magnetization, which increases from 99 to 123 emu/g. The electrical conductivity is also improved by Cu addition. The thermal conductivity was calculated using the Wiedemann-Franz law.
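
    The Wiedemann-Franz estimate mentioned in the last sentence is kappa = L * sigma * T; the sketch below uses the Sommerfeld value of the Lorenz number and a hypothetical electrical conductivity.

      # Electronic thermal conductivity from the Wiedemann-Franz law; the electrical
      # conductivity value is hypothetical.
      L = 2.44e-8      # W Ohm K^-2 (Lorenz number)
      sigma = 1.5e6    # S/m, hypothetical electrical conductivity
      T = 300.0        # K
      kappa = L * sigma * T
      print(f"kappa ~= {kappa:.1f} W m^-1 K^-1")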

  19. A small-plane heat source method for measuring the thermal conductivities of anisotropic materials

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yue, Kai; Wang, Jun; Zhang, Xinxin

    2017-07-01

    A new small-plane heat source method was proposed in this study to simultaneously measure the in-plane and cross-plane thermal conductivities of anisotropic insulating materials. In this method the size of the heat source element is smaller than the sample size and the boundary condition is thermal insulation due to no heat flux at the edge of the sample during the experiment. A three-dimensional model in a rectangular coordinate system was established to exactly describe the heat transfer process of the measurement system. Using the Laplace transform, variable separation, and Laplace inverse transform methods, the analytical solution of the temperature rise of the sample was derived. The temperature rises calculated by the analytical solution agree well with the results of numerical calculation. The result of the sensitivity analysis shows that the sensitivity coefficients of the estimated thermal conductivities are high and uncorrelated to each other. At room temperature and in a high-temperature environment, experimental measurements of anisotropic silica aerogel were carried out using the traditional one-dimensional plane heat source method and the proposed method, respectively. The results demonstrate that the measurement method developed in this study is effective and feasible for simultaneously obtaining the in-plane and cross-plane thermal conductivities of the anisotropic materials.

  20. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreements for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, but can also be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
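
    A minimal sketch contrasting the two schemes compared above, simple random sampling versus every-kth systematic sampling from an ordered membership list, is shown below; the membership list is simulated rather than drawn from the actual database.

      # Simple random sampling (SRS) vs. systematic sampling (SS) from an ordered
      # list; the membership list here is simulated.
      import random

      members = [f"member_{i:04d}" for i in range(4000)]   # stand-in for the database
      n = 200

      srs = random.sample(members, n)       # simple random sample

      k = len(members) // n                 # sampling interval
      start = random.randrange(k)           # random start within the first interval
      ss = members[start::k][:n]            # every k-th member thereafter

      print(len(srs), len(ss))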

  1. Hepatitis C bio-behavioural surveys in people who inject drugs-a systematic review of sensitivity to the theoretical assumptions of respondent driven sampling.

    PubMed

    Buchanan, Ryan; Khakoo, Salim I; Coad, Jonathan; Grellier, Leonie; Parkes, Julie

    2017-07-11

    New, more effective and better-tolerated therapies for hepatitis C (HCV) have made the elimination of HCV a feasible objective. However, for this to be achieved, it is necessary to have a detailed understanding of HCV epidemiology in people who inject drugs (PWID). Respondent-driven sampling (RDS) can provide prevalence estimates in hidden populations such as PWID. The aims of this systematic review are to identify published studies that use RDS in PWID to measure the prevalence of HCV, and compare each study against the STROBE-RDS checklist to assess their sensitivity to the theoretical assumptions underlying RDS. Searches were undertaken in accordance with PRISMA systematic review guidelines. Included studies were English language publications in peer-reviewed journals, which reported the use of RDS to recruit PWID to an HCV bio-behavioural survey. Data was extracted under three headings: (1) survey overview, (2) survey outcomes, and (3) reporting against selected STROBE-RDS criteria. Thirty-one studies met the inclusion criteria. They varied in scale (range 1-15 survey sites) and the sample sizes achieved (range 81-1000 per survey site) but were consistent in describing the use of standard RDS methods including: seeds, coupons and recruitment incentives. Twenty-seven studies (87%) either calculated or reported the intention to calculate population prevalence estimates for HCV and two used RDS data to calculate the total population size of PWID. Detailed operational and analytical procedures and reporting against selected criteria from the STROBE-RDS checklist varied between studies. There were widespread indications that sampling did not meet the assumptions underlying RDS, which led to two studies being unable to report an estimated HCV population prevalence in at least one survey location. RDS can be used to estimate a population prevalence of HCV in PWID and estimate the PWID population size. Accordingly, as a single instrument, it is a useful tool for guiding HCV elimination. However, future studies should report the operational conduct of each survey in accordance with the STROBE-RDS checklist to indicate sensitivity to the theoretical assumptions underlying the method. PROSPERO CRD42015019245.

  2. Thermal Infrared Spectra of a Suite of Forsterite Samples and Ab-initio Modelling of their Spectra

    NASA Astrophysics Data System (ADS)

    Maturilli, A.; Stangarone, C.; Helbert, J.; Tribaudino, M.; Prencipe, M.

    2017-12-01

    Forsterite is the dominant component of olivine, a major constituent of ultramafic rocks as well as of planetary bodies. The MESSENGER X-ray spectrometer has shown that Mg-rich silicate minerals, such as enstatite and forsterite, dominate Mercury's surface (Weider et al. 2012). A careful and detailed understanding of the forsterite spectral features and their dependence on the environmental conditions on Mercury is needed to interpret the remote sensing data from previous and forthcoming missions. We combine an experimental and a computational approach to reproduce and describe the spectral features of forsterite. TIR emissivity measurements are performed by the Planetary Spectroscopy Laboratory (PSL) of DLR. PSL offers the unique capability to measure the emissivity of samples at temperatures up to 1000 K under vacuum conditions. TIR emissivity and reflectance measurements are performed on 11 olivine samples of different composition within the forsterite-fayalite series. When available, each sample has been measured in 2 different grain sizes (<25 µm and 125-250 µm ranges). Emissivity measurements are taken at temperatures from 300 K to 900 K in 100 K steps over the 1-100 µm spectral range. Modelling is based on ab initio calculation techniques, which allow the properties of crystals to be reproduced at any P/T condition with the least possible amount of a priori empirical information. Spectra are calculated by evaluating vibrational frequencies at the cell volumes corresponding to 0 K, 300 K and 1000 K (the extreme conditions), taking zero-point effects into account. The aim of this work is to study experimentally the effects of temperature, composition and grain size on the shifts of the emissivity band minima. The outcomes will benefit the modelling of emissivity spectra with ab initio methods, which already successfully predict the band shifts due to temperature and composition but do not yet account for band-shape changes due to grain size variations. Considering the chameleon-like behavior already observed for Mercury's surface (Helbert et al. 2013), this study aims to identify the main spectral features attributable to composition and temperature. Our results are used to create a theoretical background for interpreting the high temperature infrared emissivity spectra from MERTIS onboard the ESA BepiColombo mission to Mercury (Helbert et al. 2010).

  3. Mechanisms of Laser-Induced Dissection and Transport of Histologic Specimens

    PubMed Central

    Vogel, Alfred; Lorenz, Kathrin; Horneffer, Verena; Hüttmann, Gereon; von Smolinski, Dorthe; Gebert, Andreas

    2007-01-01

    Rapid contact- and contamination-free procurement of histologic material for proteomic and genomic analysis can be achieved by laser microdissection of the sample of interest followed by laser-induced transport (laser pressure catapulting). The dynamics of laser microdissection and laser pressure catapulting of histologic samples of 80 μm diameter was investigated by means of time-resolved photography. The working mechanism of microdissection was found to be plasma-mediated ablation initiated by linear absorption. Catapulting was driven by plasma formation when tightly focused pulses were used, and by photothermal ablation at the bottom of the sample when defocused pulses producing laser spot diameters larger than 35 μm were used. With focused pulses, driving pressures of several hundred MPa accelerated the specimens to initial velocities of 100–300 m/s before they were rapidly slowed down by air friction. When the laser spot was increased to a size comparable to or larger than the sample diameter, both driving pressure and flight velocity decreased considerably. Based on a characterization of the thermal and optical properties of the histologic specimens and supporting materials used, we calculated the evolution of the heat distribution in the sample. Selected catapulted samples were examined by scanning electron microscopy or analyzed by real-time reverse-transcriptase polymerase chain reaction. We found that catapulting of dissected samples results in little collateral damage when the laser pulses are either tightly focused or when the laser spot size is comparable to the specimen size. By contrast, moderate defocusing with spot sizes up to one-third of the specimen diameter may involve significant heat and ultraviolet exposure. Potential side effects are maximal when samples are catapulted directly from a glass slide without a supporting polymer foil. PMID:17766336

  4. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
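
    The sketch below illustrates the leave-one-out bootstrap idea that the RLOOB and ABS methods build on: each specimen is predicted only by classifiers trained on bootstrap learning sets that do not contain it, and the prediction errors are averaged. The simulated data and the nearest-centroid classifier are stand-ins; the paper's repeated-LOOB and learning-curve adjustment are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated high-dimensional data: 40 specimens, 500 "genes", two classes
    # that differ in the first 10 features (a stand-in for microarray data).
    n, p = 40, 500
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    X[y == 1, :10] += 1.0

    def nearest_centroid_fit(X, y):
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def nearest_centroid_predict(model, X):
        classes = sorted(model)
        d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
        return np.array(classes)[d.argmin(axis=1)]

    def leave_one_out_bootstrap(X, y, n_boot=100):
        """Leave-one-out bootstrap: each specimen is predicted only by bootstrap
        learning sets that do not contain it; errors are then averaged."""
        n = len(y)
        errors = np.full((n_boot, n), np.nan)
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)          # bootstrap learning set
            out = np.setdiff1d(np.arange(n), idx)     # specimens left out
            if out.size == 0:
                continue
            model = nearest_centroid_fit(X[idx], y[idx])
            errors[b, out] = nearest_centroid_predict(model, X[out]) != y[out]
        return np.nanmean(errors)                     # mean over left-out predictions

    print(f"LOO-bootstrap prediction error: {leave_one_out_bootstrap(X, y):.3f}")
    ```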

  5. Sample size determination for GEE analyses of stepped wedge cluster randomized trials.

    PubMed

    Li, Fan; Turner, Elizabeth L; Preisser, John S

    2018-06-19

    In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.

  6. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.

  7. Crystallization of hard spheres revisited. II. Thermodynamic modeling, nucleation work, and the surface of tension.

    PubMed

    Richard, David; Speck, Thomas

    2018-06-14

    Combining three numerical methods (forward flux sampling, seeding of droplets, and finite-size droplets), we probe the crystallization of hard spheres over the full range from close to coexistence to the spinodal regime. We show that all three methods allow us to sample different regimes and agree perfectly in the ranges where they overlap. By combining the nucleation work calculated from forward flux sampling of small droplets and the nucleation theorem, we show how to compute the nucleation work spanning three orders of magnitude. Using a variation of the nucleation theorem, we show how to extract the pressure difference between the solid droplet and ambient liquid. Moreover, combining the nucleation work with the pressure difference allows us to calculate the interfacial tension of small droplets. Our results demonstrate that employing bulk quantities yields inaccurate results for the nucleation rate.

  8. Experimental light scattering by ultrasonically controlled small particles - Implications for Planetary Science

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.; Penttilä, A.; Maconi, G.; Kassamakov, I.; Markkanen, J.; Martikainen, J.; Väisänen, T.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.

    2017-09-01

    We present the results obtained with our newly developed 3D scatterometer - a setup for precise multi-angular measurements of light scattered by mm- to µm-sized samples held in place by sound. These measurements are cross-validated against the modeled light-scattering characteristics of the sample, i.e., the intensity and the degree of linear polarization of the reflected light, calculated with state-of-the-art electromagnetic techniques. We demonstrate a unique non-destructive approach to derive the optical properties of small grain samples which facilitates research on highly valuable planetary materials, such as samples returned from space missions or rare meteorites.

  9. Structure, microstructure, and size dependent catalytic properties of nanostructured ruthenium dioxide

    NASA Astrophysics Data System (ADS)

    Nowakowski, Pawel; Dallas, Jean-Pierre; Villain, Sylvie; Kopia, Agnieszka; Gavarri, Jean-Raymond

    2008-05-01

    Nanostructured powders of ruthenium dioxide RuO2 were synthesized via a sol-gel route involving acidic solutions with pH varying between 0.4 and 4.5. The RuO2 nanopowders were characterized by X-ray diffraction and by scanning and transmission electron microscopy (SEM and TEM). Rietveld refinement of the mean crystal structure was performed on the RuO2 nanopowders and on a crystallized standard RuO2 sample. Crystallite sizes determined from X-ray diffraction profiles and TEM analysis varied in the range of 4-10 nm, with a minimum crystallite dimension at pH = 1.5. Good agreement was obtained between crystallite sizes calculated from the Williamson-Hall analysis of the X-ray data and from direct TEM observations. The tetragonal crystal cell parameter (a) and cell volumes of the nanostructured samples were larger than those of the standard RuO2 sample. In addition, the [RuO6] oxygen octahedra of the rutile structure also depended on crystal size. Catalytic conversion of methane by these nanostructured RuO2 catalysts was studied as a function of pH, catalytic interaction time, air-methane composition, and catalysis temperature, by means of Fourier transform infrared (FTIR) spectroscopy coupled to a homemade catalytic cell. The catalytic efficiency, defined from the FTIR absorption band intensities I(CO2), was maximal for the sample prepared at pH = 1.5 and correlated mainly with crystallite dimensions. No significant catalytic effect was observed for sintered RuO2 samples.
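
    A minimal sketch of the Williamson-Hall analysis mentioned above: β·cosθ is regressed on 4·sinθ over several reflections, the intercept giving the size term Kλ/D and the slope the microstrain. The peak positions, breadths, wavelength and shape factor below are illustrative assumptions, not values from this study.

    ```python
    import numpy as np

    wavelength = 0.15406      # nm, Cu K-alpha (assumed source)
    K = 0.9                   # Scherrer shape factor (common assumption)

    # Illustrative peak list: 2-theta positions (deg) and integral breadths (deg)
    two_theta_deg = np.array([28.0, 35.1, 40.0, 54.3, 57.9])
    beta_deg      = np.array([0.90, 0.95, 1.00, 1.10, 1.15])

    theta = np.radians(two_theta_deg) / 2.0
    beta  = np.radians(beta_deg)                  # breadths must be in radians

    # Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)
    x = 4.0 * np.sin(theta)
    y = beta * np.cos(theta)
    slope, intercept = np.polyfit(x, y, 1)

    D = K * wavelength / intercept                # crystallite size (nm)
    strain = slope                                # dimensionless microstrain
    print(f"crystallite size ~ {D:.1f} nm, microstrain ~ {strain:.2e}")
    ```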

  10. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    PubMed

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to achieve 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance of only one drug even if both are equally effective, and of missing important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
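
    As a rough cross-check of the sizing quoted above, the sketch below applies the standard Schoenfeld approximation for a two-arm log-rank comparison powered to detect a 30% reduction in the hazard of death. The 1:1 allocation and the assumed overall death probabilities are illustrative assumptions, not protocol values; with roughly a third of patients experiencing the event, the result lands near the reported 750 patients.

    ```python
    from math import log, ceil
    from scipy.stats import norm

    def schoenfeld_events(hr, alpha=0.05, power=0.80):
        """Events needed to detect hazard ratio `hr` with a two-sided
        log-rank test and 1:1 allocation (Schoenfeld approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 4 * z**2 / log(hr)**2

    events = schoenfeld_events(0.70)               # 30% reduction in hazard of death
    for p_event in (0.30, 0.35, 0.40):             # assumed overall death probability
        n = ceil(events / p_event)
        print(f"~{events:.0f} events needed; with P(death)={p_event:.2f} -> ~{n} patients")
    ```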

  11. Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    PubMed Central

    Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862

  12. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
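
    The sketch below shows the kind of calculation these variance estimates support, simplified to a two-group z-approximation on log(x+1)-transformed AUC at a single follow-up visit; the standard deviation and the 30% relative difference are hypothetical placeholders rather than the TrialNet estimates, and the repeated-measures structure and age mixtures described above are ignored.

    ```python
    from math import ceil, log
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Two-sample z-approximation: subjects per group to detect a mean
        difference `delta` on the analysis (transformed) scale."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (sd * z / delta) ** 2)

    # Illustrative planning inputs (placeholders, not TrialNet values):
    sd_log = 0.45                          # SD of log(x+1) AUC at follow-up
    relative_difference = 0.30             # treatment preserves 30% more C-peptide
    delta = log(1 + relative_difference)   # approximate difference on the log scale

    print(n_per_group(delta, sd_log))      # -> subjects per arm
    ```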

  13. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.

  14. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference for the threshold estimates is based on approximate analytical standard errors and on bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.

  15. Reiki Therapy for Symptom Management in Children Receiving Palliative Care: A Pilot Study.

    PubMed

    Thrane, Susan E; Maurer, Scott H; Ren, Dianxu; Danford, Cynthia A; Cohen, Susan M

    2017-05-01

    Pain may be reported in one-half to three-fourths of children with cancer and other terminal conditions, and anxiety in about one-third of them. Pharmacologic methods do not always give satisfactory symptom relief. Complementary therapies such as Reiki may help children manage symptoms. This pre-post mixed-methods single-group pilot study examined feasibility, acceptability, and the outcomes of pain, anxiety, and relaxation using Reiki therapy with children receiving palliative care. A convenience sample of children ages 7 to 16 and their parents was recruited from a palliative care service. Two 24-minute Reiki sessions were completed in the children's homes. Paired t tests or Wilcoxon signed-rank tests were calculated to compare change from pre to post for outcome variables. Significance was set at P < .10. Cohen d effect sizes were calculated. The final sample included 8 verbal and 8 nonverbal children, 16 mothers, and 1 nurse. All mean scores for outcome variables decreased from pre- to posttreatment for both sessions. Significant decreases were found in pain for treatment 1 in nonverbal children (P = .063) and in respiratory rate for treatment 2 in verbal children (P = .009). Cohen d effect sizes were medium to large for most outcome measures. The decreased mean scores indicate that Reiki therapy did decrease pain, anxiety, heart rate, and respiratory rate, but the small sample size limited statistical significance. This preliminary work suggests that complementary methods of treatment such as Reiki may be beneficial in supporting traditional methods to manage pain and anxiety in children receiving palliative care.
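
    For readers unfamiliar with the effect-size convention used above, the sketch below computes a paired t test and a Cohen d based on the pre-post differences (one common convention for paired designs) for hypothetical 0-10 pain scores; the numbers are invented, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 0-10 pain scores before and after one Reiki session
    pre  = np.array([6, 5, 7, 4, 6, 5, 8, 6], dtype=float)
    post = np.array([4, 4, 5, 3, 5, 4, 6, 5], dtype=float)

    diff = pre - post
    t, p = stats.ttest_rel(pre, post)              # paired t test
    d = diff.mean() / diff.std(ddof=1)             # Cohen d on the paired differences

    print(f"mean change = {diff.mean():.2f}, t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
    ```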

  16. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary objective was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates obtained with the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variation of the calculated diesel particulate densities across particle diameters supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.

  17. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Influence of Zn doping on structural, optical and dielectric properties of LaFeO3

    NASA Astrophysics Data System (ADS)

    Manzoor, Samiya; Husain, Shahid

    2018-05-01

    The effect of Zn doping on the structural, optical and dielectric properties of nano-crystalline LaFe1-xZnxO3 (0.0 ≤ x ≤ 0.3) samples has been investigated. The samples were synthesized using a conventional solid state reaction route. X-ray diffraction patterns with Rietveld analysis confirm the single-phase nature of the samples. Sample formation has been further confirmed by FTIR spectroscopy. All the samples form in orthorhombic crystal symmetry with the Pbnm space group. The average crystallite sizes, calculated from Scherrer's formula, lie below 50 nm. Rietveld refinement is used to determine lattice parameters, bond lengths and unit cell volume. Williamson-Hall analysis has been performed to calculate the crystallite size and lattice strain. Crystallite sizes are found to be in the nanometer range while the strain is of the order of 10⁻³. Zn doping leads to an expansion of the cell volume due to tensile strain. The optical bandgap has been determined from the Kubelka-Munk function using Tauc's relation. Zn doping in LaFeO3 leads to a decrease in the optical bandgap. The dielectric constant as a function of frequency is measured in the frequency range of 75 kHz–5 MHz. The dielectric behavior has been investigated by analyzing the 'universal dielectric response' (UDR) model. The dielectric constant (ε‧) shows colossal values with Zn doping over the whole frequency range, whereas the imaginary part (ε″) shows relaxational behavior, which may be attributed to the strong correlation between the conduction mechanism and the dielectric behavior in ferrites. Cole-Cole analysis confirms that the dielectric material does not follow ideal Debye theory but shows a distribution of relaxation times. The a.c. conductivity increases with frequency and with Zn doping due to increased polaron hopping.
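
    The Scherrer estimate cited above is a one-line calculation, D = Kλ/(β·cosθ), with the peak breadth β in radians; the reflection position, FWHM, Cu Kα wavelength and shape factor K = 0.9 below are illustrative assumptions rather than values measured in this study.

    ```python
    import numpy as np

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
        """Crystallite size D = K*lambda / (beta * cos(theta)), beta in radians."""
        theta = np.radians(two_theta_deg) / 2.0
        beta = np.radians(fwhm_deg)
        return K * wavelength_nm / (beta * np.cos(theta))

    # Illustrative reflection: 2-theta = 32.2 deg, FWHM = 0.25 deg (Cu K-alpha assumed)
    print(f"D ~ {scherrer_size(32.2, 0.25):.0f} nm")
    ```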

  19. Are fixed grain size ratios useful proxies for loess sedimentation dynamics? Experiences from Remizovka, Kazakhstan

    NASA Astrophysics Data System (ADS)

    Schulte, Philipp; Sprafke, Tobias; Rodrigues, Leonor; Fitzsimmons, Kathryn E.

    2018-04-01

    Loess-paleosol sequences (LPS) are sensitive terrestrial archives of past aeolian dynamics and paleoclimatic changes within the Quaternary. Grain size (GS) analysis is commonly used to interpret aeolian dynamics and climate influences on LPS, based on granulometric parameters such as specific GS classes, ratios of GS classes and statistical manipulation of GS data. However, the GS distribution of a loess sample is not solely a function of aeolian dynamics; rather, complex polygenetic depositional and post-depositional processes must be taken into account. This study assesses the reliability of fixed GS ratios as proxies for past sedimentation dynamics using the case study of Remizovka in southeast Kazakhstan. Continuous sampling of the upper 8 m of the profile, which shows extremely weak pedogenic alteration and is therefore dominated by primary aeolian activity, indicates that fixed GS ratios do not adequately serve as proxies for loess sedimentation dynamics. We find, through the calculation of single-value parameters, that "true" variations within sensitive GS classes are masked by relative changes in the more frequent classes. Heatmap signatures allow GS variability within LPS to be visualized without significant data loss within the measured classes of a sample, or across all measured samples. We also examine, by duplicate measurements, the effect on GS ratio calculation of two commonly used laser diffraction devices, the Beckman Coulter (LS13320) and a Malvern Mastersizer Hydro (MM2000), as well as the applicability and significance of the so-called "twin peak ratio" previously developed on samples from the same section. The LS13320 provides higher-resolution results than the MM2000; nevertheless, the GS ratios related to variations in the silt-sized fraction were comparable. However, we could not detect a twin peak within the coarse silt as reported in the original study using the same device. Our GS measurements differ from previous work at Remizovka in several instances, calling into question the interpretation of paleoclimatic implications from GS data alone.

  20. Periodontal Research: Basics and beyond – Part II (Ethical issues, sampling, outcome measures and bias)

    PubMed Central

    Avula, Haritha

    2013-01-01

    A good research beginning refers to formulating a well-defined research question, developing a hypothesis and choosing an appropriate study design. The first part of the review series has discussed these issues in depth and this paper intends to throw light on other issues pertaining to the implementation of research. These include the various ethical norms and standards in human experimentation, the eligibility criteria for the participants, sampling methods and sample size calculation, various outcome measures that need to be defined and the biases that can be introduced in research. PMID:24174747

  1. Comparing Single Case Design Overlap-Based Effect Size Metrics From Studies Examining Speech Generating Device Interventions

    PubMed Central

    Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.

    2017-01-01

    Meaningfully synthesizing single case experimental data from intervention studies of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with intellectual and developmental disabilities (IDD) with moderate to profound levels of impairment. The effect size metrics included percent of data points exceeding the median (PEM), percent of nonoverlapping data (PND), improvement rate difference (IRD), percent of all nonoverlapping data (PAND), Phi, nonoverlap of all pairs (NAP), and Tau-novlap. Results showed that among the seven effect size metrics, PAND, Phi, IRD, and PND were more effective in quantifying intervention effects for the data sample (N = 285 phase or condition contrasts). Results are discussed with respect to issues concerning extracting and calculating effect sizes, visual analysis, and SCD intervention research in IDD. PMID:27119210
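
    The sketch below computes two of the seven metrics named above, PND and NAP, for a hypothetical baseline/intervention contrast; the remaining metrics (PEM, IRD, PAND, Phi, Tau-novlap) involve similar overlap counting but are not shown, and the definitions used here are the usual textbook forms rather than the authors' exact computations.

    ```python
    import numpy as np

    def pnd(baseline, treatment):
        """Percent of nonoverlapping data: share of treatment points exceeding
        the highest baseline point (assumes improvement = increase)."""
        return 100.0 * np.mean(np.asarray(treatment) > np.max(baseline))

    def nap(baseline, treatment):
        """Nonoverlap of all pairs: share of (baseline, treatment) pairs in which
        the treatment point is higher, counting ties as half."""
        b = np.asarray(baseline)[:, None]
        t = np.asarray(treatment)[None, :]
        wins = (t > b).sum() + 0.5 * (t == b).sum()
        return wins / (b.size * t.size)

    # Hypothetical single-case data: correct speech-generating-device requests per session
    baseline  = [1, 0, 2, 1, 1]
    treatment = [3, 4, 2, 5, 6, 5]

    print(f"PND = {pnd(baseline, treatment):.0f}%  NAP = {nap(baseline, treatment):.2f}")
    ```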

  2. 75 FR 46958 - Proposed Fair Market Rents for the Housing Choice Voucher Program and Moderate Rehabilitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    ... program staff. Questions on how to conduct FMR surveys or concerning further methodological explanations... insufficient sample sizes. The areas covered by this estimation method had less than the HUD standard of 200...-bedroom FMR for that area's CBSA as calculated using methods employed for past metropolitan area FMR...

  3. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Treesearch

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  4. 77 FR 47590 - Notice of Request for a Revision to and Extension of Approval of an Information Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...] Notice of Request for a Revision to and Extension of Approval of an Information Collection; Qualitative... associated with qualitative customer and stakeholder feedback on service delivery by the Animal and Plant...

  5. Demographic trends in Claremont California’s street tree population

    Treesearch

    Natalie S. van Doorn; E. Gregory McPherson

    2018-01-01

    The aim of this study was to quantify street tree population dynamics in the city of Claremont, CA. A repeated measures survey (2000 and 2014) based on a stratified random sampling approach across size classes and for the most abundant 21 species was analyzed to calculate removal, growth, and replacement planting rates. Demographic rates were estimated using a...

  6. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
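
    The record above is truncated, so the paper's partial-correction formula is not reproduced here; the sketch below shows only the classical Spearman correction it builds on, r_true = r_xy / sqrt(r_xx · r_yy), with an illustrative observed correlation and illustrative reliabilities.

    ```python
    from math import sqrt

    def spearman_disattenuate(r_xy, rel_x, rel_y):
        """Classical correction for attenuation: r_true = r_xy / sqrt(rel_x * rel_y)."""
        return r_xy / sqrt(rel_x * rel_y)

    # Illustrative values: observed correlation 0.42, reliabilities 0.80 and 0.70
    print(f"corrected r = {spearman_disattenuate(0.42, 0.80, 0.70):.2f}")
    ```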

  7. A Field Study of Performance Among Embarked Infantry Personnel Exposed to Waterborne Motion

    DTIC Science & Technology

    2012-09-01

    was designed with four groups with 16 participants per group to accommodate the calculated sample size and the maximum seating capacity of the...

  8. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874

  9. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion.

    PubMed

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-06-01

    Crowdsourcing has become an increasingly important tool to address many problems - from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14-16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94-0.96). Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45-55 experts.
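
    A compact sketch of the resampling procedure described above: scorer samples of increasing size are drawn with replacement, the ideas are re-ranked by mean score, and the overlap of the resulting top 20 with the full-panel top 20 is recorded. The score matrix is simulated with arbitrary support levels, so the numerical output will not match the CHNRI results.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated scores: 91 scorers x 205 research ideas, each idea having its own
    # underlying support level (a stand-in for the real CHNRI scoring data).
    n_scorers, n_ideas = 91, 205
    idea_support = rng.uniform(0.2, 0.9, size=n_ideas)
    scores = rng.binomial(1, idea_support, size=(n_scorers, n_ideas)).astype(float)

    def top20(score_matrix):
        return set(np.argsort(score_matrix.mean(axis=0))[-20:])

    reference_top20 = top20(scores)

    for sample_size in (15, 30, 45, 55, 75, 90):
        overlaps = []
        for _ in range(200):                       # repeated samples with replacement
            idx = rng.integers(0, n_scorers, size=sample_size)
            overlaps.append(len(top20(scores[idx]) & reference_top20))
        print(f"sample of {sample_size:2d} scorers: median overlap with top 20 = "
              f"{int(np.median(overlaps))}/20")
    ```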

  10. Sample size, power calculations, and their implications for the cost of thorough studies of drug induced QT interval prolongation.

    PubMed

    Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John

    2004-12-01

    Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data-sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, standard deviation of individual mean QTc values and mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
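
    The contrast between parallel-group and within-subject designs described above can be reproduced with the usual normal-approximation sample-size formulas, as sketched below. The 5 ms margin, 80% power and 0.05 significance level are as stated in the abstract, and 5.2 ms is the reported average within-subject SD; the 15 ms between-subject SD is an illustrative assumption, and the published figures (>140 per group, 18 per group) were presumably obtained with exact t-based rather than z-based calculations.

    ```python
    from math import ceil
    from scipy.stats import norm

    ALPHA, POWER, DELTA = 0.05, 0.80, 5.0          # 5 ms QTc change, as in the abstract
    z = norm.ppf(1 - ALPHA / 2) + norm.ppf(POWER)

    def n_parallel_per_group(sd_between):
        """Two independent groups (parallel design)."""
        return ceil(2 * (sd_between * z / DELTA) ** 2)

    def n_crossover(sd_within):
        """Within-subject comparison: one on-treatment minus one on-placebo QTc
        value per subject, so Var(difference) = 2 * sd_within**2."""
        return ceil(2 * (sd_within * z / DELTA) ** 2)

    print(n_parallel_per_group(15.0))   # illustrative between-subject SD -> ~142 per group
    print(n_crossover(5.2))             # within-subject SD from the abstract -> ~17 subjects
    ```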

  11. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

    Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to reliably estimate optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects aged 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 subjects are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
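
    The weighting idea described above can be sketched with a generic mean-variance argument: weights proportional to Σ⁻¹μ, where μ is the vector of mean component changes and Σ their covariance, maximize the signal-to-noise ratio of the weighted composite. The pilot data below are simulated and the estimator is the textbook version, which may differ in detail from the authors' procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated "pilot study": 12-month change scores on 3 z-scored components
    # for 100 subjects (a stand-in for real pilot data such as the ADCS cohort).
    true_mean = np.array([-0.30, -0.15, -0.05])      # mean decline per component
    cov = np.array([[1.0, 0.4, 0.3],
                    [0.4, 1.0, 0.2],
                    [0.3, 0.2, 1.0]])
    changes = rng.multivariate_normal(true_mean, cov, size=100)

    mu = changes.mean(axis=0)
    sigma = np.cov(changes, rowvar=False)

    # Weights proportional to Sigma^-1 mu maximize |mean|/SD of the composite change
    w = np.linalg.solve(sigma, mu)
    w = w / np.abs(w).sum()                           # overall scale is arbitrary

    def snr(weights, data):
        composite = data @ weights
        return abs(composite.mean()) / composite.std(ddof=1)

    equal = np.full(3, 1 / 3)
    print("estimated optimal weights:", np.round(w, 3))
    print("SNR, equal weights  :", round(snr(equal, changes), 3))
    print("SNR, optimal weights:", round(snr(w, changes), 3))
    ```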

  12. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechiidae), on Potato Cultivars.

    PubMed

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. The spatial distribution and a fixed-precision sequential sampling plan for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated using Taylor's power law and Iwao's patchiness regression. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed using Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 5083 to 1054 and from 5100 to 1050 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, which have different resistance levels, were not significantly different. According to the calculated stop lines, sampling must be continued until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using resampling for validation of sampling plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
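
    The two calculations behind the plan described above are sketched below: Taylor's power law s² = a·m^b is fitted by regressing log variance on log mean, and the coefficients then give both the sample size needed for a fixed precision D (n = a·m^(b−2)/D²) and Green's cumulative-count stop line. The count data are invented and the stop-line expression is the standard textbook form, which may differ in detail from the authors' implementation.

    ```python
    import numpy as np

    # Invented field counts: mean and variance of eggs+larvae per leaf across plots
    mean_density = np.array([0.2, 0.5, 1.1, 2.3, 4.0, 7.5])
    variance     = np.array([0.3, 0.8, 1.9, 4.5, 8.6, 17.0])

    # Taylor's power law: s^2 = a * m^b, fitted as log(s^2) = log(a) + b*log(m)
    b, log_a = np.polyfit(np.log(mean_density), np.log(variance), 1)
    a = np.exp(log_a)

    def required_n(m, D):
        """Sample size for fixed precision D (SE/mean) at mean density m."""
        return a * m ** (b - 2) / D ** 2

    def green_stop_line(n, D):
        """Green's cumulative-count stop line T_n after n sample units."""
        return (D ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

    for D in (0.25, 0.10):
        print(f"D={D}: n at m=1 per leaf ~ {required_n(1.0, D):.0f}, "
              f"stop line after 100 leaves ~ {green_stop_line(100, D):.0f} individuals")
    ```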

  13. The effects of neutralized particles on the sampling efficiency of polyurethane foam used to estimate the extrathoracic deposition fraction.

    PubMed

    Tomyn, Ronald L; Sleeth, Darrah K; Thiese, Matthew S; Larson, Rodney R

    2016-01-01

    In addition to chemical composition, the site of deposition of inhaled particles is important for determining the potential health effects of an exposure. As a result, the International Organization for Standardization adopted a particle deposition sampling convention, which includes extrathoracic particle deposition sampling conventions for the anterior nasal passages (ET1) and the posterior nasal and oral passages (ET2). This study assessed how well a polyurethane foam insert placed in an Institute of Occupational Medicine (IOM) sampler can match an extrathoracic deposition sampling convention, while accounting for possible static buildup in the test particles. In this way, the study aimed to assess whether neutralized particles affected the performance of this sampler for estimating extrathoracic particle deposition. A total of three different particle sizes (4.9, 9.5, and 12.8 µm) were used. For each trial, one particle size was introduced into a low-speed wind tunnel with the wind speed set at 0.2 m/s (∼40 ft/min). This wind speed was chosen to closely match the conditions of most indoor working environments. Each particle size was tested twice, either neutralized using a high-voltage neutralizer or left in its normal (non-neutralized) state as standard particles. IOM samplers were fitted with a polyurethane foam insert and placed on a rotating mannequin inside the wind tunnel. Foam sampling efficiencies were calculated for all trials for comparison against the normalized ET1 sampling deposition convention. The foam sampling efficiencies matched the ET1 deposition convention well for the larger particle sizes, but showed a general trend of underestimation for all three particle sizes. The results of a Wilcoxon rank sum test also showed that only at 4.9 µm was there a statistically significant difference (p-value = 0.03) between the foam sampling efficiency with the standard particles and with the neutralized particles. This is interpreted to mean that static buildup may be occurring and that neutralizing the 4.9 µm particles did affect the performance of the foam sampler when estimating extrathoracic particle deposition.

  14. Sample substitution can be an acceptable data-collection strategy: the case of the Belgian Health Interview Survey.

    PubMed

    Demarest, Stefaan; Molenberghs, Geert; Van der Heyden, Johan; Gisle, Lydia; Van Oyen, Herman; de Waleffe, Sandrine; Van Hal, Guido

    2017-11-01

    Substitution of non-participating households is used in the Belgian Health Interview Survey (BHIS) as a method to obtain the predefined net sample size. Yet the possible effects of applying substitution on response rates and health estimates remain uncertain. In this article, the process of substitution and its impact on response rates and health estimates are assessed. The response rates (RR), both at household and individual level, according to the sampling criteria were calculated for each stage of the substitution process, together with the individual accrual rate (AR). Unweighted and weighted health estimates were calculated before and after applying substitution. Of the 10,468 members of 4878 initial households, 5904 members (RRind: 56.4%) of 2707 households (RRhh: 55.5%) participated. For the three successive (matched) substitutes, the RR dropped to 45%. The composition of the net sample resembles that of the initial sample. Applying substitution did not produce any important distorting effects on the estimates. Applying substitution leads to an increase in non-participation, but does not affect the estimates.

  15. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    PubMed

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.

  16. Preparation of improved catalytic materials for water purification

    NASA Astrophysics Data System (ADS)

    Cherkezova-Zheleva, Z.; Paneva, D.; Tsvetkov, M.; Kunev, B.; Milanova, M.; Petrov, N.; Mitov, I.

    2014-04-01

    The aim of the present paper was to study the preparation of catalytic materials for water purification. Iron oxide (Fe3O4) samples supported on activated carbon were prepared by a wet impregnation method and low-temperature heating in an inert atmosphere. The as-prepared samples, the activated samples and the samples after the catalytic test were characterized by Mössbauer spectroscopy and X-ray diffraction. The X-ray diffraction patterns of the prepared samples show broad, low-intensity peaks of the magnetite phase and the characteristic peaks of the activated carbon. The average crystallite size of the magnetite particles was calculated to be below 20 nm. The Mössbauer spectra of the prepared materials show a superposition of doublet lines or of doublet and sextet components. The hyperfine parameters obtained from the spectra reveal the presence of a magnetite phase with nanosized particles. Relaxation phenomena, i.e. superparamagnetism or collective magnetic excitation behavior, were registered in both cases, and low-temperature Mössbauer spectra confirm this observation. The application of the materials as photo-Fenton catalysts for the degradation of organic pollutants was studied. A high degree of dye adsorption, an extremely high reaction rate and fast dye degradation were obtained. The photocatalytic behaviour of the more active sample was further enhanced using mechanochemical activation (MCA). The nanometric size and high dispersion of the photocatalyst particles influence both the adsorption and the degradation mechanism of the reaction. The results showed that all studied photocatalysts effectively decompose organic pollutants under UV light irradiation. Partial oxidation of the samples after the catalytic tests was registered. The combination of magnetic particles with high photocatalytic activity meets both the requirements of photocatalytic degradation of water contaminants and of recovery for cyclic reuse of the material.

  17. Optical absorption and TEM studies of silver nanoparticle embedded BaO-CaF{sub 2}-P{sub 2}O{sub 5} glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Manoj Kumar, E-mail: manukokkal01@gmail.com; Shashikala, H. D.

    Silver nanoparticle embedded 30BaO-20CaF{sub 2}-50P{sub 2}O{sub 5}-4Ag{sub 2}O-4SnO glasses were prepared by melt-quenching and a subsequent heat treatment process. Silver-doped glasses were heat treated at 500 °C, 525 °C and 550 °C for a fixed duration of 10 hours to incorporate metal nanoparticles into the glass matrix. The appearance and shift of the surface plasmon resonance (SPR) peak positions in the optical absorption spectra of the heat treated glass samples indicated that both the formation and the growth of nanoparticles depended on the heat treatment temperature. The glass sample heat treated at 525 °C showed an SPR peak around 3 eV, which indicated that spherical nanoparticles smaller than 20 nm were formed inside the glass matrix, whereas the sample heat treated at 550 °C showed a size-dependent red shift of the SPR peak due to the presence of silver nanoparticles larger than 20 nm. The size of the nanoparticles calculated using the full-width at half-maximum (FWHM) of the absorption band showed good agreement with the particle size obtained from transmission electron microscopy (TEM) analysis.

  18. Size-segregated compositional analysis of aerosol particles collected in the European Arctic during the ACCACIA campaign

    NASA Astrophysics Data System (ADS)

    Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.

    2016-03-01

    Single-particle compositional analysis of filter samples collected on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size-segregated particle compositions and size distributions, and these were compared to corresponding data from wing-mounted optical particle counters. Reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYbrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.

  19. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    PubMed Central

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the curve of C-peptide following a 2-hour mixed meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
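
    The roughly 50% saving from covariate adjustment can be sketched with the standard relation that ANCOVA reduces the outcome variance by a factor of (1 - R^2); the effect size, standard deviation and R^2 below are illustrative placeholders, not the TrialNet estimates.

      # Rough sketch: how ANCOVA covariate adjustment (residual variance sigma^2*(1-R^2))
      # reduces a two-arm sample size. All numbers are illustrative, not from the study.
      from scipy.stats import norm

      def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 2 * (z * sigma / delta) ** 2

      sigma_unadj = 0.40      # SD of the 12-month outcome (illustrative)
      r2 = 0.50               # variance explained by age + baseline C-peptide (illustrative)
      sigma_adj = sigma_unadj * (1 - r2) ** 0.5
      delta = 0.20            # treatment effect to detect (illustrative)

      print("unadjusted n/arm:", round(n_per_arm(delta, sigma_unadj)))
      print("ANCOVA-adjusted n/arm:", round(n_per_arm(delta, sigma_adj)))
      # With R^2 = 0.5 the adjusted sample size is about half the unadjusted one.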

  20. Considering aspects of the 3Rs principles within experimental animal biology.

    PubMed

    Sneddon, Lynne U; Halsey, Lewis G; Bury, Nic R

    2017-09-01

    The 3Rs - Replacement, Reduction and Refinement - are embedded into the legislation and guidelines governing the ethics of animal use in experiments. Here, we consider the advantages of adopting key aspects of the 3Rs into experimental biology, represented mainly by the fields of animal behaviour, neurobiology, physiology, toxicology and biomechanics. Replacing protected animals with less sentient forms or species, cells, tissues or computer modelling approaches has been broadly successful. However, many studies investigate specific models that exhibit a particular adaptation, or a species that is a target for conservation, such that their replacement is inappropriate. Regardless of the species used, refining procedures to ensure the health and well-being of animals prior to and during experiments is crucial for the integrity of the results and legitimacy of the science. Although the concepts of health and welfare are developed for model organisms, relatively little is known regarding non-traditional species that may be more ecologically relevant. Studies should reduce the number of experimental animals by employing the minimum suitable sample size. This is often calculated using power analyses, which is associated with making statistical inferences based on the P-value, yet P-values often leave scientists on shaky ground. We endorse focusing on effect sizes accompanied by confidence intervals as a more appropriate means of interpreting data; in turn, sample size could be calculated based on effect size precision. Ultimately, the appropriate employment of the 3Rs principles in experimental biology empowers scientists in justifying their research, and results in higher-quality science. © 2017. Published by The Company of Biologists Ltd.
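
    In the spirit of the precision-based planning suggested above, the sketch below finds the per-group n at which the 95% confidence interval for a standardized mean difference reaches a target half-width, using the common large-sample approximation for the standard error of Cohen's d; the target precision and anticipated effect size are illustrative.

      # Sketch: choose the per-group n so the 95% CI half-width for Cohen's d reaches a
      # target precision. Uses the large-sample approximation
      # SE(d) ~ sqrt((n1+n2)/(n1*n2) + d^2/(2*(n1+n2))). Values are illustrative.
      from math import sqrt
      from scipy.stats import norm

      def ci_halfwidth(d, n_per_group, conf=0.95):
          n1 = n2 = n_per_group
          se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
          return norm.ppf(1 - (1 - conf) / 2) * se

      target = 0.25            # desired CI half-width for d (illustrative)
      d_planned = 0.5          # anticipated effect size (illustrative)
      n = 2
      while ci_halfwidth(d_planned, n) > target:
          n += 1
      print("per-group n for a +/-0.25 CI on d:", n)   # roughly 127 per group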

  1. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  2. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
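
    A minimal sketch of the kind of metrics discussed in the two records above, computed with scipy's Voronoi tessellation on synthetic cone coordinates: cone density within a square sampling window and the fraction of six-sided (hexagonally packed) cells, with open border cells excluded in place of the buffer zone the authors recommend. It is not the authors' pipeline.

      # Sketch: cone density and fraction of 6-sided Voronoi cells in a square sampling
      # window. Cone coordinates are synthetic; open cells at the window border are
      # excluded, mimicking the recommended buffer zone.
      import numpy as np
      from scipy.spatial import Voronoi

      rng = np.random.default_rng(1)
      window = 160.0                                  # window side in micrometres
      cones = rng.uniform(0, window, size=(400, 2))   # synthetic cone centres

      density = len(cones) / (window / 1000.0) ** 2   # cones per mm^2
      vor = Voronoi(cones)

      n_sides = []
      for region_idx in vor.point_region:
          region = vor.regions[region_idx]
          if -1 in region or len(region) == 0:        # open (border) cell, skip it
              continue
          n_sides.append(len(region))

      hex_fraction = np.mean(np.array(n_sides) == 6)
      print(f"density = {density:.0f} cones/mm^2, hexagonal cells = {hex_fraction:.1%}")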

  3. Sampling of suspended particulate matter using particle traps in the Rhône River: Relevance and representativeness for the monitoring of contaminants.

    PubMed

    Masson, M; Angot, H; Le Bescond, C; Launay, M; Dabrin, A; Miège, C; Le Coz, J; Coquery, M

    2018-05-10

    Monitoring hydrophobic contaminants in surface freshwaters requires measuring contaminant concentrations in the particulate fraction (sediment or suspended particulate matter, SPM) of the water column. Particle traps (PTs) have been recently developed to sample SPM as cost-efficient, easy to operate and time-integrative tools. But the representativeness of SPM collected with PTs is not fully understood, notably in terms of grain size distribution and particulate organic carbon (POC) content, which could both skew particulate contaminant concentrations. The aim of this study was to evaluate the representativeness of SPM characteristics (i.e. grain size distribution and POC content) and associated contaminants (i.e. polychlorinated biphenyls, PCBs; mercury, Hg) in samples collected in a large river using PTs for differing hydrological conditions. Samples collected using PTs (n = 74) were compared with samples collected during the same time period by continuous flow centrifugation (CFC). The grain size distribution of PT samples shifted with increasing water discharge: the proportion of very fine silts (2-6 μm) decreased while that of coarse silts (27-74 μm) increased. Regardless of water discharge, POC contents were different likely due to integration by PT of high POC-content phytoplankton blooms or low POC-content flood events. Differences in PCBs and Hg concentrations were usually within the range of analytical uncertainties and could not be related to grain size or POC content shifts. Occasional Hg-enriched inputs may have led to higher Hg concentrations in a few PT samples (n = 4) which highlights the time-integrative capacity of the PTs. The differences of annual Hg and PCB fluxes calculated either from PT samples or CFC samples were generally below 20%. Despite some inherent limitations (e.g. grain size distribution bias), our findings suggest that PT sampling is a valuable technique to assess reliable spatial and temporal trends of particulate contaminants such as PCBs and Hg within a river monitoring network. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Structural and Magnetic Properties of Dilute Ca²⁺ Doped Iron Oxide Nanoparticles.

    PubMed

    Samar Layek; Rout, K; Mohapatra, M; Anand, S; Verma, H C

    2016-01-01

    Undoped and calcium-substituted hematite (α-Fe₂O₃) nanoparticles were synthesized by a surfactant-directed co-precipitation and post-annealing method. The annealed nanoparticles were found to be single phase and to crystallize in the rhombohedral structure with space group R3c, as confirmed by Rietveld refinement of the X-ray diffraction (XRD) data. Average crystallite sizes are calculated to be 20 to 30 nm and 50 to 60 nm for the nanoparticles annealed at 400 and 600 °C, respectively. The Mössbauer spectra of all the nanoparticles could be fitted with a sextet corresponding to a single magnetic state of the iron atoms in the Fe³⁺ state in the hematite matrix. The FTIR and Raman spectra of all the samples correspond to the specific modes of α-Fe₂O₃. UV-Vis spectra of the annealed samples showed broad peaks in the range of 525-630 nm resulting from the spin-forbidden ligand field transition together with the spin-flip transition among the 2t₂g states. The estimated band gap energies were in the range of 1.6 to 1.9 eV, which is much lower than the reported values for nano hematite. From the room temperature magnetic hysteresis loop measurements, weak ferromagnetic behavior is observed in all undoped and Ca²⁺ doped hematite samples. The Morin temperature (T(M)) is calculated to be 257 and 237 K for the 1.45% doped samples with particle sizes of 54 and 27 nm, respectively. The sample with a Ca content of 1.45 wt% annealed at 400 °C showed particles of different shapes, including both quasi-spherical and rod-shaped particles. On annealing the same sample at 600 °C, the nanorods collapsed to form bigger spherical and ellipsoidal particles.

  5. Enhanced Ligand Sampling for Relative Protein–Ligand Binding Free Energy Calculations

    PubMed Central

    2016-01-01

    Free energy calculations are used to study how strongly potential drug molecules interact with their target receptors. The accuracy of these calculations depends on the accuracy of the molecular dynamics (MD) force field as well as proper sampling of the major conformations of each molecule. However, proper sampling of ligand conformations can be difficult when there are large barriers separating the major ligand conformations. An example of this is for ligands with an asymmetrically substituted phenyl ring, where the presence of protein loops hinders the proper sampling of the different ring conformations. These ring conformations become more difficult to sample when the size of the functional groups attached to the ring increases. The Adaptive Integration Method (AIM) has been developed, which adaptively changes the alchemical coupling parameter λ during the MD simulation so that conformations sampled at one λ can aid sampling at the other λ values. The Accelerated Adaptive Integration Method (AcclAIM) builds on AIM by lowering potential barriers for specific degrees of freedom at intermediate λ values. However, these methods may not work when there are very large barriers separating the major ligand conformations. In this work, we describe a modification to AIM that improves sampling of the different ring conformations, even when there is a very large barrier between them. This method combines AIM with conformational Monte Carlo sampling, giving improved convergence of ring populations and the resulting free energy. This method, called AIM/MC, is applied to study the relative binding free energy for a pair of ligands that bind to thrombin and a different pair of ligands that bind to aspartyl protease β-APP cleaving enzyme 1 (BACE1). These protein–ligand binding free energy calculations illustrate the improvements in conformational sampling and the convergence of the free energy compared to both AIM and AcclAIM. PMID:25906170

  6. Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection of recirculation fan; Sizing high efficiency mist eliminator; Sizing electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.

  7. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples, consisting of Quaternary bottom sediments, were collected from the shallow marine zone off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain-size distribution and statistical relationships. Basic statistical parameters (mean, standard deviation, skewness and kurtosis) were calculated and used to differentiate the depositional environment of the sediments and to assess whether they were derived from a beach or a river environment. The sediments varied from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots of the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern suggested that three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, controlled the sediment distribution pattern in various ways.
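
    The moment-based versions of the four statistics named above can be computed as in the sketch below; the grain diameters are synthetic, and the graphic (Folk and Ward) measures used in many sedimentology studies would rely on percentiles instead.

      # Sketch: moment-based grain-size statistics on sizes expressed in phi units
      # (phi = -log2(diameter in mm)). The grain-size data are synthetic.
      import numpy as np

      rng = np.random.default_rng(2)
      diam_mm = rng.lognormal(mean=-2.0, sigma=0.8, size=500)   # synthetic diameters
      phi = -np.log2(diam_mm)

      mean_phi = phi.mean()
      sorting = phi.std(ddof=1)                     # standard deviation = sorting
      skewness = np.mean(((phi - mean_phi) / sorting) ** 3)
      kurtosis = np.mean(((phi - mean_phi) / sorting) ** 4)

      print(f"mean = {mean_phi:.2f} phi, sorting = {sorting:.2f}, "
            f"skewness = {skewness:.2f}, kurtosis = {kurtosis:.2f}")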

  8. Quantifying Density Fluctuations in Volumes of All Shapes and Sizes Using Indirect Umbrella Sampling

    NASA Astrophysics Data System (ADS)

    Patel, Amish J.; Varilly, Patrick; Chandler, David; Garde, Shekhar

    2011-10-01

    Water density fluctuations are an important statistical mechanical observable and are related to many-body correlations, as well as hydrophobic hydration and interactions. Local water density fluctuations at a solid-water surface have also been proposed as a measure of its hydrophobicity. These fluctuations can be quantified by calculating the probability, P_v(N), of observing N waters in a probe volume of interest v. When v is large, calculating P_v(N) using molecular dynamics simulations is challenging, as the probability of observing very few waters is exponentially small, and the standard procedure for overcoming this problem (umbrella sampling in N) leads to undesirable impulsive forces. Patel et al. (J. Phys. Chem. B 114:1632, 2010) have recently developed an indirect umbrella sampling (INDUS) method, that samples a coarse-grained particle number to obtain P_v(N) in cuboidal volumes. Here, we present and demonstrate an extension of that approach to volumes of other basic shapes, like spheres and cylinders, as well as to collections of such volumes. We further describe the implementation of INDUS in the NPT ensemble and calculate P_v(N) distributions over a broad range of pressures. Our method may be of particular interest in characterizing the hydrophobicity of interfaces of proteins, nanotubes and related systems.

  9. Evaluation of hydraulic conductivities calculated from multi-port permeameter measurements

    USGS Publications Warehouse

    Wolf, Steven H.; Celia, Michael A.; Hess, Kathryn M.

    1991-01-01

    A multiport permeameter was developed for use in estimating hydraulic conductivity over intact sections of aquifer core using the core liner as the permeameter body. Six cores obtained from one borehole through the upper 9 m of a stratified glacial-outwash aquifer were used to evaluate the reliability of the permeameter. Radiographs of the cores were used to assess core integrity and to locate 5- to 10-cm sections of similar grain size for estimation of hydraulic conductivity. After extensive testing of the permeameter, hydraulic conductivities were determined for 83 sections of the six cores. Other measurement techniques included permeameter measurements on repacked sections of core, estimates based on grain-size analyses, and estimates based on borehole flowmeter measurements. Permeameter measurements of 33 sections of core that had been extruded, homogenized, and repacked did not differ significantly from the original measurements. Hydraulic conductivities estimated from grain-size distributions were slightly higher than those calculated from permeameter measurements; the significance of the difference depended on the estimating equation used. Hydraulic conductivities calculated from field measurements, using a borehole flowmeter in the borehole from which the cores were extracted, were significantly higher than those calculated from laboratory measurements and more closely agreed with independent estimates of hydraulic conductivity based on tracer movement near the borehole. This indicates that hydraulic conductivities based on laboratory measurements of core samples may underestimate actual field hydraulic conductivities in this type of stratified glacial-outwash aquifer.
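
    One widely used grain-size-based estimate of the kind compared above is the Hazen-type relation K ~ C*d10^2 (K in cm/s, d10 in mm); the coefficient and grain sizes below are illustrative, and the abstract does not state which estimating equation the study used.

      # Sketch: Hazen-type estimate of hydraulic conductivity from the grain-size
      # distribution, K [cm/s] ~ C * d10^2 with d10 in mm. C and d10 are illustrative;
      # other empirical equations (e.g. Kozeny-Carman) give different estimates.
      def hazen_k(d10_mm: float, c: float = 1.0) -> float:
          """Return hydraulic conductivity in cm/s for effective grain size d10 (mm)."""
          return c * d10_mm ** 2

      for d10 in (0.1, 0.2, 0.5):          # fine to medium sand (illustrative)
          k_cms = hazen_k(d10)
          print(f"d10 = {d10} mm -> K ~ {k_cms:.3f} cm/s ({k_cms * 864:.1f} m/day)")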

  10. Determining Representative Elementary Volume For Multiple Petrophysical Parameters using a Convex Hull Analysis of Digital Rock Data

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.

    2016-12-01

    Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the `Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
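
    A minimal sketch of the convex-hull criterion described above, using scipy on synthetic (porosity, log-permeability) pairs standing in for sub-samples of a micro-CT image; in the study the hull area shrinks as the sub-volume approaches the REV.

      # Sketch: convex-hull area of (porosity, log10 k) point clouds as a function of
      # sub-volume size. Synthetic data stand in for sub-samples cropped from an image;
      # the scatter (hull area) shrinks as the sub-volume approaches the REV.
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(3)
      for subvolume_voxels in (50, 100, 200, 400):
          spread = 1.0 / np.sqrt(subvolume_voxels)          # heterogeneity shrinks with size
          porosity = 0.20 + spread * rng.normal(size=30)
          log_k = -12.0 + 5 * spread * rng.normal(size=30)  # log10 permeability (arbitrary)
          area = ConvexHull(np.column_stack([porosity, log_k])).volume  # 2D "volume" = area
          print(f"sub-volume {subvolume_voxels}^3: hull area = {area:.4f}")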

  11. Intraclass Correlation Coefficients for Obesity Indicators and Energy Balance-Related Behaviors among New York City Public Elementary Schools

    ERIC Educational Resources Information Center

    Gray, Heewon Lee; Burgermaster, Marissa; Tipton, Elizabeth; Contento, Isobel R.; Koch, Pamela A.; Di Noia, Jennifer

    2016-01-01

    Objective: Sample size and statistical power calculation should consider clustering effects when schools are the unit of randomization in intervention studies. The objective of the current study was to investigate how student outcomes are clustered within schools in an obesity prevention trial. Method: Baseline data from the Food, Health &…

  12. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Treesearch

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal...

  13. Green synthesis and characterization of ANbO3 (A = Na, K) nanopowders fabricated using a biopolymer

    NASA Astrophysics Data System (ADS)

    Khorrami, Gh. H.; Mousavi, M.; Khayatian, S. A.; Kompany, A.; Khorsand Zak, A.

    2017-10-01

    Lead-free sodium niobate (NaNbO3, NN) and potassium niobate (KNbO3, KN) nanopowders were successfully synthesized by a simple and green synthesis process in gelatin media. Gelatin, which is a biopolymer, was used as a stabilizer. In order to determine the lowest calcination temperature needed to obtain pure NN and KN nanopowders, the produced gels were analyzed by thermogravimetric analysis (TGA). The produced gels were calcined at 500 °C and 600 °C. The structural and optical properties of the prepared powders were examined using the X-ray diffraction (XRD) technique, transmission electron microscopy (TEM), and UV-Vis spectroscopy. The XRD results revealed that pure-phase NN and KN nanopowders were formed at low calcination temperatures of 500 °C and 600 °C, respectively. The Scherrer formula and the size-strain plot (SSP) method were employed to estimate the crystallite size and lattice strain of the samples. The TEM images show that the NN and KN samples calcined at 600 °C have a cubic shape with average particle sizes of 60.95 and 39.29 nm, respectively. The optical bandgap energy of the samples was calculated using the UV-Vis diffuse reflectance spectra of the samples and the Kubelka-Munk relation.
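
    A sketch of the usual Kubelka-Munk band-gap workflow referred to above: transform reflectance with F(R) = (1 - R)^2 / (2R), build a Tauc-type plot of (F(R)*E)^2 versus photon energy E for a direct allowed transition, and extrapolate the linear edge to zero; the reflectance data are synthetic and the transition type is an assumption, not the paper's stated choice.

      # Sketch of a Kubelka-Munk / Tauc band-gap estimate from diffuse reflectance.
      import numpy as np

      E = np.linspace(2.8, 4.0, 300)                         # photon energy, eV
      Eg_true = 3.2                                          # synthetic "true" gap
      K_over_S = 8.0 * np.sqrt(np.clip(E - Eg_true, 0, None)) / E + 1e-4

      # Synthetic reflectance consistent with Kubelka-Munk: F(R) = (1-R)^2/(2R) = K/S
      R = 1 + K_over_S - np.sqrt(K_over_S ** 2 + 2 * K_over_S)

      # Analysis step: Kubelka-Munk transform, direct-transition Tauc plot, then
      # linear extrapolation of the absorption edge to zero.
      F = (1 - R) ** 2 / (2 * R)
      y = (F * E) ** 2
      mask = (y > 0.2 * y.max()) & (y < 0.9 * y.max())
      slope, intercept = np.polyfit(E[mask], y[mask], 1)
      print(f"estimated direct band gap ~ {-intercept / slope:.2f} eV")   # ~3.2 eV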

  14. Constraining pre-eruptive volatile contents and degassing histories in submarine lavas

    NASA Astrophysics Data System (ADS)

    Jones, M.; Soule, S. A.; Liao, Y.; Le Roux, V.; Brodsky, H.; Kurz, M. D.

    2017-12-01

    Vesicle textures in submarine lavas have been used to calculate total (pre-eruption) volatile concentrations in mid-ocean ridge basalts (MORB), which provide constraints on upper mantle volatile contents and CO2 fluxes along the global MOR. In this study, we evaluate vesicle size distributions and volatile contents in a suite of 20 MORB samples, which span the range of typical vesicularities and bubble number densities observed in global MORB. We demonstrate that 2D imaging coupled with traditional stereological methods closely reproduces vesicle size distributions and vesicularities measured using 3D x-ray micro-computed tomography (μ-CT). We further demonstrate that x-ray μ-CT provides additional information about bubble deformation and clustering that are linked to bubble nucleation and lava emplacement dynamics. The validation of vesicularity measurements allows us to evaluate the methods for calculating total CO2 concentrations in MORB using dissolved volatile content (SIMS), vesicularity, vesicle gas density, and equations of state. We model bubble and melt contraction during lava quenching and show that the melt viscosity prevents bubbles from reaching equilibrium at the glass transition temperature. Thus, we suggest that higher temperatures should be used to calculate exsolved volatile concentrations based on observed vesicularities. Our revised method reconciles discrepancies between exsolved volatile contents measured by gas manometry and calculated from vesicularity. In addition, our revised method suggests that some previous studies may have overestimated MORB volatile concentrations by up to a factor of two, with the greatest differences in samples with the highest vesicularities (e.g., `popping rock' 2πD43). These new results have important implications for CO2/Nb of `undegassed' MORB and global ridge CO2 fluxes. Lastly, our revised method yields constant total CO2 concentrations in sample suites from individual MOR eruptions that experienced syn-eruptive degassing. These results imply closed-system degassing during magma ascent and emplacement following equilibration at the depth of melt storage in the crust.

  15. Historical Population Estimates For Several Fish Species At Offshore Oil and Gas Structures in the US Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Gitschlag, G.

    2016-02-01

    Population estimates were calculated for four fish species occurring at offshore oil and gas structures in water depths of 14-32 m off the Louisiana and upper Texas coasts in the US Gulf of Mexico. From 1993-1999 sampling was conducted at eight offshore platforms in conjunction with explosive salvage of the structures. To estimate fish population size prior to detonation of explosives, a fish mark-recapture study was conducted. Fish were captured on rod and reel using assorted hook sizes. Traps were occasionally used to supplement catches. Fish were tagged below the dorsal fin with plastic t-bar tags using tagging guns. Only fish that were alive and in good condition were released. Recapture sampling was conducted after explosives were detonated during salvage operations. Personnel operating from inflatable boats used dip nets to collect all dead fish that floated to the surface. Divers collected representative samples of dead fish that sank to the sea floor. Data provided estimates for red snapper (Lutjanus campechanus), Atlantic spadefish (Chaetodipterus faber), gray triggerfish (Balistes capriscus), and blue runner (Caranx crysos) at one or more of the eight platforms studied. At seven platforms, population size for red snapper was calculated at 503-1,943 with a 95% CI of 478. Abundance estimates for Atlantic spadefish at three platforms ranged from 1,432-1,782 with a 95% CI of 473. At three platforms, population size of gray triggerfish was 63-129 with a 95% CI of 82. Blue runner abundance at one platform was 558. Unlike the other three species which occur close to the platforms, blue runner range widely and recapture of this species was dependent on fish schools being in close proximity to the platform at the time explosives were detonated. Tag recapture was as high as 73% for red snapper at one structure studied.
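
    A sketch of a standard mark-recapture estimator of the kind used above, the Chapman (bias-corrected Lincoln-Petersen) form with a normal-approximation 95% CI; the counts are illustrative, and the study's exact estimator is not stated in the abstract.

      # Sketch: Chapman (bias-corrected Lincoln-Petersen) mark-recapture estimate with a
      # normal-approximation 95% CI. M, C and R are illustrative counts, not study data.
      from math import sqrt

      def chapman(M: int, C: int, R: int):
          """M marked and released, C caught on recapture, R of them marked."""
          n_hat = (M + 1) * (C + 1) / (R + 1) - 1
          var = (M + 1) * (C + 1) * (M - R) * (C - R) / ((R + 1) ** 2 * (R + 2))
          half = 1.96 * sqrt(var)
          return n_hat, (n_hat - half, n_hat + half)

      n_hat, ci = chapman(M=150, C=200, R=40)
      print(f"N ~ {n_hat:.0f}, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")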

  16. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    PubMed

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability, including the time interval, sample size, and statistical methods used in patient-reported outcome measures in older people, and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability. Copyright © 2017 Elsevier Ltd. All rights reserved.
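
    For reference, the sketch below computes one common test-retest statistic, the two-way random-effects, absolute-agreement, single-measurement ICC (Shrout and Fleiss ICC(2,1)), from a synthetic subjects-by-occasions matrix; the reviewed studies used a variety of ICC models.

      # Sketch: ICC(2,1) (two-way random effects, absolute agreement, single measure)
      # for a subjects x occasions test-retest matrix. The ratings are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      n_subjects, k = 80, 2                                   # e.g. test and retest
      true_score = rng.normal(50, 10, size=(n_subjects, 1))
      ratings = true_score + rng.normal(0, 4, size=(n_subjects, k))   # add measurement error

      grand = ratings.mean()
      row_means = ratings.mean(axis=1)
      col_means = ratings.mean(axis=0)

      ss_total = np.sum((ratings - grand) ** 2)
      ss_rows = k * np.sum((row_means - grand) ** 2)
      ss_cols = n_subjects * np.sum((col_means - grand) ** 2)
      ss_err = ss_total - ss_rows - ss_cols

      ms_r = ss_rows / (n_subjects - 1)
      ms_c = ss_cols / (k - 1)
      ms_e = ss_err / ((n_subjects - 1) * (k - 1))

      icc_2_1 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n_subjects)
      print(f"ICC(2,1) = {icc_2_1:.3f}")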

  17. Particulate, colloidal, and dissolved-phase associations of plutonium and americium in a water sample from well 1587 at the Rocky Flats Plant, Colorado

    USGS Publications Warehouse

    Harnish, R.A.; McKnight, Diane M.; Ranville, James F.

    1994-01-01

    In November 1991, the initial phase of a study to determine the dominant aqueous phases that control the transport of plutonium (Pu), americium (Am), and uranium (U) in surface and groundwater at the Rocky Flats Plant was undertaken by the U.S. Geological Survey. By use of the techniques of stirred-cell spiral-flow filtration and crossflow ultrafiltration, particles of three size fractions were collected from a 60-liter sample of water from well 1587 at the Rocky Flats Plant. These samples and corresponding filtrate samples were analyzed for Pu and Am. As calculated from the analysis of filtrates, 65 percent of Pu 239 and 240 activity in the sample was associated with particulate and largest colloidal size fractions. Particulate (22 percent) and colloidal (43 percent) fractions were determined to have significant activities in relation to whole-water Pu activity. Am and Pu 238 activities were too low to be analyzed. Examination and analyses of the particulate and colloidal phases indicated the presence of mineral species (iron oxyhydroxides and clay minerals) and natural organic matter that can facilitate the transport of actinides in ground water. High concentrations of the transition metals copper and zinc in the smallest colloid fractions strongly indicate a potential for organic complexation of metals, and potentially of actinides, in this size fraction.

  18. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
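
    A sketch of the prospective power analysis the authors endorse, sizing a two-group comparison for a minimum biologically significant effect; the effect size is illustrative and the statsmodels call is one of several ways to do this.

      # Sketch: prospective (a priori) power analysis - per-group sample size needed to
      # detect a minimum biologically significant effect with 80% power at alpha = 0.05.
      # The effect size (Cohen's d = 0.5) is illustrative.
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                         alternative='two-sided')
      print(f"n per group ~ {n_per_group:.0f}")   # about 64 per group

      # Running the same machinery "retrospectively" with the observed, non-significant
      # effect size just restates the p-value, which is why the authors advise against it.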

  19. Production of medical radioactive isotopes using KIPT electron driven subcritical facility.

    PubMed

    Talamo, Alberto; Gohar, Yousry

    2008-05-01

    Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), plans to construct an electron accelerator driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed for characterizing the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes have been considered and all transmutation channels are used, including (n, gamma), (n, 2n), (n, p), and (gamma, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates have been examined to define the irradiation sample size for each parent isotope.
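
    The unperturbed-flux estimate described in the first part can be sketched with the standard activation relation R = N*sigma*phi and activity A(t) = R*(1 - exp(-lambda*t)); the target, cross-section, flux and irradiation times below are illustrative assumptions and ignore the self-shielding that the second part addresses.

      # Sketch of an unperturbed-flux activation estimate: reaction rate R = N*sigma*phi,
      # activity grown in during irradiation A(t) = R*(1 - exp(-lambda*t)).
      # Target, cross-section and flux values are illustrative, not facility parameters.
      from math import exp, log

      N_A = 6.022e23
      mass_g, molar_mass = 1.0, 98.0        # ~1 g of a Mo-98 target (illustrative)
      sigma_cm2 = 0.13e-24                  # (n,gamma) cross-section ~0.13 barn (illustrative)
      phi = 1.0e13                          # neutron flux, n/cm^2/s (illustrative)
      half_life_s = 66.0 * 3600             # product half-life, ~66 h for Mo-99

      N_atoms = mass_g / molar_mass * N_A
      rate = N_atoms * sigma_cm2 * phi      # reactions per second
      lam = log(2) / half_life_s

      for t_hours in (24, 72, 168):
          activity_bq = rate * (1 - exp(-lam * t_hours * 3600))
          print(f"after {t_hours:4d} h: activity ~ {activity_bq:.2e} Bq")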

  20. Factors associated to acceptable treatment adherence among children with chronic kidney disease in Guatemala

    PubMed Central

    Cerón, Alejandro; Méndez-Alburez, Luis Pablo; Lou-Meda, Randall

    2017-01-01

    Pediatric patients with Chronic Kidney Disease face several barriers to medication adherence that, if addressed, may improve clinical care outcomes. A cross sectional questionnaire was administered in the Foundation for Children with Kidney Disease (FUNDANIER, Guatemala City) from September of 2015 to April of 2016 to identify the predisposing factors, enabling factors and need factors related to medication adherence. Sample size was calculated using simple random sampling with a confidence level of 95%, confidence interval of 0.05 and a proportion of 87%. A total of 103 participants responded to the questionnaire (calculated sample size was 96). Independent variables were defined and described, and the bivariate relationship to dependent variables was determined using Odds Ratio. Multivariate analysis was carried out using logistic regression. The mean adherence of study population was 78% (SD 0.08, max = 96%, min = 55%). The mean adherence in transplant patients was 82% (SD 7.8, max 96%, min 63%), and the mean adherence in dialysis patients was 76% (SD 7.8 max 90%, min 55%). Adherence was positively associated to the mother’s educational level and to higher monthly household income. Together predisposing, enabling and need factors illustrate the complexities surrounding adherence in this pediatric CKD population. Public policy strategies aimed at improving access to comprehensive treatment regimens may facilitate treatment access, alleviating economic strain on caregivers and may improve adherence outcomes. PMID:29036228

  1. Methodological approach for substantiating disease freedom in a heterogeneous small population. Application to ovine scrapie, a disease with a strong genetic susceptibility.

    PubMed

    Martinez, Marie-José; Durand, Benoit; Calavas, Didier; Ducrot, Christian

    2010-06-01

    Demonstrating disease freedom is becoming important in different fields including animal disease control. Most methods consider sampling only from a homogeneous population in which each animal has the same probability of becoming infected. In this paper, we propose a new methodology to calculate the probability of detecting the disease if it is present in a heterogeneous population of small size with potentially different risk groups, differences in risk being defined using relative risks. To calculate this probability, for each possible arrangement of the infected animals in the different groups, the probability that all the animals tested are test-negative given this arrangement is multiplied by the probability that this arrangement occurs. The probability formula is developed using the assumption of a perfect test and hypergeometric sampling for finite small size populations. The methodology is applied to scrapie, a disease affecting small ruminants and characterized in sheep by a strong genetic susceptibility defining different risk groups. It illustrates that the genotypes of the tested animals influence heavily the confidence level of detecting scrapie. The results present the statistical power for substantiating disease freedom in a small heterogeneous population as a function of the design prevalence, the structure of the sample tested, the structure of the herd and the associated relative risks. (c) 2010 Elsevier B.V. All rights reserved.
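
    A sketch of the detection-probability idea: for a homogeneous herd and a perfect test the probability is the exact hypergeometric expression, while for the heterogeneous case the arrangements of infected animals across risk groups are approximated here by Monte Carlo allocation weighted by group size times relative risk, rather than the paper's exact enumeration; all herd numbers are illustrative.

      # Sketch: probability of detecting at least one infected animal with a perfect test.
      # Homogeneous case: exact hypergeometric formula. Heterogeneous case: Monte Carlo
      # allocation of infected animals in proportion to group size x relative risk
      # (an approximation of the paper's exact enumeration). Numbers are illustrative.
      import numpy as np
      from math import comb

      def p_detect_homogeneous(N, D, n):
          return 1 - comb(N - D, n) / comb(N, n)

      def p_detect_heterogeneous(group_sizes, rel_risks, n_tested, D, trials=100_000, seed=0):
          rng = np.random.default_rng(seed)
          sizes, rr, tested = map(np.array, (group_sizes, rel_risks, n_tested))
          weights = sizes * rr / np.sum(sizes * rr)
          detected = 0.0
          for _ in range(trials):
              infected = np.minimum(rng.multinomial(D, weights), sizes)
              # P(all tested animals negative) for this arrangement, group by group
              p_miss = 1.0
              for Ng, Dg, ng in zip(sizes, infected, tested):
                  p_miss *= comb(Ng - Dg, ng) / comb(Ng, ng)
              detected += 1 - p_miss
          return detected / trials

      print(p_detect_homogeneous(N=200, D=4, n=60))                       # ~0.76
      print(p_detect_heterogeneous([120, 60, 20], [1, 3, 10], [20, 20, 20], D=4))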

  2. Factors associated to acceptable treatment adherence among children with chronic kidney disease in Guatemala.

    PubMed

    Ramay, Brooke M; Cerón, Alejandro; Méndez-Alburez, Luis Pablo; Lou-Meda, Randall

    2017-01-01

    Pediatric patients with Chronic Kidney Disease face several barriers to medication adherence that, if addressed, may improve clinical care outcomes. A cross sectional questionnaire was administered in the Foundation for Children with Kidney Disease (FUNDANIER, Guatemala City) from September of 2015 to April of 2016 to identify the predisposing factors, enabling factors and need factors related to medication adherence. Sample size was calculated using simple random sampling with a confidence level of 95%, confidence interval of 0.05 and a proportion of 87%. A total of 103 participants responded to the questionnaire (calculated sample size was 96). Independent variables were defined and described, and the bivariate relationship to dependent variables was determined using Odds Ratio. Multivariate analysis was carried out using logistic regression. The mean adherence of study population was 78% (SD 0.08, max = 96%, min = 55%). The mean adherence in transplant patients was 82% (SD 7.8, max 96%, min 63%), and the mean adherence in dialysis patients was 76% (SD 7.8 max 90%, min 55%). Adherence was positively associated to the mother's educational level and to higher monthly household income. Together predisposing, enabling and need factors illustrate the complexities surrounding adherence in this pediatric CKD population. Public policy strategies aimed at improving access to comprehensive treatment regimens may facilitate treatment access, alleviating economic strain on caregivers and may improve adherence outcomes.
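
    The reported sample size (96) is smaller than the roughly 174 given by the usual infinite-population formula for p = 0.87 and a ±0.05 margin, which is consistent with a finite-population correction having been applied; the population size in the sketch below is purely an illustrative assumption, as the abstract does not state it.

      # Sketch: sample size for estimating a proportion (p = 0.87, 95% confidence,
      # +/-0.05 margin), with an optional finite-population correction. The source
      # population size N is not stated in the abstract, so the value here is illustrative.
      from math import ceil
      from scipy.stats import norm

      def n_for_proportion(p, margin, conf=0.95, population=None):
          z = norm.ppf(1 - (1 - conf) / 2)
          n0 = z ** 2 * p * (1 - p) / margin ** 2            # infinite-population formula
          if population is None:
              return ceil(n0)
          return ceil(n0 * population / (n0 + population - 1))   # finite-population correction

      print(n_for_proportion(0.87, 0.05))                   # ~174 without a correction
      print(n_for_proportion(0.87, 0.05, population=210))   # ~96 with an assumed N of 210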

  3. Experimental investigation of inhomogeneities, nanoscopic phase separation, and magnetism in arc melted Fe-Cu metals with equal atomic ratio of the constituents

    NASA Astrophysics Data System (ADS)

    Hassnain Jaffari, G.; Aftab, M.; Anjum, D. H.; Cha, Dongkyu; Poirier, Gerald; Ismat Shah, S.

    2015-12-01

    The composition gradient and phase separation at the nanoscale have been investigated for arc-melted and solidified equiatomic Fe-Cu. Diffraction studies revealed that Fe and Cu exhibited phase separation with no trace of any mixing. Microscopy studies revealed that the immiscible Fe-Cu forms a dense bulk nanocomposite. The spatial distribution of Fe and Cu showed the existence of two distinct regions, i.e., Fe-rich and Cu-rich regions. The Fe-rich regions have Cu precipitates of various sizes and shapes, with Fe forming meshes or channels greater than 100 nm in size. On the other hand, the matrix of the Cu-rich regions formed strips with fine strands of nanosized Fe. The macromagnetic response of the system showed ferromagnetic behavior, with a magnetic moment equal to about 2.13 μB/Fe atom and a bulk-like, negligible coercivity over the temperature range of 5-300 K. The anisotropy constant has been extracted from various laws of approach to saturation and is equal to about 1350 J/m3. The inhomogeneous strain within the Cu and Fe crystallites has been calculated for the (unannealed) sample solidified after arc-melting. The annealed sample also exhibited local inhomogeneity, with removal of the inhomogeneous strain and no appreciable change in magnetic character; in the annealed sample the phase-separated Fe exhibited homogeneous strain.

  4. Fast calculation method for computer-generated cylindrical holograms.

    PubMed

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. This is because the spatial resolution of output devices is not high enough; therefore, we have to make a large hologram or use a small object to satisfy the sampling theorem. In addition, when calculating such a large fringe pattern, the amount of computation increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  5. First-principles calculation of the ideal strength of noble metals along the <100> direction

    NASA Astrophysics Data System (ADS)

    Bautista-Hernández, A.; López-Fuentes, M.; Pacheco-Espejel, V.; Rivas-Silva, J. F.

    2005-04-01

    We present first-principles calculations of the ideal strength along the <100> direction for the noble metals Cu, Ag and Au. First, we obtain the structural parameters (cell parameters, bulk modulus) for each metal studied. We then deform the cell along the <100> direction, calculating the total energy and the stress tensor through the Hellmann-Feynman theorem while relaxing the unit cell in the directions perpendicular to the deformation. The calculated cell constants differ by 1.3% from the experimental data. The maximum ideal strengths are 29.6, 17 and 19 GPa for Cu, Ag and Au, respectively. Meanwhile, the calculated elastic moduli are 106 (Cu), 71 (Ag) and 45 GPa (Au), in agreement with the experimental values for polycrystalline samples. The values of the maximum strength are explained by the optimum volume values due to the atomic radius size of each element.

  6. Synthesis of Mn doped ZnS nanocrystals: Crystallographic and morphological study

    NASA Astrophysics Data System (ADS)

    Shaikh, Azharuddin Z.; Shirsath, Narendra B.; Sonawane, Prabhakar S.

    2018-05-01

    The influence of doping concentration on the physical properties of ZnS nanocrystals synthesized using the coprecipitation method at room temperature is reported in this paper. In particular, we have studied the structural properties of Zn1-xMnxS (x=0.01, 0.03, 0.05) by X-ray diffraction. X-ray peak broadening analysis was used to calculate the crystallite sizes, lattice parameters, number of unit cells per particle and volume of the unit cell. Crystalline ZnS with a cubic structure is confirmed by the XRD results. The grain size of the pure and Mn doped samples was found to be in the range of 7 nm to 9 nm. All the calculated physical parameters of the cubic ZnS nanocrystals are similar to the standard values. Scanning electron microscopy (SEM) revealed that the synthesized nanocrystals are well crystalline and possess a cubic phase.
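
    The Scherrer estimate used above follows D = K*lambda/(beta*cos(theta)); the sketch below uses an illustrative Cu Kα peak position and width (beta must be the instrument-corrected FWHM in radians), not the paper's measured values.

      # Sketch: Scherrer crystallite-size estimate D = K * lambda / (beta * cos(theta)).
      # Peak position and width are illustrative.
      from math import radians, cos

      def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
          beta = radians(fwhm_deg)              # instrument-corrected FWHM in radians
          theta = radians(two_theta_deg / 2)
          return k * wavelength_nm / (beta * cos(theta))

      # Example: cubic ZnS (111) reflection near 2theta = 28.6 deg with ~1.0 deg FWHM
      print(f"D ~ {scherrer_size_nm(28.6, 1.0):.1f} nm")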

  7. Observed intra-cluster correlation coefficients in a cluster survey sample of patient encounters in general practice in Australia

    PubMed Central

    Knox, Stephanie A; Chondros, Patty

    2004-01-01

    Background Cluster sample study designs are cost effective, however cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs) the estimated intra-cluster correlation coefficients were found to be consistent across samples. Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
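
    These coefficients enter sample size calculations through the usual design effect DEFF = 1 + (m - 1)*ICC for m encounters per GP, as sketched below; the simple-random-sample size is an illustrative placeholder, while m = 100 and the ICC range come from the abstract.

      # Sketch: inflating a simple-random-sample size by the design effect
      # DEFF = 1 + (m - 1) * ICC for a cluster sample of m encounters per GP.
      from math import ceil

      def clustered_n(n_srs: int, m: int, icc: float) -> int:
          deff = 1 + (m - 1) * icc
          return ceil(n_srs * deff)

      n_srs = 400                   # size required under simple random sampling (illustrative)
      m = 100                       # encounters recorded per GP in BEACH
      for icc in (0.005, 0.059):    # range of morbidity ICCs reported above
          print(f"ICC = {icc}: need ~{clustered_n(n_srs, m, icc)} encounters")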

  8. Increasing Complexity of Clinical Research in Gastroenterology: Implications for Training Clinician-Scientists

    PubMed Central

    Scott, Frank I.; McConnell, Ryan A.; Lewis, Matthew E.; Lewis, James D.

    2014-01-01

    Background Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published research in gastroenterology from 1980 to 2010. Methods Three journals (Gastroenterology, Gut, and American Journal of Gastroenterology) were selected for evaluation given their continuous publication during the study period. Twenty original clinical articles were randomly selected from each journal from 1980, 1990, 2000, and 2010. Each article was assessed for topic studied, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, and reporting of statistical methods such as sample size calculations, p-values, confidence intervals, and advanced techniques such as bioinformatics or multivariate modeling. Research support with external funding was also recorded. Results A total of 240 articles were included in the study. From 1980 to 2010, there was a significant increase in analytic studies (p<0.001), clinical outcomes (p=0.003), median number of authors per article (p<0.001), multicenter collaboration (p<0.001), sample size (p<0.001), and external funding (p<0.001)). There was significantly increased reporting of p-values (p=0.01), confidence intervals (p<0.001), and power calculations (p<0.001). There was also increased utilization of large multicenter databases (p=0.001), multivariate analyses (p<0.001), and bioinformatics techniques (p=0.001). Conclusions There has been a dramatic increase in complexity in clinical research related to gastroenterology and hepatology over the last three decades. This increase highlights the need for advanced training of clinical investigators to conduct future research. PMID:22475957

  9. Magnetic Resonance Biomarkers in Neonatal Encephalopathy (MARBLE): a prospective multicountry study.

    PubMed

    Lally, Peter J; Pauliah, Shreela; Montaldo, Paolo; Chaban, Badr; Oliveira, Vania; Bainbridge, Alan; Soe, Aung; Pattnayak, Santosh; Clarke, Paul; Satodia, Prakash; Harigopal, Sundeep; Abernethy, Laurence J; Turner, Mark A; Huertas-Ceballos, Angela; Shankaran, Seetha; Thayyil, Sudhin

    2015-09-30

    Despite cooling, adverse outcomes are seen in up to half of the surviving infants after neonatal encephalopathy. A number of novel adjunct drug therapies with cooling have been shown to be highly neuroprotective in animal studies, and are currently awaiting clinical translation. Rigorous evaluation of these therapies in phase II trials using surrogate MR biomarkers may speed up their bench to bedside translation. A recent systematic review of single-centre studies has suggested that MR spectroscopy biomarkers offer the best promise; however, the prognostic accuracy of these biomarkers in cooled encephalopathic babies in a multicentre setting using different makes of MR scanner is not known. The MR scanners (3 T; Philips, Siemens, GE) in all the participating sites will be harmonised using phantom experiments and healthy adult volunteers before the start of the study. We will then recruit 180 encephalopathic infants treated with whole body cooling from the participating centres. MRI and spectroscopy will be performed within 2 weeks of birth. Neurodevelopmental outcomes will be assessed at 18-24 months of age. Agreement between MR cerebral biomarkers and neurodevelopmental outcome will be reported. The sample size is calculated using the 'rule of 10', generally used to calculate the sample size requirements for developing prognostic models. Considering 9 parameters, we require 9×10 adverse events, which suggests that a total sample size of 180 is required. Human Research Ethics Committee approvals have been received from Brent Research Ethics Committee (London), and from Imperial College London (Sponsor). We will submit the results of the study to relevant journals and offer national and international presentations. ClinicalTrials.gov Number: NCT01309711. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
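
    The 'rule of 10' arithmetic above can be made explicit, as in the sketch below: about 10 adverse-outcome events per candidate predictor, divided by the expected adverse-outcome rate (roughly one half of cooled survivors, as noted above), gives the target of 180.

      # Sketch of the 'rule of 10' sample-size arithmetic: ~10 adverse-outcome events per
      # prognostic parameter, divided by the expected adverse-outcome rate.
      from math import ceil

      def rule_of_ten_n(n_parameters: int, event_rate: float, events_per_param: int = 10) -> int:
          return ceil(n_parameters * events_per_param / event_rate)

      print(rule_of_ten_n(9, 0.5))   # -> 180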

  10. Application of asymmetric flow-field flow fractionation to the characterization of colloidal dispersions undergoing aggregation.

    PubMed

    Lattuada, Marco; Olivo, Carlos; Gauer, Cornelius; Storti, Giuseppe; Morbidelli, Massimo

    2010-05-18

    The characterization of complex colloidal dispersions is a relevant and challenging problem in colloidal science. In this work, we show how asymmetric flow-field flow fractionation (AF4) coupled to static light scattering can be used for this purpose. As an example of complex colloidal dispersions, we have chosen two systems undergoing aggregation. The first one is a conventional polystyrene latex undergoing reaction-limited aggregation, which leads to the formation of fractal clusters with well-known structure. The second one is a dispersion of elastomeric colloidal particles made of a polymer with a low glass transition temperature, which undergoes coalescence upon aggregation. Samples are withdrawn during aggregation at fixed times, fractionated with AF4 using a two-angle static light scattering unit as a detector. We have shown that from the analysis of the ratio between the intensities of the scattered light at the two angles the cluster size distribution can be recovered, without any need for calibration based on standard elution times, provided that the geometry and scattering properties of particles and clusters are known. The nonfractionated samples have been characterized also by conventional static and dynamic light scattering to determine their average radius of gyration and hydrodynamic radius. The size distribution of coalescing particles has been investigated also through image analysis of cryo-scanning electron microscopy (SEM) pictures. The average radius of gyration and the average hydrodynamic radius of the nonfractionated samples have been calculated and successfully compared to the values obtained from the size distributions measured by AF4. In addition, the data obtained are also in good agreement with calculations made with population balance equations.

  11. Structural and magnetic properties of cobalt-doped iron oxide nanoparticles prepared by solution combustion method for biomedical applications.

    PubMed

    Venkatesan, Kaliyamoorthy; Rajan Babu, Dhanakotti; Kavya Bai, Mane Prabhu; Supriya, Ravi; Vidya, Radhakrishnan; Madeswaran, Saminathan; Anandan, Pandurangan; Arivanandhan, Mukannan; Hayakawa, Yasuhiro

    2015-01-01

    Cobalt-doped iron oxide nanoparticles were prepared by a solution combustion technique. The structural and magnetic properties of the prepared samples were also investigated. The average crystallite size of the cobalt ferrite (CoFe2O4) magnetic nanoparticles was calculated using the Scherrer equation and was found to be 16±5 nm. The particle size was measured by transmission electron microscopy and was found to match the crystallite size calculated by the Scherrer equation from the most intense (311) X-ray diffraction peak. The high-resolution transmission electron microscope image shows clear lattice fringes and the high crystallinity of the cobalt ferrite magnetic nanoparticles. The synthesized magnetic nanoparticles exhibited a saturation magnetization of 47 emu/g and a coercivity of 947 Oe. Anti-microbial tests showed that the cobalt ferrite nanoparticles perform well as an anti-bacterial agent. The affinity constant was determined for the nanoparticles, cytotoxicity studies were conducted for the cobalt ferrite nanoparticles at different concentrations, and the results are discussed.
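    A small sketch of the Scherrer estimate referred to above; the Cu K-alpha wavelength, shape factor and peak width used below are typical illustrative values, not the authors' measured inputs:

```python
import math

def scherrer_crystallite_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer equation D = K * lambda / (beta * cos(theta)), with beta the
    peak full width at half maximum in radians and theta half the 2-theta angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers for a (311) spinel peak near 2-theta = 35.5 degrees:
print(round(scherrer_crystallite_size(fwhm_deg=0.55, two_theta_deg=35.5), 1), "nm")
```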

  13. Growth and Electrical and Far-Infrared Properties of Wide Electron Wells in Semiconductors

    DTIC Science & Technology

    1994-04-15

    Only fragments of figures and tables survive in this record: the calculated well depth for a stated barrier doping level, 300 K electron profiles shown for four different cases, a mobility-versus-temperature characteristic for bulk-doped n-GaAs limited by size-effect scattering, and Hall-effect data for sample PBW 31.

  14. Error in the Sampling Area of an Optical Disdrometer: Consequences in Computing Rain Variables

    PubMed Central

    Fraile, R.; Castro, A.; Fernández-Raga, M.; Palencia, C.; Calvo, A. I.

    2013-01-01

    The aim of this study is to improve the estimation of the characteristic uncertainties of optical disdrometers, to calculate the effective sampling area according to the size of the drop, and to study how this influences the computation of other parameters, taking into account that the real sampling area is always smaller than the nominal area. For large raindrops (a little over 6 mm), the effective sampling area may be half the area indicated by the manufacturer. The error in the sampling area propagates to all the variables that depend on this surface, such as the rain intensity and the reflectivity factor. Both variables are underestimated if the sampling area is not corrected. For example, the rainfall intensity errors may be up to 50% for large drops, those slightly larger than 6 mm. The same occurs with the reflectivity values, which may be up to twice the reflectivity calculated using the uncorrected constant sampling area. The Z-R relationships appear to have little dependence on the sampling area, because both variables depend on it in the same way. These results were obtained by studying one particular rain event that occurred on April 16, 2006. PMID:23844393
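    A hedged sketch of how a drop-size-dependent effective sampling area enters the rain-intensity computation; the linear shrinkage of the effective area to ~50% at 6 mm is an assumption for illustration, not the correction derived in the paper, and the drop list and nominal area are invented:

```python
import math

def rain_rate_mm_per_h(drop_diameters_mm, nominal_area_m2, interval_s,
                       effective_area_fraction=lambda d: 1.0):
    """Rain intensity from counted drops: each drop of diameter D (mm) contributes
    a volume (pi/6) D^3 mm^3; dividing by the sampling area (m^2 -> 1e6 mm^2)
    gives a depth in mm, which is then scaled to one hour."""
    depth_mm = sum((math.pi / 6.0) * d ** 3 /
                   (effective_area_fraction(d) * nominal_area_m2 * 1e6)
                   for d in drop_diameters_mm)
    return depth_mm * 3600.0 / interval_s

drops = [0.8, 1.2, 2.5, 4.0, 6.2]                     # drop diameters in mm
uncorrected = rain_rate_mm_per_h(drops, nominal_area_m2=0.005, interval_s=60)
corrected = rain_rate_mm_per_h(drops, nominal_area_m2=0.005, interval_s=60,
                               effective_area_fraction=lambda d: max(0.5, 1.0 - d / 12.0))
print(round(uncorrected, 2), round(corrected, 2))     # the uncorrected rate is the smaller one
```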

  15. Size distribution of radioactive particles collected at Tokai, Japan 6 days after the nuclear accident.

    PubMed

    Miyamoto, Yutaka; Yasuda, Kenichiro; Magara, Masaaki

    2014-06-01

    Airborne radioactive particles released by the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident in 2011 were collected with a cascade low-pressure impactor at the Japan Atomic Energy Agency (JAEA) in Tokai, Japan, 114 km south of the FDNPP. Size-fractionated samples were collected twice, in the periods of March 17-April 1, 2011, and May 9-13, 2011. These size-fractionated samplings were carried out in the earliest days of the accident at a short distance from the FDNPP. Radioactivity of short-lived nuclides (half-lives of several tens of days) was determined, as well as that of (134)Cs and (137)Cs. The elemental composition of the size-fractionated samples was also measured. In the first collection, the activity median aerodynamic diameter (AMAD) of (129m)Te, (140)Ba, (134)Cs, (136)Cs and (137)Cs was 1.5-1.6 μm, while the diameter of (131)I was 0.45 μm. In the second collection, the (134)Cs and (137)Cs size distributions showed three peaks, at <0.5 μm, 0.94 μm, and 7.8 μm. The (134)Cs/(137)Cs ratio of the first collection was 1.02 in total, but the ratio in the fine fractions was 0.91. A distribution map of the (134)Cs/(137)Cs and (136)Cs/(137)Cs ratios was helpful in understanding the change in radioactive Cs composition. The Cs composition of the size fractions <0.43 μm and the composition in the 1.1-2.1 μm range (including the AMAD of 1.5-1.6 μm) were similar to the compositions of the fuels in reactors No. 1 and No. 3 at the FDNPP calculated using the ORIGEN-II code. The Cs composition of the samples collected in May 2011 was similar to the calculated composition of the reactor No. 2 fuel. The change in Cs composition implies that the radioactive Cs was released from the three reactors at the FDNPP via different processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
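    A brief sketch of how an activity median aerodynamic diameter (AMAD) can be estimated from cascade-impactor stage data by interpolating the cumulative activity fraction at 50% on a log-diameter scale; the stage cut diameters and activities below are invented for illustration:

```python
import numpy as np

def amad_from_stages(cut_diameters_um, stage_activities_bq):
    """Sort stages by cut diameter, build the cumulative activity fraction and
    return the diameter at which it crosses 50% (interpolated in log-diameter)."""
    d = np.asarray(cut_diameters_um, dtype=float)
    a = np.asarray(stage_activities_bq, dtype=float)
    order = np.argsort(d)
    d, a = d[order], a[order]
    cumulative_fraction = np.cumsum(a) / a.sum()
    return float(np.exp(np.interp(0.5, cumulative_fraction, np.log(d))))

# Hypothetical impactor stages (cut diameters, um) and 137Cs activities (Bq):
stages = [0.43, 0.65, 1.1, 2.1, 3.3, 4.7, 7.0, 11.0]
activities = [5, 8, 20, 30, 15, 10, 7, 5]
print(round(amad_from_stages(stages, activities), 2), "um")
```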

  16. Detection of silver nanoparticles in parsley by solid sampling high-resolution-continuum source atomic absorption spectrometry.

    PubMed

    Feichtmeier, Nadine S; Leopold, Kerstin

    2014-06-01

    In this work, we present a fast and simple approach for the detection of silver nanoparticles (AgNPs) in biological material (parsley) by solid sampling high-resolution-continuum source atomic absorption spectrometry (HR-CS AAS). A novel evaluation strategy was developed in order to distinguish AgNPs from ionic silver and for sizing of AgNPs. For this purpose, atomisation delay was introduced as a significant indicator of AgNPs, whereas atomisation rates allow distinction of 20-, 60-, and 80-nm AgNPs. Atomisation delays were found to be higher for samples containing silver ions than for samples containing silver nanoparticles. A maximum difference in atomisation delay normalised by the sample weight of 6.27 ± 0.96 s mg(-1) was obtained after optimisation of the furnace program of the AAS. For this purpose, a multivariate experimental design was used, varying atomisation temperature, atomisation heating rate and pyrolysis temperature. Atomisation rates were calculated as the slope at the first inflection point of the absorbance signals and correlated with the size of the AgNPs in the biological sample. Hence, solid sampling HR-CS AAS proved to be a promising tool for identifying and distinguishing silver nanoparticles from ionic silver directly in solid biological samples.

  17. Assessing methods to specify the target difference for a randomised controlled trial: DELTA (Difference ELicitation in TriAls) review.

    PubMed

    Cook, Jonathan A; Hislop, Jennifer; Adewuyi, Temitope E; Harrild, Kirsten; Altman, Douglas G; Ramsay, Craig R; Fraser, Cynthia; Buckley, Brian; Fayers, Peter; Harvey, Ian; Briggs, Andrew H; Norrie, John D; Fergusson, Dean; Ford, Ian; Vale, Luke D

    2014-05-01

    The randomised controlled trial (RCT) is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to the design and validity of a RCT is a calculation of the number of participants needed (the sample size). The value used to determine the sample size can be considered the 'target difference'. From both a scientific and an ethical standpoint, selecting an appropriate target difference is of crucial importance. Determination of the target difference, as opposed to statistical approaches to calculating the sample size, has been greatly neglected; although a variety of approaches have been proposed, the current state of the evidence is unclear. The aim was to provide an overview of the current evidence regarding specifying the target difference in a RCT sample size calculation. The specific objectives were to conduct a systematic review of methods for specifying a target difference; to evaluate current practice by surveying triallists; to develop guidance on specifying the target difference in a RCT; and to identify future research needs. The biomedical and social science databases searched were MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, Education Resources Information Center (ERIC) and Scopus for in-press publications. All were searched from 1966 or the earliest date of the database coverage and searches were undertaken between November 2010 and January 2011. There were three interlinked components: (1) systematic review of methods for specifying a target difference for RCTs - a comprehensive search strategy involving an electronic literature search of biomedical and some non-biomedical databases and clinical trials textbooks was carried out; (2) identification of current trial practice using two surveys of triallists - members of the Society for Clinical Trials (SCT) were invited to complete an online survey and respondents were asked about their awareness and use of, and willingness to recommend, methods; one individual per triallist group [UK Clinical Research Collaboration (UKCRC)-registered Clinical Trials Units (CTUs), Medical Research Council (MRC) UK Hubs for Trials Methodology Research and National Institute for Health Research (NIHR) UK Research Design Services (RDS)] was invited to complete a survey; (3) production of a structured guidance document to aid the design of future trials - the draft guidance was developed utilising the results of the systematic review and surveys by the project steering and advisory groups. Methodological review incorporating electronic searches, review of books and guidelines, two surveys of experts (membership of an international society and UK- and Ireland-based triallists) and development of guidance. The two surveys were sent out to membership of the SCT and UK- and Ireland-based triallists. The review focused on methods for specifying the target difference in a RCT. It was not restricted to any type of intervention or condition. Methods for specifying the target difference for a RCT were considered. The search identified 11,485 potentially relevant studies. In total, 1434 were selected for full-text assessment and 777 were included in the review. 
Seven methods to specify the target difference for a RCT were identified - anchor, distribution, health economic, opinion-seeking, pilot study, review of evidence base (RoEB) and standardised effect size (SES) - each having important variations in implementation. A total of 216 of the included studies used more than one method. A total of 180 (15%) responses to the SCT survey were received, representing 13 countries. Awareness of methods ranged from 38% (n = 69) for the health economic method to 90% (n = 162) for the pilot study. Of the 61 surveys sent out to UK triallist groups, 34 (56%) responses were received. Awareness ranged from 97% (n = 33) for the RoEB and pilot study methods to only 41% (n = 14) for the distribution method. Based on the most recent trial, all bar three groups (91%, n = 30) used a formal method. Guidance was developed on the use of each method and the reporting of the sample size calculation in a trial protocol and results paper. There is a clear need for greater use of formal methods to determine the target difference and better reporting of its specification. Raising the standard of RCT sample size calculations and the corresponding reporting of them would aid health professionals, patients, researchers and funders in judging the strength of the evidence and ensuring better use of scarce resources. The Medical Research Council UK and the National Institute for Health Research Joint Methodology Research programme.
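    For context, a minimal sketch of how a chosen target difference feeds the standard two-group sample size formula for a continuous outcome; this is the generic textbook calculation, not one of the seven elicitation methods identified by the review, and the numbers are illustrative:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(target_difference, sd, alpha=0.05, power=0.8):
    """Two-sided, two-sample comparison of means:
    n = 2 * sd^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2 per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (sd / target_difference) ** 2 * z ** 2)

# Detecting a target difference of 0.5 SD with 90% power and 5% two-sided alpha:
print(n_per_group(target_difference=0.5, sd=1.0, power=0.9))  # roughly 85 per group
```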

  18. Exploring the variability of aerosol particle composition in the Arctic: a study from the springtime ACCACIA campaign

    NASA Astrophysics Data System (ADS)

    Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.

    2015-10-01

    Single-particle compositional analysis of filter samples collected on board the FAAM BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size distributions and size-segregated particle compositions. These data were compared to corresponding data from wing-mounted optical particle counters, and reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified between air-mass trajectory cases based on HYSPLIT analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.

  19. A comparative appraisal of two equivalence tests for multiple standardized effects.

    PubMed

    Shieh, Gwowen

    2016-04-01

    Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two markedly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Continuous-time quantum Monte Carlo calculation of multiorbital vertex asymptotics

    NASA Astrophysics Data System (ADS)

    Kaufmann, Josef; Gunacker, Patrik; Held, Karsten

    2017-07-01

    We derive the equations for calculating the high-frequency asymptotics of the local two-particle vertex function for a multiorbital impurity model. These relate the asymptotics for a general local interaction to equal-time two-particle Green's functions, which we sample using continuous-time quantum Monte Carlo simulations with a worm algorithm. As specific examples we study the single-orbital Hubbard model and the three t2g orbitals of SrVO3 within dynamical mean-field theory (DMFT). We demonstrate how the knowledge of the high-frequency asymptotics reduces the statistical uncertainties of the vertex and further eliminates finite-box-size effects. The proposed method benefits the calculation of nonlocal susceptibilities in DMFT and diagrammatic extensions of DMFT.

  1. Body size, body proportions, and encephalization in a Middle Pleistocene archaic human from northern China.

    PubMed

    Rosenberg, Karen R; Zuné, Lü; Ruff, Christopher B

    2006-03-07

    The unusual discovery of associated cranial and postcranial elements from a single Middle Pleistocene fossil human allows us to calculate body proportions and relative cranial capacity (encephalization quotient) for that individual rather than rely on estimates based on sample means from unassociated specimens. The individual analyzed here (Jinniushan), from northeastern China and dated to 260,000 years ago, is the largest female specimen yet known in the human fossil record and has body proportions (body height relative to body breadth and relative limb length) typical of cold-adapted populations elsewhere in the world. Her encephalization quotient of 4.15 is similar to estimates for late Middle Pleistocene humans that are based on mean body size and mean brain size from unassociated specimens.

  2. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices.

    PubMed

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-05-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
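    A minimal sketch of the sampling idea described above: low-dose matrix entries are kept only with some probability p and re-weighted by 1/p, so the expected dose is preserved; the threshold, the single keep probability and the toy matrix are placeholders, not the probability distributions tested in the paper:

```python
import numpy as np

def sparsify_dose_matrix(dose_matrix, keep_prob, low_dose_threshold, seed=0):
    """Keep every entry above the threshold; keep low-dose entries only with
    probability keep_prob and scale the survivors by 1/keep_prob so that the
    expected dose (total energy) is unchanged."""
    rng = np.random.default_rng(seed)
    low = dose_matrix < low_dose_threshold
    keep = (~low) | (rng.random(dose_matrix.shape) < keep_prob)
    sparse = np.where(keep, dose_matrix, 0.0)
    sparse[low & keep] /= keep_prob        # unbiased re-weighting of sampled entries
    return sparse

D = np.abs(np.random.default_rng(1).normal(size=(2000, 50)))   # toy voxel-by-beamlet matrix
S = sparsify_dose_matrix(D, keep_prob=0.3, low_dose_threshold=0.5)
fluence = np.random.default_rng(2).random(50)
relative_error = np.abs(S @ fluence - D @ fluence) / (D @ fluence)
# Many entries are dropped while the mean relative dose error stays small:
print(1 - np.count_nonzero(S) / S.size, relative_error.mean())
```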

  3. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    PubMed

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

    The area under the curve of C-peptide following a 2-h mixed meal tolerance test, measured from baseline to 12 months after enrolment in 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
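    A generic illustration (not the authors' C-peptide model) of why adjusting for baseline predictors shrinks the required sample size: the residual outcome variance, and hence n, scales roughly by (1 - R^2); the R^2 and starting sample size below are assumptions for illustration:

```python
from math import ceil

def ancova_adjusted_n(n_unadjusted, r_squared):
    """Approximate sample size after covariate adjustment: n scales by (1 - R^2),
    where R is the multiple correlation of the baseline covariates with the outcome."""
    return ceil(n_unadjusted * (1.0 - r_squared))

# If age at diagnosis and baseline C-peptide jointly explained R^2 ~ 0.5 of the
# outcome variance (an assumed value), a 300-participant design would shrink to:
print(ancova_adjusted_n(300, 0.5))   # -> 150, i.e. the ~50% reduction reported above
```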

  4. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background: In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods: We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results: In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions: Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration: Current Controlled Trials ISRCTN61649292 PMID:21288326
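    A rough sketch of where a number like 750 patients can come from for a 30% hazard reduction at 80% power, using the standard Schoenfeld events formula for a 1:1 two-arm comparison; the assumed one-third event probability is illustrative and the paper's exact planning assumptions are not reproduced here:

```python
from math import ceil, log
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.8):
    """Schoenfeld's approximation for a 1:1 log-rank comparison:
    events = 4 * (z_{1-alpha/2} + z_{power})^2 / ln(HR)^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(4 * z ** 2 / log(hazard_ratio) ** 2)

events = required_events(hazard_ratio=0.7)     # 30% hazard reduction -> ~247 events
# If roughly a third of enrolled patients are expected to have the event during
# follow-up (an assumption for illustration), the trial needs on the order of:
print(events, ceil(events / 0.33))             # ~247 events, ~750 patients
```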

  5. Pelvic dimorphism in relation to body size and body size dimorphism in humans.

    PubMed

    Kurki, Helen K

    2011-12-01

    Many mammalian species display sexual dimorphism in the pelvis, where females possess larger dimensions of the obstetric (pelvic) canal than males. This is contrary to the general pattern of body size dimorphism, where males are larger than females. Pelvic dimorphism is often attributed to selection relating to parturition, or as a developmental consequence of secondary sexual differentiation (different allometric growth trajectories of each sex). Among anthropoid primates, species with higher body size dimorphism have higher pelvic dimorphism (in converse directions), which is consistent with an explanation of differential growth trajectories for pelvic dimorphism. This study investigates whether the pattern holds intraspecifically in humans by asking: Do human populations with high body size dimorphism also display high pelvic dimorphism? Previous research demonstrated that in some small-bodied populations, relative pelvic canal size can be larger than in large-bodied populations, while others have suggested that larger-bodied human populations display greater body size dimorphism. Eleven human skeletal samples (total N: male = 229, female = 208) were utilized, representing a range of body sizes and geographical regions. Skeletal measurements of the pelvis and femur were collected and indices of sexual dimorphism for the pelvis and femur were calculated for each sample [ln(M/F)]. Linear regression was used to examine the relationships between indices of pelvic and femoral size dimorphism, and between pelvic dimorphism and female femoral size. Contrary to expectations, the results suggest that pelvic dimorphism in humans is generally not correlated with body size dimorphism or female body size. These results indicate that divergent patterns of dimorphism exist for the pelvis and body size in humans. Implications for the evaluation of the evolution of pelvic dimorphism and rotational childbirth in Homo are considered. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Ambient air particulates and particulate-bound mercury Hg(p) concentrations: dry deposition study over Traffic, Airport, and Park (T.A.P.) areas during the years 2011-2012.

    PubMed

    Fang, Guor-Cheng; Lin, Yen-Heng; Zheng, Yu-Cheng

    2016-02-01

    The main purpose of this study was to monitor ambient air particles and particulate-bound mercury Hg(p) in total suspended particulate (TSP) concentrations and dry deposition at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling sites during the daytime and nighttime, from 2011 to 2012. In addition, the calculated/measured dry deposition flux ratios of ambient air particles and particulate-bound mercury Hg(p) were also studied with the Baklanov & Sorensen and Williams models. For a particle size of 10 μm, the Baklanov & Sorensen model yielded better predictions of dry deposition of ambient air particulates and particulate-bound mercury Hg(p) at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling sites during the daytime and nighttime sampling periods. However, for particulates with sizes of 20-23 μm, the results obtained in this study reveal that the Williams model provided better predictions for ambient air particulates and particulate-bound mercury Hg(p) at all sampling sites.

  7. Millimeter-Wave Absorption as a Quality Control Tool for M-Type Hexaferrite Nanopowders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCloy, John S.; Korolev, Konstantin A.; Crum, Jarrod V.

    2013-01-01

    Millimeter wave (MMW) absorption measurements have been conducted on commercial samples of large (micrometer-sized) and small (nanometer-sized) particles of BaFe12O19 and SrFe12O19 using a quasi-optical MMW spectrometer and a series of backward wave oscillators encompassing the 30-120 GHz range. Effective anisotropy of the particles calculated from the resonant absorption frequency indicates lower overall anisotropy in the nano-particles. Due to their high magnetocrystalline anisotropy, both BaFe12O19 and SrFe12O19 are expected to have spin resonances in the 45-55 GHz range. Several of the sampled BaFe12O19 powders did not have MMW absorptions, so they were further investigated by DC magnetization and x-ray diffraction to assess magnetic behavior and structure. The samples with absent MMW absorption contained primarily iron oxides, suggesting that MMW absorption could be used for quality control in hexaferrite powder manufacture.
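    A hedged sketch of backing an effective anisotropy field and constant out of a zero-field resonant absorption frequency for a uniaxial hexaferrite, assuming the natural-FMR relation f = gamma' * H_a with gamma' ~ 2.8 GHz/kOe; the 50 GHz resonance and the magnetization value below are illustrative numbers, not the reported measurements:

```python
GAMMA_GHZ_PER_KOE = 2.8            # gyromagnetic ratio gamma/(2*pi) for g ~ 2

def anisotropy_from_fmr(f_res_ghz, ms_emu_per_cm3):
    """Zero-field FMR of a uniaxial ferrite: f = gamma' * H_a with
    H_a = 2 * K_u / M_s, so K_u = H_a * M_s / 2 (CGS units)."""
    h_a_koe = f_res_ghz / GAMMA_GHZ_PER_KOE        # anisotropy field in kOe
    k_u_erg_cm3 = 0.5 * (h_a_koe * 1e3) * ms_emu_per_cm3
    return h_a_koe, k_u_erg_cm3

h_a, k_u = anisotropy_from_fmr(f_res_ghz=50.0, ms_emu_per_cm3=350.0)
print(round(h_a, 1), "kOe", f"{k_u:.1e}", "erg/cm^3")   # ~17.9 kOe, ~3.1e6 erg/cm^3
```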

  8. Cation distribution of Ni-Zn-Mn ferrite nanoparticles

    NASA Astrophysics Data System (ADS)

    Parvatheeswara Rao, B.; Dhanalakshmi, B.; Ramesh, S.; Subba Rao, P. S. V.

    2018-06-01

    Mn-substituted Ni-Zn ferrite nanoparticles, Ni0.4Zn0.6-xMnxFe2O4 (x = 0.00-0.25 in steps of 0.05), were prepared from metal nitrates by sol-gel autocombustion in a citric acid matrix. The samples were examined by X-ray diffraction and vibrating sample magnetometer techniques. Rietveld structural refinements using the XRD data were performed on the samples to extract structural parameters such as phase (spinel), crystallite size (24.86-37.43 nm) and lattice constant (8.3764-8.4089 Å), and also to determine cation distributions based on profile matching and integrated intensity ratios. Saturation magnetization values (37.18-68.40 emu/g) were extracted from the measured M-H loops of these nanoparticles to estimate their magnetic moments. Experimental and calculated magnetic moments and lattice constants were used to confirm the cation distributions derived from the Rietveld analysis. The results for these ferrite nanoparticles are discussed in terms of the compositional modifications, particle sizes and the corresponding cation distributions as a result of Mn substitution.
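    A small sketch of the standard conversion used to turn saturation magnetization into an experimental magnetic moment per formula unit, n_B = M_w * M_s / 5585; the molecular weight is computed for the x = 0 composition and is an approximate illustration, not a value from the paper:

```python
def bohr_magnetons(molecular_weight_g_mol, ms_emu_per_g):
    """Experimental magneton number n_B = M_w * M_s / 5585,
    where 5585 = N_A * mu_B expressed in emu/mol."""
    return molecular_weight_g_mol * ms_emu_per_g / 5585.0

# Approximate molecular weight of Ni0.4Zn0.6Fe2O4 (the x = 0 member):
mw = 0.4 * 58.69 + 0.6 * 65.38 + 2 * 55.845 + 4 * 16.00
print(round(bohr_magnetons(mw, ms_emu_per_g=37.18), 2), "mu_B per formula unit")
```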

  9. Lattice dynamics and electron-phonon coupling calculations using nondiagonal supercells

    NASA Astrophysics Data System (ADS)

    Lloyd-Williams, Jonathan; Monserrat, Bartomeu

    Quantities derived from electron-phonon coupling matrix elements require a fine sampling of the vibrational Brillouin zone. Converged results are typically not obtainable using the direct method, in which a perturbation is frozen into the system and the total energy derivatives are calculated using a finite difference approach, because the size of the simulation cell needed is prohibitively large. We show that it is possible to determine the response of a periodic system to a perturbation characterized by a wave vector with reduced fractional coordinates (m1/n1, m2/n2, m3/n3) using a supercell containing a number of primitive cells equal to the least common multiple of n1, n2, and n3. This is accomplished by utilizing supercell matrices containing nonzero off-diagonal elements. We present the results of electron-phonon coupling calculations using the direct method to sample the vibrational Brillouin zone with grids of unprecedented size for a range of systems, including the canonical example of diamond. We also demonstrate that the use of nondiagonal supercells reduces by over an order of magnitude the computational cost of obtaining converged vibrational densities of states and phonon dispersion curves. J.L.-W. is supported by the Engineering and Physical Sciences Research Council (EPSRC). B.M. is supported by Robinson College, Cambridge, and the Cambridge Philosophical Society. This work was supported by EPSRC Grants EP/J017639/1 and EP/K013564/1.
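    A tiny sketch of the supercell-size rule stated above: a perturbation at reduced wave vector (m1/n1, m2/n2, m3/n3) needs only lcm(n1, n2, n3) primitive cells when nondiagonal supercell matrices are allowed, versus n1*n2*n3 for a diagonal supercell:

```python
from functools import reduce
from math import gcd

def lcm(*values):
    return reduce(lambda a, b: a * b // gcd(a, b), values)

def nondiagonal_supercell_size(n1, n2, n3):
    """Number of primitive cells needed to make a q-point with denominators
    (n1, n2, n3) commensurate when off-diagonal supercell matrices are used."""
    return lcm(n1, n2, n3)

# On a 4x4x4 q-point grid every point has denominators dividing 4, so each
# calculation needs at most 4 primitive cells instead of up to 4*4*4 = 64:
print(nondiagonal_supercell_size(4, 4, 4), 4 * 4 * 4)   # -> 4 64
```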

  10. A new method for assessing the contribution of Primary Biological Atmospheric Particles to the mass concentration of the atmospheric aerosol.

    PubMed

    Perrino, Cinzia; Marcovecchio, Francesca

    2016-02-01

    Primary Biological Atmospheric Particles (PBAPs) constitute an interesting and poorly investigated component of the atmospheric aerosol. We have developed and validated a method for evaluating the contribution of overall PBAPs to the mass concentration of atmospheric particulate matter (PM). The method is based on PM sampling on polycarbonate filters, staining of the collected particles with propidium iodide, observation with an epifluorescence microscope and calculation of the bioaerosol mass using digital image analysis software. The method has also been adapted to the observation and quantification of size-segregated aerosol samples collected by multi-stage impactors. Each step of the procedure has been individually validated. The relative repeatability of the method, calculated on 10 pairs of atmospheric PM samples collected side-by-side, was 16%. The method has been applied to real atmospheric samples collected in the vicinity of Rome, Italy. Size distribution measurements revealed that PBAPs were mainly in the coarse fraction of PM, with maxima in the range 5.6-10 μm. 24-h samples collected during different periods of the year have shown that the concentration of bioaerosol was in the range 0.18-5.3 μg m(-3) (N=20), with a contribution to the organic matter in PM10 in the range 0.5-31% and to the total mass concentration of PM10 in the range 0.3-18%. The possibility of determining the concentration of total PBAPs in PM opens up interesting perspectives in terms of studying the health effects of these components and of increasing our knowledge about the composition of the organic fraction of the atmospheric aerosol. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
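    A compact sketch of the minimum convex polygon (MCP) estimate discussed above: take the convex hull of the location fixes and compute its area with the shoelace formula; the coordinates are assumed to be projected (e.g. UTM metres) and the fixes below are invented:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(sequence):
        chain = []
        for p in sequence:
            while len(chain) >= 2 and (
                (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]
    return build(pts) + build(reversed(pts))

def mcp_area(points):
    """Shoelace area of the convex hull of the location fixes."""
    hull = convex_hull(points)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

# Invented GPS fixes in metres; the MCP area is reported in km^2:
fixes = [(0, 0), (12000, 2000), (15000, 9000), (7000, 14000), (1000, 9000), (6000, 6000)]
print(mcp_area(fixes) / 1e6, "km^2")
```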

  12. Summary of sediment data from the Yampa river and upper Green river basins, Colorado and Utah, 1993-2002

    USGS Publications Warehouse

    Elliott, John G.; Anders, Steven P.

    2004-01-01

    The water resources of the Upper Colorado River Basin have been extensively developed for water supply, irrigation, and power generation through water storage in upstream reservoirs during spring runoff and subsequent releases during the remainder of the year. The net effect of water-resource development has been to substantially modify the predevelopment annual hydrograph as well as the timing and amount of sediment delivery from the upper Green River and the Yampa River Basins tributaries to the main-stem reaches where endangered native fish populations have been observed. The U.S. Geological Survey, in cooperation with the Colorado Division of Wildlife and the U.S. Fish and Wildlife Service, began a study to identify sediment source reaches in the Green River main stem and the lower Yampa and Little Snake Rivers and to identify sediment-transport relations that would be useful in assessing the potential effects of hydrograph modification by reservoir operation on sedimentation at identified razorback spawning bars in the Green River. The need for additional data collection is evaluated at each sampling site. Sediment loads were calculated at five key areas within the watershed by using instantaneous measurements of streamflow, suspended-sediment concentration, and bedload. Sediment loads were computed at each site for two modes of transport (suspended load and bedload), as well as for the total-sediment load (suspended load plus bedload) where both modes were sampled. Sediment loads also were calculated for sediment particle-size range (silt-and-clay, and sand-and-gravel sizes) if laboratory size analysis had been performed on the sample, and by hydrograph season. Sediment-transport curves were developed for each type of sediment load by a least-squares regression of logarithmic-transformed data. Transport equations for suspended load and total load had coefficients of determination of at least 0.72 at all of the sampling sites except Little Snake River near Lily, Colorado. Bedload transport equations at the five sites had coefficients of determination that ranged from 0.40 (Yampa River at Deerlodge Park, Colorado) to 0.80 (Yampa River above Little Snake River near Maybell, Colorado). Transport equations for silt and clay-size material had coefficients of determination that ranged from 0.46 to 0.82. Where particle-size data were available (Yampa River at Deerlodge Park, Colorado, and Green River near Jensen, Utah), transport equations for the smaller particle sizes (fine sand) tended to have higher coefficients of determination than the equations for coarser sizes (medium and coarse sand, and very coarse sand and gravel). Because the data had to be subdivided into at least two subsets (rising-limb, falling-limb and, occasionally, base-flow periods), the seasonal transport equations generally were based on relatively few samples. All transport equations probably could be improved by additional data collected at strategically timed periods.
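    A short sketch of the rating-curve fit described above: a least-squares regression of log-transformed sediment load on log-transformed streamflow, Qs = a * Q^b; the discharge and load values are invented for illustration:

```python
import numpy as np

def fit_rating_curve(discharge, sediment_load):
    """Fit log10(Qs) = log10(a) + b * log10(Q); returns (a, b, R^2)."""
    x, y = np.log10(discharge), np.log10(sediment_load)
    b, log_a = np.polyfit(x, y, 1)
    y_hat = log_a + b * x
    r_squared = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 10.0 ** log_a, b, r_squared

streamflow = np.array([120, 340, 560, 900, 1500, 2600, 4100])   # e.g. ft^3/s
suspended_load = np.array([4, 35, 90, 260, 700, 2100, 5200])    # e.g. tons/day
a, b, r2 = fit_rating_curve(streamflow, suspended_load)
print(f"Qs = {a:.3g} * Q^{b:.2f},  R^2 = {r2:.2f}")
```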

  13. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate the volume dose distribution with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution for different dose grid sizes. Results: The dose to the PTV, in terms of the mean dose, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  14. Structural elucidation and magnetic behavior evaluation of Cu-Cr doped BaCo-X hexagonal ferrites

    NASA Astrophysics Data System (ADS)

    Azhar Khan, Muhammad; Hussain, Farhat; Rashid, Muhammad; Mahmood, Asif; Ramay, Shahid M.; Majeed, Abdul

    2018-04-01

    Ba2-xCuxCo2CryFe28-yO46 (x = 0.0, 0.1, 0.2, 0.3, 0.4; y = 0.0, 0.2, 0.4, 0.6, 0.8) X-type hexagonal ferrites were synthesized via the micro-emulsion route. The techniques applied to characterize the prepared samples are as follows: X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), dielectric measurements and vibrating sample magnetometry (VSM). The structural parameters, i.e. lattice constants (a, c), cell volume (V), X-ray density, bulk density and crystallite size, of all the prepared samples were obtained from the XRD analysis. The lattice parameters 'a' and 'c' increase from 5.875 Å to 5.934 Å and 83.367 Å to 83.990 Å, respectively. The crystallite size of the investigated samples lies in the range of 28-32 nm. The magnetic properties of all samples were determined by vibrating sample magnetometer (VSM) analysis. An increase in coercivity (Hc) was observed with increasing doping content. It was observed that the coercivity (Hc) of all prepared samples is inversely related to the crystallite size, which suggests that the materials are super-paramagnetic. The dielectric parameters, i.e. dielectric constant, dielectric loss and loss tangent, were obtained in the frequency range of 1 MHz-3 GHz and followed the Maxwell-Wagner model. Significant variation in the dielectric parameters is observed with increasing frequency. The maximum Q value is obtained at ∼2 GHz, which makes these materials suitable for high-frequency multilayer chip inductors.

  15. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…

  16. The response of excess 230Th and extraterrestrial 3He to sediment redistribution at the Blake Ridge, western North Atlantic

    NASA Astrophysics Data System (ADS)

    McGee, David; Marcantonio, Franco; McManus, Jerry F.; Winckler, Gisela

    2010-10-01

    The constant-flux proxies excess 230Th (230Th_xs) and extraterrestrial 3He (3He_ET) are commonly used to calculate sedimentary mass accumulation rates and to quantify lateral advection of sediment at core sites. In settings with significant lateral input or removal of sediment, these calculations depend on the assumption that concentrations of 230Th_xs and 3He_ET are the same in both advected sediment and sediment falling through the water column above the core site. Sediment redistribution is known to fractionate grain sizes, preferentially transporting fine grains; though relatively few studies have examined the grain size distribution of 230Th_xs and 3He_ET, presently available data indicate that both are concentrated in fine grains, suggesting that fractionation during advection may bias accumulation rate and lateral advection estimates based on these proxies. In this study, we evaluate the behavior of 230Th_xs and 3He_ET in Holocene and last glacial samples from two cores from the Blake Ridge, a drift deposit in the western North Atlantic. At the end of the last glacial period, both cores received large amounts of laterally transported sediment enriched in fine-grained material. We find that accumulation rates calculated by normalization to 230Th and 3He are internally consistent despite large spatial and temporal differences in sediment advection. Our analyses of grain size fractions indicate that ~70% of 3He_ET-bearing grains are in the <20 μm fraction, with roughly equal amounts in the <4 and 4-20 μm fractions. 230Th_xs is concentrated in <4 μm grains relative to 4-20 μm grains by approximately a factor of 2 in Holocene samples and by a much larger factor (averaging a factor of 10) in glacial samples. Despite these enrichments of both constant-flux proxies in fine particles, the fidelity of 230Th- and 3He-based accumulation rate estimates appears to be preserved even in settings with extreme sediment redistribution, perhaps due to the cohesive behavior of fine particles in marine settings.

  17. Particulate emissions calculations from fall tillage operations using point and remote sensors.

    PubMed

    Moore, Kori D; Wojcik, Michael D; Martin, Randal S; Marchant, Christian C; Bingham, Gail E; Pfeiffer, Richard L; Prueger, John H; Hatfield, Jerry L

    2013-07-01

    Soil preparation for agricultural crops produces aerosols that may significantly contribute to seasonal atmospheric particulate matter (PM). Efforts to reduce PM emissions from tillage through a variety of conservation management practices (CMPs) have been made, but the reductions from many of these practices have not been measured in the field. A study was conducted in California's San Joaquin Valley to quantify emissions reductions from a fall tillage CMP. Emissions were measured from conventional tillage methods and from a "combined operations" CMP, which combines several implements to reduce tractor passes. Measurements were made of soil moisture, bulk density, meteorological profiles, filter-based total suspended PM (TSP), concentrations of PM with an equivalent aerodynamic diameter ≤10 μm (PM10) and PM with an equivalent aerodynamic diameter ≤2.5 μm (PM2.5), and aerosol size distribution. A mass-calibrated, scanning, three-wavelength light detection and ranging (LIDAR) procedure estimated PM through a series of algorithms. Emissions were calculated via inverse modeling with mass concentration measurements and by applying a mass balance to LIDAR data. Inverse modeling emission estimates were higher, often with statistically significant differences. Derived PM emissions for conventional operations generally agree with literature values. Sampling irregularities with a few filter-based samples prevented calculation of a complete set of emissions through inverse modeling; however, the LIDAR-based emissions dataset was complete. The CMP control effectiveness was calculated based on LIDAR-derived emissions to be 29 ± 2%, 60 ± 1%, and 25 ± 1% for the PM10, PM2.5, and TSP size fractions, respectively. Implementation of this CMP provides an effective method for the reduction of PM emissions. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  18. Assessing the role of detrital zircon sorting on provenance interpretations in an ancient fluvial system using paleohydraulics - Permian Cutler Group, Paradox Basin, Utah and Colorado

    NASA Astrophysics Data System (ADS)

    Findlay, C. P., III; Ewing, R. C.; Perez, N. D.

    2017-12-01

    Detrital zircon age signatures used in provenance studies are assumed to be representative of the entire catchments from which the sediment was derived, but the extent to which hydraulic sorting can bias provenance interpretations is poorly constrained. Sediment and mineral sorting occurs with changes in hydraulic conditions driven by both allogenic and autogenic processes. Zircon is sorted from less dense minerals due to the difference in density, and any age dependence on zircon size could potentially bias provenance interpretations. In this study, a coupled paleohydraulic and geochemical provenance approach is used to identify changes in paleohydraulic conditions and relate them to spatial variations in provenance signatures from samples collected along an approximately time-correlative source-to-sink pathway in the Permian Cutler Group of the Paradox Basin. Samples proximal to the uplift have a paleoflow direction to the southwest. In the medial basin, paleocurrent directions indicate that salt movement caused fluvial pathways to divert to the north and northwest on the flanks of anticlines. Channel depth, flow velocity, and discharge were calculated from field measurements of grain size and of dune and bar cross-stratification; these calculations indicate that the competency of the fluvial system decreased from the proximal to the medial basin by up to a factor of 12. Based upon the paleohydraulic calculations, zircon size fractionation would occur along the transect such that the larger zircons are removed from the system prior to reaching the medial basin. Analysis of the size and age distribution of zircons from the proximal and distal fluvial system of the Cutler Group tests whether this hydraulic sorting affects the expected Uncompahgre Uplift age distribution.

  19. Porosity characterization for heterogeneous shales using integrated multiscale microscopy

    NASA Astrophysics Data System (ADS)

    Rassouli, F.; Andrew, M.; Zoback, M. D.

    2016-12-01

    Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport properties of shales. Study of the diverse distribution of pore sizes and structures in such low-permeability rocks is hindered by the lack of tools to visualize the microstructural properties of shale rocks. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (X-ray diffraction, thin section analysis and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville formation in East Texas and to generate a more holistic understanding of the porosity structure in shales, ranging from the standard core plug down to the nm scale. Standard 1" diameter core plug samples were first imaged using a Versa 3D x-ray microscope at lower resolutions. We then pick several regions of interest (ROIs) with various micro-features (such as micro-cracks and high organic matter content) in the rock samples and run higher resolution CT scans using non-destructive interior tomography. After this step, we cut the samples and drill 5 mm diameter cores out of the selected ROIs. We then rescan the samples to measure the porosity distribution of the 5 mm cores. We repeat this step for samples with a diameter of 1 mm cut out of the 5 mm cores using a laser cutting machine. After comparing the pore structure and distribution of the samples measured from the micro-CT analysis, we move to nano-scale imaging to capture the ultra-fine pores within the shale samples. At this stage, the diameter of the 1 mm samples is milled down to 70 microns using the laser beam. We scan these samples in a nano-CT Ultra x-ray microscope and calculate the porosity of the samples by image segmentation methods. Finally, we use images collected from focused ion beam scanning electron microscopy (FIB-SEM) to compare the results of the porosity measurements from all the different imaging techniques. These multi-scale characterization techniques are then compared with traditional analytical techniques such as mercury porosimetry.

  20. Application of the graphics processor unit to simulate a near field diffraction

    NASA Astrophysics Data System (ADS)

    Zinchik, Alexander A.; Topalov, Oleg K.; Muzychenko, Yana B.

    2017-06-01

    For many years, computer modeling programs have been used for lecture demonstrations. Most of the existing commercial software, such as VirtualLab from LightTrans GmbH, is quite expensive and has surplus capabilities for educational tasks. Demonstrating diffraction in the near zone is complicated by the large amount of calculation required to obtain the two-dimensional distribution of amplitude and phase. To date, there are no demonstrations that can show the resulting amplitude and phase distributions without a significant time delay. Even with Fast Fourier Transform (FFT) algorithms, the diffraction calculation in the near zone for input complex amplitude distributions larger than 2000 × 2000 pixels takes tens of seconds. Our program selects the appropriate propagation operator from a prescribed set of operators including Spectrum of Plane Waves propagation and Rayleigh-Sommerfeld propagation (using convolution). After implementation, we compare the calculation times for near-field diffraction on the GPU and the CPU, showing that using the GPU to calculate the diffraction pattern in the near zone increases the overall speed of the algorithm for images of 2048 × 2048 sampling points and larger. The modules are implemented as separate dynamic-link libraries and can be used for lecture demonstrations, workshops, self-study and by students in solving various problems such as the phase retrieval task.
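    A compact sketch of the 'Spectrum of Plane Waves' (angular spectrum) propagation operator mentioned above, written with NumPy FFTs; a GPU version would swap in an array library such as CuPy, which is an assumption about how such a speed-up could be obtained rather than the authors' actual implementation, and the aperture, wavelength and pixel pitch are illustrative:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 (square grid, pixel pitch dx) over a
    distance z: U(z) = IFFT{ FFT{u0} * exp(i*kz*z) }, with
    kz = (2*pi/lambda) * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)

# Near-field diffraction of a square aperture illuminated by a plane wave:
n, dx, wl = 2048, 5e-6, 633e-9                     # 2048 x 2048 grid, 5 um pixels, HeNe
aperture = np.zeros((n, n))
aperture[n // 2 - 100:n // 2 + 100, n // 2 - 100:n // 2 + 100] = 1.0
field = angular_spectrum_propagate(aperture, wl, dx, z=0.05)
intensity, phase = np.abs(field) ** 2, np.angle(field)
print(intensity.max(), phase[n // 2, n // 2])
```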

  1. Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions

    NASA Astrophysics Data System (ADS)

    Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir

    2013-08-01

    A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, and the average scatterer particle size is thereby extracted using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.

  3. Crystal Face Distributions and Surface Site Densities of Two Synthetic Goethites: Implications for Adsorption Capacities as a Function of Particle Size.

    PubMed

    Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul

    2017-09-12

    Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr(VI) adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite versus the smaller sample and points toward the necessity of knowing the atomic scale surface structure in predicting mineral adsorption processes.

  4. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
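
    The article's interval procedure is based on exact distributional results; as a rough point of comparison, a percentile bootstrap can be used for a variance-unpooled standardized mean difference (scaling by the average of the two group variances). The sketch below is not the author's method, and the two samples are simulated purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def smd(x, y):
        """Standardized mean difference scaled by the average of the two variances
        (does not assume homogeneous variances)."""
        return (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)

    # Simulated heteroscedastic samples
    x = rng.normal(0.5, 1.0, 40)
    y = rng.normal(0.0, 2.0, 60)

    # Percentile bootstrap confidence interval
    boots = np.array([
        smd(rng.choice(x, x.size, replace=True), rng.choice(y, y.size, replace=True))
        for _ in range(5000)
    ])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"SMD = {smd(x, y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
    ```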

  5. Effect size and statistical power in the rodent fear conditioning literature - A systematic review.

    PubMed

    Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.

  6. Effect size and statistical power in the rodent fear conditioning literature – A systematic review

    PubMed Central

    Macleod, Malcolm R.

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science. PMID:29698451
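
    The kind of calculation behind the quoted "15 animals per group for 80% power" figure is a routine two-sample power computation. A minimal sketch is shown below using a hypothetical standardized effect size (Cohen's d of 0.75, not a value taken from the review) and conventional settings.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical standardized effect size (Cohen's d) and conventional settings
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.75, alpha=0.05, power=0.80,
                                       alternative='two-sided')
    print(f"animals per group for 80% power: {n_per_group:.1f}")

    # Conversely, the power achieved with a fixed group size
    power = analysis.power(effect_size=0.75, nobs1=15, alpha=0.05, ratio=1.0)
    print(f"power with 15 per group: {power:.2f}")
    ```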

  7. Magnetic study of Co-doped CdSe nanoparticles

    NASA Astrophysics Data System (ADS)

    Das, Sayantani; Banerjee, Sourish; Sinha, T. P.

    2018-04-01

    Cobalt-doped (2%, 5%, and 10%) cadmium selenide (CdSe) nanoparticles have been synthesized by a soft chemical route. The XRD patterns show the cubic structure of the samples. The crystallization temperature of the samples is determined using differential scanning calorimetry. The average particle size of all the samples is found to be ~25 nm. Field-dependent (M-H) and temperature-dependent (M-T) magnetization measurements indicate the presence of ferromagnetic components in the samples at both room temperature and low temperature. In order to estimate the antiferromagnetic coupling among the doped transition-metal atoms, an M-T measurement at 500 Oe has been carried out under zero-field-cooled (ZFC) and field-cooled (FC) conditions, and the Curie-Weiss temperature θ of the samples has been estimated from 1/χ vs T plots.
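
    The Curie-Weiss temperature follows from the linear form of the susceptibility in the paramagnetic regime: χ = C/(T − θ) implies 1/χ = T/C − θ/C, so a straight-line fit of 1/χ versus T yields C from the slope and θ from the intercept. A minimal sketch of that fit, using made-up susceptibility data rather than the measured curves, is shown below.

    ```python
    import numpy as np

    # Hypothetical high-temperature susceptibility data (arbitrary units)
    C_true, theta_true = 2.0e-3, -45.0            # Curie constant and Weiss temperature
    T = np.linspace(150, 300, 50)                  # temperature (K), paramagnetic regime
    chi = C_true / (T - theta_true)                # Curie-Weiss susceptibility

    # Linear fit of 1/chi vs T:  1/chi = T/C - theta/C
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C_fit = 1.0 / slope
    theta_fit = -intercept * C_fit
    print(f"Curie constant C = {C_fit:.3e}, Curie-Weiss temperature theta = {theta_fit:.1f} K")
    ```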

  8. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    PubMed

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel

  9. Multilocus Lod Scores in Large Pedigrees: Combination of Exact and Approximate Calculations

    PubMed Central

    Tong, Liping; Thompson, Elizabeth

    2007-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some ‘key’ individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. PMID:17934317

  10. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as “multistate” algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  11. Spatial distribution of nymphs of Scaphoideus titanus (Homoptera: Cicadellidae) in grapes, and evaluation of sequential sampling plans.

    PubMed

    Lessio, Federico; Alma, Alberto

    2006-04-01

    The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera: Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005, in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample size and fixed precision level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of the Resampling Validation of Sample Plans (RVSP) software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30, but for a more accurate estimate (D0 = 0.10), the required sample size needs to be 292 plants. Green's fixed precision level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired levels. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and flavescence dorée in Italy is discussed.
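
    Under Taylor's power law, s² = a·m^b, the minimum sample size for a target precision D0 (standard error divided by the mean) is n = a·m^(b−2)/D0². The sketch below evaluates this relation using the exponent reported above (b = 1.49); the coefficient a and the mean densities are hypothetical placeholders, since a is not quoted in the abstract.

    ```python
    import numpy as np

    a, b = 2.5, 1.49   # Taylor's power law coefficients; a is a hypothetical placeholder

    def min_sample_size(mean_density, precision):
        """Minimum number of plants to sample so that SE/mean <= precision,
        assuming the variance follows Taylor's power law s^2 = a * m^b."""
        return a * mean_density ** (b - 2) / precision ** 2

    for m in (0.2, 1.0, 5.0):                  # hypothetical mean nymphs per plant
        n_30 = min_sample_size(m, 0.30)
        n_10 = min_sample_size(m, 0.10)
        print(f"mean = {m:4.1f}: n = {np.ceil(n_30):4.0f} at D0=0.30, "
              f"{np.ceil(n_10):5.0f} at D0=0.10")
    ```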

  12. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
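
    The size-versus-rank idea behind the tail-truncated method can be illustrated with a simple log-log regression: for a Pareto distribution, the rank r of a field of size x satisfies approximately log r = constant − α·log x, so the slope of log(rank) against log(size) estimates −α. The sketch below applies this to simulated field sizes; it illustrates the principle only and is not the authors' exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulate "discovered" field sizes from a Pareto distribution with shape alpha
    alpha_true, x_min, n_fields = 1.2, 1.0, 200
    sizes = x_min * (1 + rng.pareto(alpha_true, n_fields))

    # Rank fields from largest (rank 1) to smallest and regress log(rank) on log(size)
    sizes_sorted = np.sort(sizes)[::-1]
    ranks = np.arange(1, n_fields + 1)
    slope, _ = np.polyfit(np.log(sizes_sorted), np.log(ranks), 1)
    print(f"estimated Pareto shape: {-slope:.2f} (true value {alpha_true})")
    ```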

  13. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

    Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect a RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined the ΔELOD (≡ the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset; and the optimal proportion p̂ as that value of p that maximizes ΔELOD. We determined that, surprisingly, p̂ does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of θ_female and θ_male, even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
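
    For fully informative, phase-known meioses the comparison reduces to two binomial likelihoods: the unconstrained lod maximizes θ_female and θ_male separately, while the constrained lod forces a common θ. The toy calculation below, with hypothetical recombinant counts, shows the ΔLOD whose expectation (ΔELOD) drives the sample size requirements discussed above; it is a simplified illustration, not the paper's ELOD machinery.

    ```python
    import numpy as np

    def lod(theta, n_rec, n_tot):
        """lod score of theta against the null theta = 1/2 for binomial meiosis counts."""
        theta = np.clip(theta, 1e-9, 1 - 1e-9)
        return (n_rec * np.log10(theta / 0.5)
                + (n_tot - n_rec) * np.log10((1 - theta) / 0.5))

    # Hypothetical counts: recombinants / informative meioses by parental origin
    rec_f, n_f = 9, 30      # maternal (female) meioses
    rec_m, n_m = 2, 30      # paternal (male) meioses

    # Unconstrained: separate MLEs for theta_female and theta_male
    lod_unconstrained = lod(rec_f / n_f, rec_f, n_f) + lod(rec_m / n_m, rec_m, n_m)

    # Constrained: a single common theta
    theta_common = (rec_f + rec_m) / (n_f + n_m)
    lod_constrained = lod(theta_common, rec_f, n_f) + lod(theta_common, rec_m, n_m)

    delta_lod = lod_unconstrained - lod_constrained
    print(f"unconstrained lod = {lod_unconstrained:.2f}, constrained = {lod_constrained:.2f}, "
          f"delta = {delta_lod:.2f}")
    ```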

  14. Nanoparticle size detection limits by single particle ICP-MS for 40 elements.

    PubMed

    Lee, Sungyun; Bi, Xiangyu; Reed, Robert B; Ranville, James F; Herckes, Pierre; Westerhoff, Paul

    2014-09-02

    The quantification and characterization of natural, engineered, and incidental nano- to micro-size particles are beneficial to assessing a nanomaterial's performance in manufacturing, their fate and transport in the environment, and their potential risk to human health. Single particle inductively coupled plasma mass spectrometry (spICP-MS) can sensitively quantify the amount and size distribution of metallic nanoparticles suspended in aqueous matrices. To accurately obtain the nanoparticle size distribution, it is critical to have knowledge of the size detection limit (denoted as Dmin) using spICP-MS for a wide range of elements (other than a few available assessed ones) that have been or will be synthesized into engineered nanoparticles. Herein is described a method to estimate the size detection limit using spICP-MS and then apply it to nanoparticles composed of 40 different elements. The calculated Dmin values correspond well for a few of the elements with their detectable sizes that are available in the literature. Assuming each nanoparticle sample is composed of one element, Dmin values vary substantially among the 40 elements: Ta, U, Ir, Rh, Th, Ce, and Hf showed the lowest Dmin values, ≤10 nm; Bi, W, In, Pb, Pt, Ag, Au, Tl, Pd, Y, Ru, Cd, and Sb had Dmin in the range of 11-20 nm; Dmin values of Co, Sr, Sn, Zr, Ba, Te, Mo, Ni, V, Cu, Cr, Mg, Zn, Fe, Al, Li, and Ti were located at 21-80 nm; and Se, Ca, and Si showed high Dmin values, greater than 200 nm. A range of parameters that influence the Dmin, such as instrument sensitivity, nanoparticle density, and background noise, is demonstrated. It is observed that, when the background noise is low, the instrument sensitivity and nanoparticle density dominate the Dmin significantly. Approaches for reducing the Dmin, e.g., collision cell technology (CCT) and analyte isotope selection, are also discussed. To validate the Dmin estimation approach, size distributions for three engineered nanoparticle samples were obtained using spICP-MS. The use of this methodology confirms that the observed minimum detectable sizes are consistent with the calculated Dmin values. Overall, this work identifies the elements and nanoparticles to which current spICP-MS approaches can be applied, in order to enable quantification of very small nanoparticles at low concentrations in aqueous media.
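
    Conceptually, the size detection limit is obtained by converting the smallest resolvable single-particle signal into an analyte mass via the instrument sensitivity, and the mass into a sphere-equivalent diameter via the particle density and the element's mass fraction: D_min = (6·m_min/(π·ρ·f))^(1/3). The sketch below shows how those pieces combine; the background, sensitivity, and detection criterion used are hypothetical and are not values from the study.

    ```python
    import numpy as np

    def d_min_nm(bg_sd, sensitivity_counts_per_fg, density_g_cm3, mass_fraction, k=3):
        """Estimate the minimum detectable particle diameter (nm) for spICP-MS.

        The smallest distinguishable single-particle signal is taken as k standard
        deviations of the background; sensitivity converts counts to analyte mass (fg),
        and density plus the element's mass fraction convert mass to a sphere diameter.
        """
        m_min_fg = k * bg_sd / sensitivity_counts_per_fg       # analyte mass, femtograms
        m_min_g = m_min_fg * 1e-15 / mass_fraction              # whole-particle mass, grams
        d_cm = (6.0 * m_min_g / (np.pi * density_g_cm3)) ** (1.0 / 3.0)
        return d_cm * 1e7                                        # cm -> nm

    # Hypothetical gold-like example: density 19.3 g/cm^3, pure element
    d = d_min_nm(bg_sd=0.7, sensitivity_counts_per_fg=25, density_g_cm3=19.3, mass_fraction=1.0)
    print(f"D_min ~ {d:.0f} nm")
    ```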

  15. Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.S.

    2000-08-23

    A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to approximately 5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicates the analytical results are relatively self-consistent for major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible due to the unavailability of data for diluted samples of tank 241-AN-103 whole tank composites. However, the analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably well with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determination of how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.

  16. Characterization of stormwater runoff from bridge decks in eastern Massachusetts, 2014–16

    USGS Publications Warehouse

    Smith, Kirk P.; Sorenson, Jason R.; Granato, Gregory E.

    2018-05-02

    The quality of stormwater runoff from bridge decks (hereafter referred to as “bridge-deck runoff”) was characterized in a field study from August 2014 through August 2016 in which concentrations of suspended sediment (SS) and total nutrients were monitored. These new data were collected to supplement existing highway-runoff data collected in Massachusetts, which were deficient in bridge-deck runoff concentration data. Monitoring stations were installed at three bridges maintained by the Massachusetts Department of Transportation in eastern Massachusetts (State Route 2A in the city of Boston, Interstate 90 in the town of Weston, and State Route 20 near Quinsigamond Village in the city of Worcester). The bridges had annual average daily traffic volumes from 21,200 to 124,000 vehicles per day; the land use surrounding the monitoring stations was 25 to 67 percent impervious. Automatic-monitoring techniques were used to collect more than 160 flow-proportional composite samples of bridge-deck runoff. Samples were analyzed for concentrations of SS, loss on ignition of suspended solids (LOI), particulate carbon (PC), total phosphorus (TP), total dissolved nitrogen (DN), and particulate nitrogen (PN). The distribution of particle size of SS also was determined for composite samples. Samples of bridge-deck runoff were collected year round during rain, mixed precipitation, and snowmelt runoff and with different dry antecedent periods throughout the 2-year sampling period. At the three bridge-deck-monitoring stations, median concentrations of SS in composite samples of bridge-deck runoff ranged from 1,490 to 2,020 milligrams per liter (mg/L); however, concentrations of SS in individual composites ranged widely, from 44 to 142,000 mg/L. Median concentrations of SS were similar in composite samples collected from the State Route 2A and Interstate 90 bridges (2,010 and 2,020 mg/L, respectively), and lowest at the State Route 20 bridge (1,490 mg/L). Concentrations of coarse sediment (greater than 0.25 millimeters in diameter) dominated the SS matrix by more than an order of magnitude. Concentrations of LOI and PC in composite samples ranged from 15 to 1,740 mg/L and 6.68 to 1,360 mg/L, respectively, and generally represented less than 10 and 3 percent of the median mass of SS, respectively. Concentrations of TP in composite samples ranged from 0.09 to 7.02 mg/L; median concentrations of TP ranged from 0.505 to 0.69 mg/L and were highest on the bridge on State Route 2A in Boston. Concentrations of total nitrogen (TN) (the sum of DN and PN) in composite samples were variable (0.36 to 29 mg/L). Median DN concentrations (0.64 to 0.90 mg/L) generally represented about 40 percent of the TN concentration at each bridge and were similar to annual volume-weighted mean concentrations of nitrogen in precipitation in Massachusetts. Nonparametric statistical methods were used to test for differences between sample constituent concentrations among the three bridges. These results indicated that there are no statistically significant differences for concentrations of SS, LOI, PC, and TP among the three bridges (one-way analysis of variance test on rank-transformed data, 95-percent confidence level). Test results for concentrations of TN in composite samples indicated that concentrations of TN collected on State Route 20 near Quinsigamond Village were significantly higher than those collected on State Route 2A in Boston and Interstate 90 near Weston.
    Median concentrations of TN were about 93 and 55 percent lower at State Route 2A and at Interstate 90, respectively, compared to the median concentrations of TN at State Route 20. Samples of sediment were collected from five fixed locations on each bridge on three occasions during dry weather to calculate semiquantitative distributions of sediment yields on the bridge surface relative to the monitoring location. Mean yields of bridge-deck sediment during this study for State Route 2A in Boston, Interstate 90 near Weston, and State Route 20 near Quinsigamond Village were 1,500, 250, and 5,700 pounds per curb-mile, respectively. Sediment yields at each sampling location varied widely (26 to 25,000 pounds per curb-mile) but were similar to yields reported elsewhere in Massachusetts and the United States. Yields calculated for each sampling location indicated that the sediment was not evenly distributed across each bridge in this study for plausible reasons such as bridge slope, vehicular tracking, and bridge deterioration. Bridge-deck sediment quality was largely affected by the distribution of sediment particle size. Concentrations of TP in the fine sediment-size fraction (less than 0.0625 millimeter in diameter) of samples of bridge-deck sediment were about 6 times greater than in the coarse size fraction. Concentrations for many total-recoverable metals were 2 to 17 times greater in the fine size fraction compared to concentrations in the coarse size fraction (greater than or equal to 0.25 millimeter in diameter), and concentrations of total-recoverable copper and lead in the fine size fraction were 2 to 65 times higher compared to concentrations in the intermediate (greater than or equal to 0.0625 to 0.25 millimeter in diameter) or the coarse size fraction. However, the proportion of sediment particles less than 0.0625 millimeter in diameter in composite samples of bridge-deck runoff was small (median values range from 4 to 8 percent at each bridge) compared to the larger sediment particle-size mass. As a result, more than 50 percent of the sediment-associated TP, aluminum, chromium, manganese, and nickel was estimated to be associated with the coarse size fraction of the SS load. In contrast, about 95 percent of the estimated sediment-associated copper concentration was associated with the fine size fraction of the SS load. Version 1.0.2 of the Stochastic Empirical Loading and Dilution Model was used to simulate long-term (29–30-year) concentrations and annual yields of SS, TP, and TN in bridge-deck runoff and in discharges from a hypothetical stormwater treatment best-management practice structure. Three methods (traditional statistics, robust statistics, and L-moments) were used to calculate statistics for stochastic simulations because the high variability in measured concentration values during the field study resulted in extreme simulated concentrations. Statistics of each dataset, including the average, standard deviation, and skew of the common (base 10) logarithms, for each of the three bridges, and for a lumped dataset, were calculated and used for simulations; statistics representing the median of statistics calculated for the three bridges also were used for simulations. These median statistics were selected for the interpretive simulations so that the simulations could be used to estimate concentrations and yields from other, unmonitored bridges in Massachusetts.
    Comparisons of the standard and robust statistics indicated that simulation results with either method would be similar, which indicated that the large variability in simulated results was not caused by a few outliers. Comparison to statistics calculated by the L-moments methods indicated that L-moments do not produce extreme concentrations; however, they also do not produce results that represent the bulk of concentration data. The runoff-quality risk analysis indicated that bridge-deck runoff would exceed discharge standards commonly used for large, advanced wastewater treatment plants, but that commonly used stormwater best-management practices may reduce the percentage of exceedances by one-half. Results of simulations indicated that long-term average yields of TN, TP, and SS may be about 21.4, 6.44, and 40,600 pounds per acre per year, respectively. These yields are about 1.3, 3.4, and 16 times simulated ultra-urban highway yields in Massachusetts; however, simulations indicated that use of a best-management practice structure to treat bridge-deck runoff may reduce discharge yields to about 10, 2.8, and 4,300 pounds per acre per year, respectively.
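
    The simulation inputs described above are the mean, standard deviation, and skew of the base-10 logarithms of the measured concentrations. A minimal sketch of that calculation is shown below for a hypothetical set of composite-sample concentrations; the robust and L-moment variants mentioned in the text would replace these moment estimators.

    ```python
    import numpy as np
    from scipy.stats import skew

    # Hypothetical suspended-sediment concentrations from composite samples (mg/L)
    concentrations = np.array([44, 310, 980, 1490, 2010, 2600, 5400, 12800, 142000], dtype=float)

    log_c = np.log10(concentrations)
    stats = {
        "mean of log10": log_c.mean(),
        "std of log10": log_c.std(ddof=1),
        "skew of log10": skew(log_c, bias=False),
    }
    for name, value in stats.items():
        print(f"{name}: {value:.3f}")
    ```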

  17. Morphology of meteoroid and space debris craters on LDEF metal targets

    NASA Technical Reports Server (NTRS)

    Love, S. G.; Brownlee, D. E.; King, N. L.; Hoerz, F.

    1994-01-01

    We measured the depths, average diameters, and circularity indices of over 600 micrometeoroid and space debris craters on various metal surfaces exposed to space on the Long Duration Exposure Facility (LDEF) satellite, as a test of some of the formalisms used to convert the diameters of craters on space-exposed surfaces into penetration depths for the purpose of calculating impactor sizes or masses. The topics covered include the following: target material orientation; crater measurements and sample populations; effects of oblique impacts; effects of projectile velocity; effects of crater size; effects of target hardness; effects of target density; and effects of projectile properties.

  18. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    NASA Astrophysics Data System (ADS)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated based on the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of the irregular particle shape and varying particle size distribution of this material, calculating a single representative angle is difficult. In previous studies, only one section of the uneven slope is usually chosen, and standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which digitizes the surface of a heap and generates a point cloud. Two tangential lines of any selected section are then calculated by linear least-squares regression (LLSR), from which the left and right angles of repose of a pile are derived. Next, a number of sections are selected stochastically and the calculation is repeated for each, yielding a sample of angles that is plotted as a scatter diagram in Cartesian coordinates. Different samples are then acquired through different selections of sections, and the reliability of the proposed method is verified by analyzing the similarities and differences between these samples. These results provide a realistic criterion for reducing the deviation between experiment and simulation that arises from the random selection of a single angle, and will be compared with simulation results in future work.
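
    The angle extraction itself is a straightforward least-squares step: for a selected cross-section of the scanned heap, each flank is fitted with a line and the angle of repose follows from the arctangent of the fitted slope. A minimal sketch with synthetic section points (not scanner data) is shown below.

    ```python
    import numpy as np

    def flank_angle_deg(x, z):
        """Fit z = slope*x + intercept by linear least squares and return the
        inclination of the fitted line in degrees."""
        slope, _ = np.polyfit(x, z, 1)
        return np.degrees(np.arctan(abs(slope)))

    # Synthetic cross-section of a heap: left flank rises, right flank falls
    x_left = np.linspace(-1.0, -0.1, 30)
    z_left = 0.65 * (x_left + 1.0) + np.random.normal(0, 0.01, x_left.size)
    x_right = np.linspace(0.1, 1.0, 30)
    z_right = 0.60 * (1.0 - x_right) + np.random.normal(0, 0.01, x_right.size)

    print(f"left angle of repose:  {flank_angle_deg(x_left, z_left):.1f} deg")
    print(f"right angle of repose: {flank_angle_deg(x_right, z_right):.1f} deg")
    ```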

  19. A reanalysis of the Cu-7 intrauterine contraceptive device clinical trial and the incidence of pelvic inflammatory disease: a paradigm for assessing intrauterine contraceptive device safety.

    PubMed

    Roy, S; Azen, C

    1994-06-01

    We calculated and compared the incidence of pelvic inflammatory disease in a 10% random sample of the Cu-7 intrauterine contraceptive device (G.D. Searle & Co., Skokie, Ill.) clinical trial with the rates reported to the Food and Drug Administration and those in subsequent trials published in the world literature. A 10% random sample of the Cu-7 clinical trial was examined because calculations had demonstrated this random sample to be sufficient in size (n = 1614) to detect a difference in rates of pelvic inflammatory disease from those reported to the Food and Drug Administration. An audit of a subset of the patient files, compared with the original files in Skokie, Illinois, confirmed that the files available for analysis were complete. Standard definitions were used to identify cases of pelvic inflammatory disease and to calculate rates of pelvic inflammatory disease. The world literature on Cu-7 clinical trials was reviewed. The calculated crude and Pearl index rates of pelvic inflammatory disease were consistent with those rates previously reported to the Food and Drug Administration and published in the medical literature. Life-table pelvic inflammatory disease rates were not different between nulliparous and parous women and pelvic inflammatory disease did not differ from basal annual rates in fecund women. On the basis of the analysis of this 10% sample, the pelvic inflammatory disease patient rates reported to the Food and Drug Administration for the entire Cu-7 clinical trial are accurate and are similar to those published in the world literature.
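
    The Pearl index mentioned above is a standard exposure-adjusted event rate: events per 100 woman-years, computed as the number of events divided by woman-months of exposure and multiplied by 1,200. The counts in the toy example below are hypothetical.

    ```python
    def pearl_index(events, woman_months):
        """Events per 100 woman-years of exposure."""
        return events / woman_months * 1200.0

    # Hypothetical follow-up: 24 PID cases over 38,500 woman-months of device use
    print(f"Pearl index: {pearl_index(24, 38_500):.2f} per 100 woman-years")
    ```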

  20. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analysis of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variation in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed using the t test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates at small sample sizes, the GEE approach can be recommended for CRTs with binary outcomes because it requires fewer assumptions and is robust to misspecification of the covariance structure. PMID:25345738
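
    The power formula derived in the article is specific to the t test with the KC correction, but the general shape of such calculations can be seen in the standard design-effect approach for binary outcomes: compute the individually randomized sample size, inflate it by 1 + (m − 1)·ICC for cluster size m, and convert to clusters per arm. The sketch below uses hypothetical proportions, cluster size, and ICC, and is not the authors' formula.

    ```python
    import math
    from scipy.stats import norm

    def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
        """Approximate number of clusters per arm for a two-arm CRT with a binary outcome,
        using the normal-approximation sample size inflated by the design effect."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
        design_effect = 1 + (m - 1) * icc
        return math.ceil(n_ind * design_effect / m)

    # Hypothetical trial: control 30% vs intervention 45%, 20 subjects/cluster, ICC = 0.05
    print(f"clusters per arm: {clusters_per_arm(0.30, 0.45, m=20, icc=0.05)}")
    ```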
