Sample records for sample size conclusions

  1. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
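
    As a concrete illustration of the determinants the abstract lists (α, power, variance, effect size), here is a minimal sketch of the standard normal-approximation formula for comparing two means; the function name and the numbers are illustrative, not taken from the paper.

    ```python
    from statistics import NormalDist

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Approximate n per group for comparing two means (normal approximation)."""
        z = NormalDist().inv_cdf
        z_alpha = z(1 - alpha / 2)   # two-sided Type 1 error rate
        z_beta = z(power)            # power = 1 - beta
        return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

    # Detect a 5-unit difference, SD 10, alpha = 0.05, power = 80%
    print(round(n_per_group(delta=5, sd=10)))  # about 63 per group
    ```

    Halving the detectable difference quadruples the required n, which is the "larger samples for smaller differences" point made above.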

  2. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether results published in medical papers come from a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the Type I error, the Type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula to use, we must first define the kind of study at hand: a prevalence study, an estimation of means, or a comparative study. In this paper we explain some basic statistical topics and describe four simple examples of sample size estimation.
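
    For the prevalence case mentioned in this abstract, a short sketch of the usual normal-approximation formula for estimating a proportion to within a margin d; names and values are illustrative, not drawn from the paper.

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_for_prevalence(p, d, alpha=0.05):
        """n to estimate a proportion p to within +/- d at confidence 1 - alpha."""
        z = NormalDist().inv_cdf(1 - alpha / 2)
        return ceil(z**2 * p * (1 - p) / d**2)

    # Anticipated prevalence 20%, desired margin of error +/- 5%
    print(n_for_prevalence(p=0.20, d=0.05))  # 246 subjects
    ```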

  3. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
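
    The mechanism the authors diagnose can be reproduced in a few lines: if only significant results are "published", small studies enter the record only when their observed effects are inflated, inducing a negative effect size/sample size correlation even though every simulated study shares one true effect. A sketch under those assumptions (not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_d = 0.3                 # one common underlying effect for every study
    published = []
    for _ in range(5000):
        n = rng.integers(10, 200)          # per-group sample size
        x = rng.normal(true_d, 1, n)       # treatment group
        y = rng.normal(0.0, 1, n)          # control group
        d = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
        if abs(d * np.sqrt(n / 2)) > 1.96:      # "published" iff significant
            published.append((d, n))

    d_obs, n_obs = np.array(published).T
    print(np.corrcoef(d_obs, n_obs)[0, 1])  # clearly negative correlation
    ```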

  4. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, based on 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N > 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
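
    A linear rescaling is one simple way to picture an algebraic sample size adjustment of a chi-square fit statistic. This is an illustrative assumption, not necessarily RUMM's exact implementation.

    ```python
    def adjusted_chi_square(chi2, n, n_adjust=500):
        """Rescale an item-fit chi-square as if computed on n_adjust cases.

        Downward adjustment (n > n_adjust) damps the statistic's growth in
        large samples; upward adjustment of a small sample inflates it,
        consistent with the false misfit signals noted above.
        """
        return chi2 * n_adjust / n

    print(adjusted_chi_square(chi2=42.0, n=2500))  # large sample, damped: 8.4
    print(adjusted_chi_square(chi2=3.0, n=50))     # small sample, inflated: 30.0
    ```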

  5. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

    Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to its nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04–2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work, laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  6. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
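
    The contrast between simple random sampling and optimally allocated stratified sampling can be sketched as follows; the stratum weights, means, and SDs are hypothetical, not values from the study.

    ```python
    import numpy as np

    # Hypothetical strata for one field: area fractions, mean moisture (%), SD
    W = np.array([0.5, 0.3, 0.2])
    M = np.array([10.0, 18.0, 30.0])
    S = np.array([2.0, 5.0, 9.0])
    n = 30                                   # total samples available

    # Neyman (optimal) allocation: n_h proportional to W_h * S_h
    n_h = n * (W * S) / (W * S).sum()

    # Variance of the estimated field mean under each design
    var_strat = (W * S).sum() ** 2 / n       # stratified, optimal allocation
    mu = (W * M).sum()
    S2_total = (W * (S**2 + (M - mu) ** 2)).sum()
    var_srs = S2_total / n                   # simple random sampling

    print(n_h.round(1))                      # about [7.0 10.5 12.6]
    print(var_strat, var_srs)                # stratified variance is far smaller
    ```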

  7. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
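
    A toy illustration of the two sampling schemes being compared (an illustrative implementation, not the authors' code): fixed sampling indexes every s-th k-mer starting position, while minimizer sampling keeps the smallest k-mer in each window of w consecutive k-mers.

    ```python
    def fixed_sampling(seq, k, s):
        """Index every s-th k-mer starting position."""
        return {i for i in range(0, len(seq) - k + 1, s)}

    def minimizer_sampling(seq, k, w):
        """Index the lexicographically smallest k-mer in each window of w k-mers."""
        chosen = set()
        for start in range(len(seq) - k - w + 2):
            window = range(start, start + w)
            chosen.add(min(window, key=lambda i: seq[i:i + k]))
        return chosen

    seq = "ACGTACGTGGTACCAGTTTACG"
    print(sorted(fixed_sampling(seq, k=4, s=4)))      # evenly spaced positions
    print(sorted(minimizer_sampling(seq, k=4, w=4)))  # overlapping windows share picks
    ```

    Because adjacent windows often share the same minimizer, minimizer sampling typically keeps more positions than fixed sampling at the same parameters, matching the space difference reported above.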

  8. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  9. Potential Reporting Bias in Neuroimaging Studies of Sex Differences.

    PubMed

    David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A

    2018-04-17

    Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To empirically evaluate this literature for evidence of excess significance bias, we searched Medline and Scopus over 10 years for published fMRI studies of the human brain that evaluated sex differences, regardless of the topic investigated. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and the number of significant foci identified. In the absence of bias, larger (better powered) studies should identify a larger number of significant foci. Across 179 papers, the median sample size was n = 32 (interquartile range 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles had titles focused on no differences (n = 2) or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstract and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% increase for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.

  10. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  11. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
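
    The "simplest approach" described in the Background reduces to one line of arithmetic; the cluster size, ICC, and unadjusted n below are illustrative values.

    ```python
    from math import ceil

    def cluster_trial_n(n_individual, cluster_size, icc):
        """Inflate an individually randomized sample size by the design effect."""
        design_effect = 1 + (cluster_size - 1) * icc
        return ceil(n_individual * design_effect)

    # 400 patients under individual randomization; clusters of 20, ICC = 0.05
    print(cluster_trial_n(400, cluster_size=20, icc=0.05))  # 780
    ```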

  12. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  13. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
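
    The bias mechanism can be demonstrated with a toy two-stage projection matrix: vital rates estimated from n individuals are fed into the matrix, and the dominant eigenvalue is compared with the true lambda. A sketch with hypothetical rates (survival 0.5, as in the high-bias scenario; not the authors' data or code):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    s_true, f_true = 0.5, 1.2   # low survival, the high-bias scenario above

    def lambda_hat(s, f):
        """Dominant eigenvalue of a toy 2-stage projection matrix."""
        A = np.array([[0.0, f],
                      [s,   s]])
        return max(np.linalg.eigvals(A).real)

    true = lambda_hat(s_true, f_true)
    for n in (10, 25, 50, 100, 500):
        est = [lambda_hat(rng.binomial(n, s_true) / n,   # survival from n trials
                          rng.poisson(f_true * n) / n)   # fecundity from n females
               for _ in range(2000)]
        print(n, round(np.mean(est) - true, 4))  # bias shrinks as n grows
    ```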

  14. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
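
    A sketch of the core idea, assuming a beta-binomial parameterized to have mean p and intra-cluster correlation rho (so rho → 0 recovers the binomial of traditional LQAS); the decision rule and numbers are illustrative, not the paper's design tables.

    ```python
    from scipy.stats import betabinom, binom

    def acceptance_prob(n, d, p, rho):
        """P(at most d flagged records among n sampled) under clustering,
        using a beta-binomial with mean p and intra-cluster correlation rho."""
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        return betabinom.cdf(d, n, a, b)

    # Decision rule: accept if <= 3 of 19 sampled records contain errors
    print(binom.cdf(3, 19, 0.2))             # simple random sampling
    print(acceptance_prob(19, 3, 0.2, 0.1))  # clustered sampling shifts the risk
    ```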

  15. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the actual cluster sizes were between 29% and 480% of those assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
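
    A common way to fold cluster-size variability into a sample size calculation is an adjusted design effect of the form DE = 1 + ((CV² + 1)·m̄ − 1)·ICC, attributed to Eldridge and colleagues; it is shown here as an illustration, not as the method used by the reviewed trials.

    ```python
    import numpy as np

    def design_effect_unequal(mean_m, cv, icc):
        """DE = 1 + ((cv**2 + 1) * mean_m - 1) * icc  (Eldridge-style)."""
        return 1 + ((cv**2 + 1) * mean_m - 1) * icc

    sizes = np.array([12, 35, 20, 64, 18, 41])   # hypothetical cluster sizes
    cv = sizes.std(ddof=1) / sizes.mean()        # cf. the median CV of 0.41 above
    print(round(cv, 2))
    print(design_effect_unequal(sizes.mean(), 0.0, 0.05))  # ignoring variability
    print(design_effect_unequal(sizes.mean(), cv, 0.05))   # accounting for it
    ```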

  16. Mass spectra features of biomass burning boiler and coal burning boiler emitted particles by single particle aerosol mass spectrometer.

    PubMed

    Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang

    2017-11-15

    In this study, single particle mass spectra signatures of both coal burning boiler and biomass burning boiler emitted particles were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of BBB (the biomass burning boiler sample) and CBB (the coal burning boiler sample) differ, with BBB peaking at smaller sizes and CBB at larger sizes. Mass spectra signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. The BBB sample mostly consists of OC and EC containing particles, with a small fraction of K-rich particles, in the size range 0.2-0.5 μm. In 0.5-1.0 μm, the BBB sample consists of EC, OC, K-rich and Al_Silicate containing particles; the CBB sample consists of EC and ECOC containing particles, while Al_Silicate (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) containing particles account for higher fractions as size increases. The similarity of single particle mass spectrum signatures between the two samples was studied by analyzing the dot product; results indicated that some of the single particle mass spectra of the two samples in the same size range are similar, which poses a challenge to future source apportionment using single particle aerosol mass spectrometers. Results of this study provide physicochemical information on important sources that contribute to particle pollution and will support source apportionment activities.
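
    For unit-normalized spectra on a common m/z grid, the dot-product similarity used here is simply cosine similarity; a minimal sketch with made-up intensities (not data from the study):

    ```python
    import numpy as np

    def spectrum_similarity(a, b):
        """Dot product of unit-normalized spectra (1.0 = identical shape)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy average spectra on a common m/z grid (made-up intensities)
    bbb = [0.1, 0.8, 0.3, 0.0, 0.5]
    cbb = [0.2, 0.7, 0.2, 0.1, 0.6]
    print(round(spectrum_similarity(bbb, cbb), 3))
    ```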

  17. Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis.

    PubMed

    Wildt, Signe; Krag, Aleksander; Gluud, Liselotte

    2011-01-01

    Objectives To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov, and the consistency between the primary outcomes, secondary outcomes and sample sizes specified in http://ClinicalTrials.gov and those in the published trials. Methods Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov, all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoints, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between the primary and secondary outcomes, sample sizes and sample size calculation data in http://ClinicalTrials.gov and in the published papers were recorded. Results 105 trials were evaluated. 66 trials (63%) were published. 30% of trials were incorrectly registered after their completion date. Several data elements of the required ICMJE data list were not filled in, with missing data concerning the primary outcome measure and sample size in 22% and 11% of cases, respectively. In 26% of the published papers, data on sample size calculations were missing, and discrepancies existed between sample size reporting in http://ClinicalTrials.gov and in the published trials. Conclusion The quality of registration of randomised controlled trials still needs improvement.

  18. Internet Pornography Use and Sexual Body Image in a Dutch Sample

    PubMed Central

    Cranney, Stephen

    2016-01-01

    Objectives A commonly attributed cause of sexual body image dissatisfaction is pornography use. This relationship has received little verification. Methods The relationship between sexual body image dissatisfaction and Internet pornography use was tested using a large-N sample of Dutch respondents. Results/Conclusion Penis size dissatisfaction is associated with pornography use. The relationship between pornography use and breast size dissatisfaction is null. These results support prior speculation and self-reports about the relationship between pornography use and sexual body image among men. These results also support a prior null finding of the relationship between breast size satisfaction for women and pornography use. PMID:26918066

  19. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.

  20. Long-term effective population size dynamics of an intensively monitored vertebrate population

    PubMed Central

    Mueller, A-K; Chakarov, N; Krüger, O; Hoffman, J I

    2016-01-01

    Long-term genetic data from intensively monitored natural populations are important for understanding how effective population sizes (Ne) can vary over time. We therefore genotyped 1622 common buzzard (Buteo buteo) chicks sampled over 12 consecutive years (2002–2013 inclusive) at 15 microsatellite loci. This data set allowed us both to compare single-sample with temporal approaches and to explore temporal patterns in the effective number of parents that produced each cohort in relation to the observed population dynamics. We found reasonable consistency between linkage disequilibrium-based single-sample and temporal estimators, particularly during the latter half of the study, but no clear relationship between annual Ne estimates and census sizes. We also documented a 14-fold increase in annual Ne between 2008 and 2011, a period during which the census size doubled, probably reflecting a combination of higher adult survival and immigration from further afield. Our study thus reveals appreciable temporal heterogeneity in the effective population size of a natural vertebrate population, confirms the need for long-term studies and cautions against drawing conclusions from a single sample. PMID:27553455

  21. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under the conditions of the proposed study.
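
    The rule of thumb itself is one line of arithmetic; the numbers below are illustrative.

    ```python
    from math import ceil

    def dropout_adjusted_n(n_complete, dropout_rate):
        """Add the number of subjects expected to drop from a sample of the
        original size (the rule of thumb described above)."""
        return n_complete + ceil(n_complete * dropout_rate)

    # 120 subjects give the desired power without dropouts; 20% expected to drop
    print(dropout_adjusted_n(n_complete=120, dropout_rate=0.20))  # 144
    ```

    Note that this differs from the common n/(1 − d) inflation; the abstract's empirical results support the simpler additive adjustment.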

  22. Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.

    2001-01-01

    For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3-year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, the relative performances of different methods can vary with population size, sample size, and the degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performance over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from those of earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.

  23. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control statistical inference over parasite rates, not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and very high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, larger sample sizes are required than in the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out for varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
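
    The first calculator's transformation can be sketched with the reverse catalytic model standard in malaria serology, SP(a) = SCR/(SCR + SRR) · (1 − e^(−(SCR + SRR)·a)): each bound of a confidence interval for SP is mapped to an SCR bound by numerical inversion with SRR held fixed. A sketch under that assumed model (not the authors' code; ages and bounds are illustrative):

    ```python
    import numpy as np

    def seroprevalence(age, scr, srr):
        """Reverse catalytic model: expected seroprevalence at a given age."""
        return scr / (scr + srr) * (1 - np.exp(-(scr + srr) * age))

    def scr_from_sp(sp, age, srr, lo=1e-6, hi=5.0):
        """Invert the model for SCR by bisection, holding SRR fixed."""
        for _ in range(60):
            mid = (lo + hi) / 2
            if seroprevalence(age, mid, srr) < sp:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Map each bound of a 95% CI for SP (at mean age 20, known SRR) to SCR
    for sp in (0.25, 0.30, 0.35):   # lower bound, point estimate, upper bound
        print(round(scr_from_sp(sp, age=20, srr=0.01), 4))
    ```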

  24. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150

  25. Results and Conclusions from the NASA Isokinetic Total Water Content Probe 2009 IRT Test

    NASA Technical Reports Server (NTRS)

    Reehorst, Andrew; Brinker, David

    2010-01-01

    The NASA Glenn Research Center has developed and tested a Total Water Content Isokinetic Sampling Probe. Since, by its nature, it is sensitive to neither cloud water particle phase nor size, it is particularly attractive for supporting super-cooled large droplet and high ice water content aircraft icing studies. The instrument comprises the Sampling Probe, Sample Flow Control, and Water Vapor Measurement subsystems. Results and conclusions are presented from probe tests in the NASA Glenn Icing Research Tunnel (IRT) during January and February 2009. The use of reference probe heat and the control of air pressure in the water vapor measurement subsystem are discussed. Several run-time error sources were found to produce identifiable signatures, which are presented and discussed. Some of the differences between Isokinetic Total Water Content Probe measurements and the IRT calibration seem to be caused by tunnel humidification and moisture/ice crystal blow-around. Droplet size, airspeed, and liquid water content effects also appear to be present in the IRT calibration. Based upon the test results, the authors provide recommendations for future Isokinetic Total Water Content Probe development.

  26. Vitamin D receptor gene and osteoporosis - author's response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Looney, J.E.; Yoon, Hyun Koo; Fischer, M.

    1996-04-01

    We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects.

  27. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    PubMed

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    To assess the reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion required substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these parameters. Core laboratories can provide reproducibility performance comparable to that commonly ascribed to individual technicians. The differences in reproducibility yield large differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.

  28. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  29. Workplace Health Promotion Implementation, Readiness, and Capacity Among Mid-Sized Employers in Low-Wage Industries: A National Survey

    PubMed Central

    Hannon, Peggy A.; Garson, Gayle; Harris, Jeffrey R.; Hammerback, Kristen; Sopher, Carrie J.; Clegg-Thorp, Catherine

    2012-01-01

    Objective To describe workplace health promotion (WHP) implementation, readiness, and capacity among mid-sized employers in low-wage industries in the United States. Methods A cross-sectional survey of a national sample of mid-sized employers (100–4,999 employees) representing five low-wage industries. Results Employers’ WHP implementation for both employees and employees’ spouses and partners was low. Readiness scales showed that employers believe WHP would benefit their employees and their companies, but they were less likely to believe that WHP was feasible for their companies. Employers’ capacity to implement WHP was very low; nearly half the sample reported no capacity. Conclusion Mid-sized employers in low-wage industries implement few WHP programs; their responses to readiness and capacity measures indicate that low capacity may be one of the principal barriers to WHP implementation. PMID:23090160

  30. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
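
    For a single indirect comparison through a common comparator, one harmonic-mean-style heuristic for the effective sample size is sketched below; it illustrates the concept only, and the paper's exact formulas may differ.

    ```python
    def effective_sample_size(n_ab, n_bc):
        """Heuristic ESS of an indirect comparison A vs C through common
        comparator B: limited by the weaker of the two direct comparisons."""
        return n_ab * n_bc / (n_ab + n_bc)

    # 2000 patients in A-vs-B trials, 500 in B-vs-C trials
    print(effective_sample_size(2000, 500))  # 400.0
    ```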

  31. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  32. Cancer-Related Fatigue and Its Associations with Depression and Anxiety: A Systematic Review

    PubMed Central

    Brown, Linda F.; Kroenke, Kurt

    2010-01-01

    Background Fatigue is an important symptom in cancer and has been shown to be associated with psychological distress. Objectives This review assesses evidence regarding associations of CRF with depression and anxiety. Methods Database searches yielded 59 studies reporting correlation coefficients or odds ratios. Results Combined sample size was 12,103. Average correlation of fatigue with depression, weighted by sample size, was 0.56 and for anxiety, 0.46. Thirty-one instruments were used to assess fatigue, suggesting a lack of consensus on measurement. Conclusion This review confirms the association of fatigue with depression and anxiety. Directionality needs to be better delineated in longitudinal studies. PMID:19855028

  33. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the correlation between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the correlation between variables, and the sample size. The conditions considered were equal sample sizes of 30, 100 and 1000 in both groups; 2, 3, 5, 10, 50 or 100 variables; and correlations between variables that were quite high, medium, or quite low. Results: Average classification accuracies from 1000 simulation runs of each condition of the trial plan are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method that can be used when the correlation between variables is high, the number of independent variables is large, and the data contain outlier values. PMID:25207065

  14. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
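
    The paper's constrained Lagrange-multiplier scheme is not reproduced here, but a simplified normal-approximation version conveys how the required number of scans grows as precision or confidence tightens and as the anticipated ED falls; the measurement standard deviation below is an assumed value:

    ```python
    # Simplified sample-size sketch: scans needed so the half-width
    # z * sd / sqrt(n) stays within precision * anticipated_ed. This is a
    # normal-approximation stand-in, not the paper's constrained scheme.
    from math import ceil
    from scipy.stats import norm

    def n_scans(anticipated_ed, sd, precision=0.05, confidence=0.95):
        z = norm.ppf(1 - (1 - confidence) / 2)
        return ceil((z * sd / (precision * anticipated_ed)) ** 2)

    print(n_scans(anticipated_ed=4.0, sd=0.5))    # lower ED -> more scans
    print(n_scans(anticipated_ed=10.0, sd=0.5))   # higher ED -> fewer scans
    ```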

  15. Observed oil and gas field size distributions: A consequence of the discovery process and prices of oil and gas

    USGS Publications Warehouse

    Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.

    1988-01-01

    If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
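
    The size-biased discovery process described above is straightforward to simulate: when each undiscovered field is found with probability proportional to its size, early discoveries over-represent large fields relative to random sampling. The lognormal parent population here is hypothetical:

    ```python
    # Discovery sampling proportional to field size (synthetic population).
    import numpy as np

    rng = np.random.default_rng(1)
    sizes = rng.lognormal(mean=3.0, sigma=1.5, size=2000)  # parent population

    remaining = list(range(len(sizes)))
    discovered = []
    for _ in range(200):                     # first 200 discoveries
        w = sizes[remaining] / sizes[remaining].sum()
        pick = rng.choice(len(remaining), p=w)
        discovered.append(remaining.pop(pick))

    print("mean size, first 200 discoveries:", sizes[discovered].mean())
    print("mean size, parent population:   ", sizes.mean())
    ```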

  16. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, and 40% juice) and a similar-samples set (100%, 95%, and 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joint rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were analyzed separately by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
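
    scipy has no Mack-Skillings implementation, but the Friedman analyses of pooling strategies (1) and (4) can be sketched on simulated rank data as follows:

    ```python
    # Friedman tests on duplicated rank data, pooled two ways.
    # Simulated ranks, not the study's data.
    import numpy as np
    from scipy.stats import friedmanchisquare

    rng = np.random.default_rng(2)
    n_panelists, n_samples = 125, 3
    # ranks[p, r, s]: rank from panelist p, replication r, sample s
    ranks = np.array([[rng.permutation(n_samples) + 1 for _ in range(2)]
                      for _ in range(n_panelists)])

    # (1) average the two replications per panelist, then test
    avg = ranks.mean(axis=1)
    print(friedmanchisquare(avg[:, 0], avg[:, 1], avg[:, 2]))

    # (4) pool both replications as 250 blocks, then test
    pooled = ranks.reshape(-1, n_samples)
    print(friedmanchisquare(pooled[:, 0], pooled[:, 1], pooled[:, 2]))
    ```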

  17. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation generally gives the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample sizes or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
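
    One common form of the rank-based transformation is the rank-based inverse normal transformation; a minimal version, using the simple (rank − 0.5)/n offsets (one of several variants), is:

    ```python
    # Rank-based inverse normal transformation of a skewed trait.
    import numpy as np
    from scipy.stats import rankdata, norm

    def rank_inverse_normal(x):
        ranks = rankdata(x)                     # ties get average ranks
        return norm.ppf((ranks - 0.5) / len(x))

    trait = np.random.default_rng(3).lognormal(size=1000)  # skewed trait
    print(rank_inverse_normal(trait)[:5])
    ```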

  18. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
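
    The same kind of calculation the interactive programs perform can be done in Python with statsmodels; the effect size and design below are illustrative, not the paper's examples:

    ```python
    # Per-group sample size for a two-group comparison of means.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.8,          # standardized difference (assumed)
        alpha=0.05, power=0.8, alternative='two-sided')
    print(round(n_per_group))     # ~26 animals per group
    ```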
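
    The same kind of calculation the interactive programs perform can be done in Python with statsmodels; the effect size and design below are illustrative, not the paper's examples:

    ```python
    # Per-group sample size for a two-group comparison of means.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.8,          # standardized difference (assumed)
        alpha=0.05, power=0.8, alternative='two-sided')
    print(round(n_per_group))     # ~26 animals per group
    ```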

  19. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
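
    Systematic random sampling, in contrast to at-convenience selection, amounts to taking every k-th nucleus from a random start; a minimal sketch with synthetic nuclear areas:

    ```python
    # Systematic random sampling of nuclear areas (synthetic data).
    import numpy as np

    def systematic_sample(items, n):
        k = len(items) // n                       # sampling interval
        start = np.random.default_rng().integers(k)
        return items[start::k][:n]

    areas = np.random.default_rng(4).gamma(shape=5, scale=10, size=1000)
    sample50 = systematic_sample(areas, 50)
    print(sample50.mean(), sample50.std(ddof=1))  # MNA and SDNA estimates
    ```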
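
    Systematic random sampling, in contrast to at-convenience selection, amounts to taking every k-th nucleus from a random start; a minimal sketch with synthetic nuclear areas:

    ```python
    # Systematic random sampling of nuclear areas (synthetic data).
    import numpy as np

    def systematic_sample(items, n):
        k = len(items) // n                       # sampling interval
        start = np.random.default_rng().integers(k)
        return items[start::k][:n]

    areas = np.random.default_rng(4).gamma(shape=5, scale=10, size=1000)
    sample50 = systematic_sample(areas, 50)
    print(sample50.mean(), sample50.std(ddof=1))  # MNA and SDNA estimates
    ```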

  20. Structural phase transitions in SrTiO3 nanoparticles

    DOE PAGES

    Zhang, Han; Liu, Sizhan; Scofield, Megan E.; ...

    2017-08-04

    Pressure dependent structural measurements on monodispersed nanoscale SrTiO3 samples with average diameters of 10 to ~80 nm were conducted to enhance the understanding of the structural phase diagram of nanoscale SrTiO3. A robust pressure independent polar structure was found in the 10 nm sample for pressures up to 13 GPa, while a size dependent cubic to tetragonal transition occurs (at P = Pc) for larger particle sizes. In conclusion, the results suggest that the growth of ~10 nm STO particles on substrates with significant lattice mismatch may maintain a polar state for a large range of strain values, possibly enabling device use.

  1. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable, so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  2. Towards Monitoring Biodiversity in Amazonian Forests: How Regular Samples Capture Meso-Scale Altitudinal Variation in 25 km2 Plots

    PubMed Central

    Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.

    2014-01-01

    Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate whether the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long-term (LTER [“PELD” in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km2/3,232,500 ha (1293×25 km2 sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation and root-mean-square error were used to measure sample representativeness, similarity and accuracy, respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using Generalized Additive Models and conditional inference trees, respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894

  3. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative when the CRT is infeasible. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
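
    The simplest design effect involved, the one for the parallel CRT with clusters of size m, is 1 + (m − 1) × ICC; a small sketch (with an assumed individually randomized sample size) shows how it inflates the required total:

    ```python
    # Parallel-CRT design effect: 1 + (m - 1) * ICC for cluster size m.
    def crt_design_effect(m, icc):
        return 1 + (m - 1) * icc

    n_individual = 400        # n ignoring clustering (assumed)
    for icc in (0.01, 0.05, 0.10):
        de = crt_design_effect(m=50, icc=icc)
        print(f"ICC={icc:.2f}: design effect {de:.2f}, "
              f"total n = {round(n_individual * de)}")
    ```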

  4. Novel Insights in the Fecal Egg Count Reduction Test for Monitoring Drug Efficacy against Soil-Transmitted Helminths in Large-Scale Treatment Programs

    PubMed Central

    Levecke, Bruno; Speybroeck, Niko; Dobson, Robert J.; Vercruysse, Jozef; Charlier, Johannes

    2011-01-01

    Background The fecal egg count reduction test (FECRT) is recommended to monitor drug efficacy against soil-transmitted helminths (STHs) in public health. However, the impact of factors inherent to study design (sample size and detection limit of the fecal egg count (FEC) method) and host-parasite interactions (mean baseline FEC and aggregation of FEC across host population) on the reliability of FECRT is poorly understood. Methodology/Principal Findings A simulation study was performed in which FECRT was assessed under varying conditions of the aforementioned factors. Classification trees were built to explore critical values for these factors required to obtain conclusive FECRT results. The outcome of this analysis was subsequently validated on five efficacy trials across Africa, Asia, and Latin America. Unsatisfactory (<85.0%) sensitivity and specificity results to detect reduced efficacy were found if sample sizes were small (<10) or if sample sizes were moderate (10–49) combined with highly aggregated FEC (k<0.25). FECRT remained inconclusive under any evaluated condition for drug efficacies ranging from 87.5% to 92.5% for a reduced-efficacy-threshold of 90% and from 92.5% to 97.5% for a threshold of 95%. The most discriminatory study design required 200 subjects independent of STH status (including subjects who are not excreting eggs). For this sample size, the detection limit of the FEC method and the level of aggregation of the FEC did not affect the interpretation of the FECRT. Only for a threshold of 90%, mean baseline FEC <150 eggs per gram of stool led to a reduced discriminatory power. Conclusions/Significance This study confirms that the interpretation of FECRT is affected by a complex interplay of factors inherent to both study design and host-parasite interactions. The results also highlight that revision of the current World Health Organization guidelines to monitor drug efficacy is indicated. We, therefore, propose novel guidelines to support future monitoring programs. PMID:22180801
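
    The kind of simulation underlying these findings can be sketched with negative binomial egg counts, where k controls aggregation; the detection-limit handling below is a crude stand-in for a real FEC method:

    ```python
    # FECRT simulation with aggregated (negative binomial) egg counts.
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_fecrt(n_subjects, mean_fec=150, k=0.25,
                       efficacy=0.90, detection=24):
        pre = rng.negative_binomial(k, k / (k + mean_fec), n_subjects)
        post_mean = mean_fec * (1 - efficacy)
        post = rng.negative_binomial(k, k / (k + post_mean), n_subjects)
        # crude detection limit: counts resolve to multiples of `detection`
        pre = (pre // detection) * detection
        post = (post // detection) * detection
        if pre.mean() == 0:
            return None
        return 100 * (1 - post.mean() / pre.mean())

    runs = [simulate_fecrt(n_subjects=200) for _ in range(1000)]
    runs = [r for r in runs if r is not None]
    print(np.mean(runs))   # should sit near the true 90% efficacy
    ```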

  5. [An investigation of the statistical power of the effect size in randomized controlled trials for the treatment of patients with type 2 diabetes mellitus using Chinese medicine].

    PubMed

    Ma, Li-Xin; Liu, Jian-Ping

    2012-01-01

    To investigate whether the power of the effect size was based on adequate sample size in randomized controlled trials (RCTs) for the treatment of patients with type 2 diabetes mellitus (T2DM) using Chinese medicine. The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms like "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. The search was limited to trials with an intervention course ≥ 3 months, in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extractions were performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 in each trial. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36%, respectively; these RCTs had a sample size > 150. For HbA1c only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample sizes used for statistical analysis were distressingly low and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.

  6. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one, exploiting explicit representations for the variances of, and the covariance between, the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
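
    Whether or not the paper's variance formulas are at hand, a candidate sample size for the MWW test with heavily tied data can always be checked by simulation; the ordinal outcome scenario below is assumed for illustration:

    ```python
    # Simulated power of the tie-corrected (mid-rank) MWW test.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(6)

    def power_mww(n_per_group, n_sim=2000, alpha=0.05):
        p_a = [0.4, 0.3, 0.2, 0.1]   # 4-level ordinal outcome, group A
        p_b = [0.2, 0.3, 0.3, 0.2]   # group B shifted upward (assumed)
        hits = 0
        for _ in range(n_sim):
            a = rng.choice(4, n_per_group, p=p_a)
            b = rng.choice(4, n_per_group, p=p_b)
            if mannwhitneyu(a, b, alternative='two-sided').pvalue < alpha:
                hits += 1
        return hits / n_sim

    print(power_mww(60))   # estimated power at n = 60 per arm
    ```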

  7. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
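
    The paper's specific modification is not reproduced here, but the unmodified Wald baseline it builds on, treating the estimated AUC like a binomial proportion, is a one-liner:

    ```python
    # Plain Wald-type interval for the AUC treated as a proportion.
    # The published modification adjusts this; only the baseline is shown.
    from math import sqrt
    from scipy.stats import norm

    def wald_auc_interval(auc, n, confidence=0.95):
        z = norm.ppf(1 - (1 - confidence) / 2)
        se = sqrt(auc * (1 - auc) / n)        # binomial-style variance
        return max(0.0, auc - z * se), min(1.0, auc + z * se)

    print(wald_auc_interval(auc=0.85, n=60))  # illustrative values
    ```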

  8. How Broad Liberal Arts Training Produces Phd Economists: Carleton's Story

    ERIC Educational Resources Information Center

    Bourne, Jenny; Grawe, Nathan D.

    2015-01-01

    Several recent studies point to strong performance in economics PhD programs of graduates from liberal arts colleges. While every undergraduate program is unique and the likelihood of selection bias combines with small sample sizes to caution against drawing strong conclusions, the authors reflect on their experience at Carleton College to…

  9. Influence of androgen receptor repeat polymorphisms on personality traits in men

    PubMed Central

    Westberg, Lars; Henningsson, Susanne; Landén, Mikael; Annerbrink, Kristina; Melke, Jonas; Nilsson, Staffan; Rosmond, Roland; Holm, Göran; Anckarsäter, Henrik; Eriksson, Elias

    2009-01-01

    Background Testosterone has been attributed importance for various aspects of behaviour. The aim of our study was to investigate the potential influence of 2 functional polymorphisms in the amino terminal of the androgen receptor on personality traits in men. Methods We assessed and genotyped 141 men born in 1944 recruited from the general population. We used 2 different instruments: the Karolinska Scales of Personality and the Temperament and Character Inventory. For replication, we similarly assessed 63 men recruited from a forensic psychiatry study group. Results In the population-recruited sample, the lengths of the androgen receptor repeats were associated with neuroticism, extraversion and self-transcendence. The association with extraversion was replicated in the independent sample. Limitations Our 2 samples differed in size; sample 1 was of moderate size and sample 2 was small. In addition, the homogeneity of sample 1 probably enhanced our ability to detect significant associations between genotype and phenotype. Conclusion Our results suggest that the repeat polymorphisms in the androgen receptor gene may influence personality traits in men. PMID:19448851

  10. A study of ferromagnetic signals in SrTiO3 nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovacs, P.; Des Roches, B.; Crandles, D. A.

    It has been suggested that ferromagnetism may be a universal feature of nanoparticles related to particle size. We study this claim for the case of commercially produced SrTiO3 nanoparticles purchased from Alfa-Aesar. Both loosely-packed nanoparticle samples and pellets formed using uniaxial pressure were studied. Both loose and pressed samples were annealed either in air or in vacuum of 5×10⁻⁶ Torr at 600, 800 and 1000°C. Then x-ray diffraction and SQUID measurements were made on the resulting samples. It was found that annealed loose powder samples always had a linear diamagnetic magnetization versus field response, while their pressed pellet counterparts exhibit a ferromagnetic hysteresis component in addition to the linear diamagnetic signal. Williamson-Hall analysis reveals that the particle size in pressed pellet samples increases with annealing temperature but does not change significantly in loose powder samples. The main conclusion is that the act of pressing pellets in a die introduces a spurious ferromagnetic signal into SQUID measurements.

  11. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution of genome sizes across the Angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968

  12. A Bayesian nonparametric method for prediction in EST analysis

    PubMed Central

    Lijoi, Antonio; Mena, Ramsés H; Prünster, Igor

    2007-01-01

    Background Expressed sequence tags (ESTs) analyses are a fundamental tool for gene identification in organisms. Given a preliminary EST sample from a certain library, several statistical prediction problems arise. In particular, it is of interest to estimate how many new genes can be detected in a future EST sample of given size and also to determine the gene discovery rate: these estimates represent the basis for deciding whether to proceed sequencing the library and, in case of a positive decision, a guideline for selecting the size of the new sample. Such information is also useful for establishing sequencing efficiency in experimental design and for measuring the degree of redundancy of an EST library. Results In this work we propose a Bayesian nonparametric approach for tackling statistical problems related to EST surveys. In particular, we provide estimates for: a) the coverage, defined as the proportion of unique genes in the library represented in the given sample of reads; b) the number of new unique genes to be observed in a future sample; c) the discovery rate of new genes as a function of the future sample size. The Bayesian nonparametric model we adopt conveys, in a statistically rigorous way, the available information into prediction. Our proposal has appealing properties over frequentist nonparametric methods, which become unstable when prediction is required for large future samples. EST libraries, previously studied with frequentist methods, are analyzed in detail. Conclusion The Bayesian nonparametric approach we undertake yields valuable tools for gene capture and prediction in EST libraries. The estimators we obtain do not feature the kind of drawbacks associated with frequentist estimators and are reliable for any size of the additional sample. PMID:17868445

  13. The effect of membrane filtration on dissolved trace element concentrations

    USGS Publications Warehouse

    Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.

    1996-01-01

    The almost universally accepted operational definition for dissolved constituents is based on processing whole-water samples through a 0.45-µm membrane filter. Results from field and laboratory experiments indicate that a number of factors associated with filtration, other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample), can produce substantial variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. These variations result from the inclusion/exclusion of colloidally-associated trace elements. Thus, 'dissolved' concentrations quantitated by analyzing filtrates generated by processing whole-water through similar pore-sized membrane filters may not be equal/comparable. As such, simple filtration through a 0.45-µm membrane filter may no longer represent an acceptable operational definition for dissolved chemical constituents. This conclusion may have important implications for environmental studies and regulatory agencies.

  14. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  15. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies and stock market prices to modern online tools such as TripAdvisor or the Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as its major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among the 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size were used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at a sample size of 45 experts (IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In this exercise, a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874
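
    The resampling scheme described above is straightforward to reproduce; the sketch below substitutes a random matrix for the real 91 × 205 score data:

    ```python
    # Concordance of the top-20 ranking vs. scorer sample size,
    # via resampling scorers with replacement.
    import numpy as np

    rng = np.random.default_rng(7)
    scores = rng.random((91, 205))       # stand-in scorer-by-idea matrix

    full_top20 = set(np.argsort(-scores.mean(axis=0))[:20])

    def median_concordance(sample_size, n_rep=500):
        overlaps = []
        for _ in range(n_rep):
            idx = rng.integers(0, 91, sample_size)   # with replacement
            top = set(np.argsort(-scores[idx].mean(axis=0))[:20])
            overlaps.append(len(top & full_top20))
        return np.median(overlaps)

    for n in (15, 30, 45, 55, 90):
        print(n, median_concordance(n))
    ```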

  16. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  17. The Contribution of Expanding Portion Sizes to the US Obesity Epidemic

    PubMed Central

    Young, Lisa R.; Nestle, Marion

    2002-01-01

    Objectives. Because larger food portions could be contributing to the increasing prevalence of overweight and obesity, this study was designed to weigh samples of marketplace foods, identify historical changes in the sizes of those foods, and compare current portions with federal standards. Methods. We obtained information about current portions from manufacturers or from direct weighing; we obtained information about past portions from manufacturers or contemporary publications. Results. Marketplace food portions have increased in size and now exceed federal standards. Portion sizes began to grow in the 1970s, rose sharply in the 1980s, and have continued in parallel with increasing body weights. Conclusions. Because energy content increases with portion size, educational and other public health efforts to address obesity should focus on the need for people to consume smaller portions. PMID:11818300

  18. Tissue recommendations for precision cancer therapy using next generation sequencing: a comprehensive single cancer center’s experiences

    PubMed Central

    Hong, Mineui; Bang, Heejin; Van Vrancken, Michael; Kim, Seungtae; Lee, Jeeyun; Park, Se Hoon; Park, Joon Oh; Park, Young Suk; Lim, Ho Yeong; Kang, Won Ki; Sun, Jong-Mu; Lee, Se Hoon; Ahn, Myung-Ju; Park, Keunchil; Kim, Duk Hwan; Lee, Seunggwan; Park, Woongyang; Kim, Kyoung-Mee

    2017-01-01

    To generate accurate next-generation sequencing (NGS) data, the amount and quality of DNA extracted are critical. We analyzed 1564 tissue samples from patients with metastatic or recurrent solid tumors submitted for NGS according to their sample size, acquisition method, organ, and fixation to propose appropriate tissue requirements. Of the 1564 tissue samples, 481 (30.8%) consisted of fresh-frozen (FF) tissue, and 1083 (69.2%) consisted of formalin-fixed paraffin-embedded (FFPE) tissue. We obtained successful NGS results in 95.9% of cases. Of the 481 FF biopsies, 262 tissue samples were from lung, and the mean fragment size was 2.4 mm. Compared to lung, GI tract tumor fragments showed a significantly lower DNA extraction failure rate (2.1% versus 6.1%, p = 0.04). For FFPE biopsy samples, the size of biopsy tissue was similar regardless of tumor type, with a mean of 0.8 × 0.3 cm, and the mean DNA yield per unstained slide was 114 ng. We obtained the highest amount of DNA from the colorectum (2353 ng) and the lowest from the hepatobiliary tract (760.3 ng), likely due to a relatively smaller biopsy size, extensive hemorrhage and necrosis, and lower tumor volume. For FFPE operation specimens, the mean specimen size was 2.0 × 1.0 cm, and the mean DNA yield per unstained slide was 1800 ng. In conclusion, we present our experience with tissue requirements for an appropriate NGS workflow: > 1 mm2 for FF biopsies, > 5 unstained slides for FFPE biopsies, and > 1 unstained slide for FFPE operation specimens, giving successful test results in 95.9% of cases. PMID:28477007

  19. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…

  1. Perceived Racism and Mental Health among Black American Adults: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Pieterse, Alex L.; Todd, Nathan R.; Neville, Helen A.; Carter, Robert T.

    2012-01-01

    The literature indicates that perceived racism tends to be associated with adverse psychological and physiological outcomes; however, findings in this area are not yet conclusive. In this meta-analysis, we systematically reviewed 66 studies (total sample size of 18,140 across studies), published between January 1996 and April 2011, on the…

  2. High-concentration zeta potential measurements using light-scattering techniques

    PubMed Central

    Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew

    2010-01-01

    Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896

  3. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  4. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    PubMed

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

    The overall objective of the project at the Iron King Mine Superfund site is to determine the level and potential risk of heavy metal exposure, emanating from the site's tailings pile, for the nearby population. To provide sufficient size-fractioned dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractioned dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments, as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing the tailings sample produced by the dust generator with that collected by a MOUDI sampler. As and Pb concentrations of the PM10 fraction in the MOUDI sample were much lower than in tailings samples produced by the dust generator, indicating a dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates the conclusion of our previous Pb and Sr isotope study. As and Pb showed a similar mass fraction from Iron King for all sites, suggesting that As and Pb have the same major emission source. Further validation of this simple source apportionment method, based on other elements and sites, is needed.
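
    The single-element, two-source calculation reduces to one linear mixing equation; all concentrations below are illustrative, not the site's measurements:

    ```python
    # Two-source mixing: solve c_pm10 = f*c_tailings + (1-f)*c_background.
    def tailings_fraction(c_pm10, c_background, c_tailings):
        return (c_pm10 - c_background) / (c_tailings - c_background)

    # Hypothetical arsenic concentrations (ug/g):
    f = tailings_fraction(c_pm10=150.0, c_background=10.0, c_tailings=3000.0)
    print(f"estimated tailings mass fraction: {f:.3f}")
    ```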

  5. Sources of variability in collection and preparation of paint and lead-coating samples.

    PubMed

    Harper, S L; Gutknecht, W F

    2001-06-01

    Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that the data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar-and-pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similarly homogeneous particle size distributions, as required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar-and-pestle grinding, 4% had a loss of 20% or greater, with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Samples from different locations on apparently identical surfaces were found to vary by more than a factor of two both in Pb concentration (mg cm⁻² or %) and areal coating density (g cm⁻²). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg cm⁻² Pb. In conclusion, these sources of variability must be considered in the development and/or application of any sampling and analysis methodology.

  6. Gaps in Survey Data on Cancer in American Indian and Alaska Native Populations: Examination of US Population Surveys, 1960–2010

    PubMed Central

    Duran, Tinka; Stimpson, Jim P.; Smith, Corey

    2013-01-01

    Introduction Population-based data are essential for quantifying the problems and measuring the progress made by comprehensive cancer control programs. However, cancer information specific to the American Indian/Alaska Native (AI/AN) population is not readily available. We identified major population-based surveys conducted in the United States that contain questions related to cancer, documented the AI/AN sample size in these surveys, and identified gaps in the types of cancer-related information these surveys collect. Methods We conducted an Internet query of US Department of Health and Human Services agency websites and a Medline search to identify population-based surveys conducted in the United States from 1960 through 2010 that contained information about cancer. We used a data extraction form to collect information about the purpose, sample size, data collection methods, and type of information covered in the surveys. Results Seventeen survey sources met the inclusion criteria. Information on access to and use of cancer treatment, follow-up care, and barriers to receiving timely and quality care was not consistently collected. Estimates specific to the AI/AN population were often lacking because of inadequate AI/AN sample size. For example, 9 national surveys reviewed reported an AI/AN sample size smaller than 500, and 10 had an AI/AN sample percentage less than 1.5%. Conclusion Continued efforts are needed to increase the overall number of AI/AN participants in these surveys, improve the quality of information on racial/ethnic background, and collect more information on treatment and survivorship. PMID:23517582

  7. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation, and sample size estimation based on a pre-specified confidence interval width, are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
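
    For the continuous case, a simple point estimate of the within-subject coefficient of variation from a balanced one-way layout is the square root of the pooled within-subject variance divided by the grand mean; a minimal sketch:

    ```python
    # Within-subject CV from a balanced subjects-by-replicates layout.
    import numpy as np

    rng = np.random.default_rng(8)
    n_subjects, n_reps = 30, 3
    subject_means = rng.normal(100, 15, n_subjects)
    data = subject_means[:, None] + rng.normal(0, 5, (n_subjects, n_reps))

    within_var = np.mean(np.var(data, axis=1, ddof=1))  # pooled within-subject
    wscv = np.sqrt(within_var) / data.mean()
    print(f"within-subject CV ~ {wscv:.3f}")            # truth is 5/100 = 0.05
    ```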

  8. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839
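
    A minimal simulation (Python; all numbers hypothetical) illustrates the downward bias described above: adding independent sampling error to two synchronous series attenuates their observed zero-lag correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    moran = rng.normal(size=T)                  # shared environmental forcing
    pop1 = moran + 0.5 * rng.normal(size=T)     # true (unobserved) dynamics
    pop2 = moran + 0.5 * rng.normal(size=T)
    obs1 = pop1 + rng.normal(0, 1.0, size=T)    # independent sampling errors
    obs2 = pop2 + rng.normal(0, 1.0, size=T)

    print(f"true synchrony:     r = {np.corrcoef(pop1, pop2)[0, 1]:.2f}")
    print(f"observed synchrony: r = {np.corrcoef(obs1, obs2)[0, 1]:.2f} (biased down)")
    ```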

  9. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36.

    PubMed

    Walters, Stephen J

    2004-05-25

    We describe and compare four different methods for estimating sample size and power, when the primary outcome of the study is a Health Related Quality of Life (HRQoL) measure. These methods are: 1. assuming a Normal distribution and comparing two means; 2. using a non-parametric method; 3. Whitehead's method based on the proportional odds model; 4. the bootstrap. We illustrate the various methods using data from the SF-36. For simplicity, this paper deals with studies designed to compare the effectiveness (or superiority) of a new treatment compared to a standard treatment at a single point in time. The results show that if the HRQoL outcome has a limited number of discrete values (< 7) and/or the expected proportion of cases at the boundaries is high (scoring 0 or 100), then we would recommend using Whitehead's method (Method 3). Alternatively, if the HRQoL outcome has a large number of distinct values and the proportion at the boundaries is low, then we would recommend using Method 1. If a pilot or historical dataset is readily available (to estimate the shape of the distribution), then bootstrap simulation (Method 4) based on these data will provide a more accurate and reliable sample size estimate than conventional methods (Methods 1, 2, or 3). In the absence of a reliable pilot dataset, bootstrapping is not appropriate and conventional methods of sample size estimation or simulation will need to be used. Fortunately, with the increasing use of HRQoL outcomes in research, historical datasets are becoming more readily available. Strictly speaking, our results and conclusions only apply to the SF-36 outcome measure. Further empirical work is required to see whether these results hold true for other HRQoL outcomes. However, the SF-36 has many features in common with other HRQoL outcomes: multi-dimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions; we therefore believe that these results and conclusions using the SF-36 will be appropriate for other HRQoL measures.
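
    For orientation, Method 1 is essentially the standard normal-theory calculation n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2 per arm; a short Python sketch follows, with an illustrative effect size and SD rather than values taken from the SF-36 data.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Sample size per arm to detect a mean difference delta with SD sd,
        two-sided test under the Normal approximation (Method 1 above)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sd / delta) ** 2)

    # Illustrative values only: a 5-point difference with SD 20 needs ~252/arm.
    print(n_per_arm(delta=5, sd=20))
    ```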

  10. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
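
    The trade-off being modelled can be sketched in a few lines: a stricter HCV cut-point shrinks the enrolled sample but raises the screen-fail rate, so total cost depends on both. The Python sketch below uses entirely hypothetical sizes, rates and costs, not the paper's figures.

    ```python
    import math

    def trial_cost(n_enrolled, screen_fail_rate, cost_screen, cost_per_patient):
        """Total cost: screening cost for everyone screened plus per-patient
        trial cost for those enrolled."""
        n_screened = math.ceil(n_enrolled / (1 - screen_fail_rate))
        return n_screened * cost_screen + n_enrolled * cost_per_patient

    # Hypothetical: unenriched trial, 800 patients, 10% screen failures...
    print(trial_cost(800, 0.10, cost_screen=1000, cost_per_patient=20000))
    # ...versus an HCV-enriched trial, 400 patients, 45% screen failures.
    print(trial_cost(400, 0.45, cost_screen=1500, cost_per_patient=20000))
    ```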

  11. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. However, a core collection that is itself large perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity, as measured by the trait "Weight of 100 Seeds", was detected for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building a thematic core collection was defined here by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
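
    One simple way to approximate a small, maximally diverse core from pairwise genetic distances is greedy farthest-point selection, sketched below in Python; this is a generic heuristic in the spirit of the M strategy, not the authors' exact algorithm, and the distance matrix is simulated.

    ```python
    import numpy as np

    def greedy_core(dist, k):
        """Select k accessions from a symmetric pairwise-distance matrix by
        repeatedly adding the accession farthest from the current core."""
        i, j = np.unravel_index(np.argmax(dist), dist.shape)
        chosen = [int(i), int(j)]            # seed with the most distant pair
        while len(chosen) < k:
            d_to_core = dist[:, chosen].min(axis=1)
            d_to_core[chosen] = -1.0         # never re-pick a chosen accession
            chosen.append(int(np.argmax(d_to_core)))
        return chosen

    # Stand-in for marker-based distances among 100 accessions; core of 15.
    rng = np.random.default_rng(5)
    pts = rng.normal(size=(100, 8))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(greedy_core(dist, 15))
    ```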

  12. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at the lowest costs. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
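
    The intuition for why ignoring the nesting underestimates sample size: outcomes of patients treated by the same health professional are correlated, which inflates the variance of an arm mean by the design effect 1 + (m - 1) * ICC. The Python sketch below applies that inflation to a standard two-means calculation; it is a simplification of the paper's expressions, which additionally handle the crossed arm and cost weighting, and all numbers are illustrative.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm_nested(delta, sd, icc, m, alpha=0.05, power=0.80):
        """Patients per arm for a pairwise mean comparison when each health
        professional treats m patients (design effect 1 + (m - 1) * icc)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_flat = 2 * (z * sd / delta) ** 2
        return math.ceil(n_flat * (1 + (m - 1) * icc))

    # Illustrative: effect of 0.4 SD, ICC 0.05, 10 patients per professional.
    print(n_per_arm_nested(delta=0.4, sd=1.0, icc=0.05, m=10))  # ~143 vs ~99 flat
    ```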

  13. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance for only one drug even if both are equally effective, and of missing important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
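
    For orientation, the event count behind such a survival comparison can be approximated with Schoenfeld's formula, d = 4 * (z_{1-alpha/2} + z_{1-beta})^2 / ln(HR)^2. The Python sketch below gives required deaths, not patients; the trial's 750-patient figure also reflects accrual and event-rate assumptions not stated in the abstract.

    ```python
    import math
    from scipy.stats import norm

    def events_needed(hazard_ratio, alpha=0.05, power=0.80):
        """Schoenfeld approximation: deaths required for a two-sided
        log-rank test with 1:1 allocation."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(4 * z ** 2 / math.log(hazard_ratio) ** 2)

    print(events_needed(0.70))   # ~247 deaths for a 30% hazard reduction
    ```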

  14. Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine; Boraud, Thomas; Munafo, Marcus; Gonon, François

    2016-01-01

    Context There are growing concerns about effect size inflation and replication validity of association studies, but few observational investigations have explored the extent of these problems. Objective To use meta-analyses to measure the reliability of initial studies and to explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and “others”). Methods We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., with the largest sample size) and the corresponding meta-analyses. Initial studies were considered replicated if they were in nominal agreement with meta-analyses and if their effect size inflation was below 100%. Results Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas effect sizes reported by largest studies and meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that replication rate decreases with sample size and “true” effect size. We observed no evidence of association between replication rate and publication year or Impact Factor. Conclusion The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains. PMID:27336301

  15. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  16. Dental size variation in the Atapuerca-SH Middle Pleistocene hominids.

    PubMed

    Bermúdez de Castro, J M; Sarmiento, S; Cunha, E; Rosas, A; Bastir, M

    2001-09-01

    The Middle Pleistocene Atapuerca-Sima de los Huesos (SH) site in Spain has yielded the largest sample of fossil hominids so far found from a single site and belonging to the same biological population. The SH dental sample includes a total of 452 permanent and deciduous teeth, representing a minimum of 27 individuals. We present a study of the dental size variation in these hominids, based on the analysis of the mandibular permanent dentition: lateral incisors, n=29; canines, n=27; third premolars, n=30; fourth premolars, n=34; first molars, n=38; second molars, n=38. We have obtained the buccolingual diameter and the crown area (measured on occlusal photographs) of these teeth, and used the bootstrap method to assess the amount of variation in the SH sample compared with the variation of a modern human sample from the Museu Antropologico of the Universidade of Coimbra (Portugal). The SH hominids have, in general terms, a dental size variation higher than that of the modern human sample. The analysis is especially conclusive for the canines. Furthermore, we have estimated the degree of sexual dimorphism of the SH sample by obtaining male and female dental subsamples by means of sexing the large sample of SH mandibular specimens. We obtained the index of sexual dimorphism (ISD=male mean/female mean) and the values were compared with those obtained from the sexed modern human sample from Coimbra, and with data found in the literature concerning several recent human populations. In all tooth classes the ISD of the SH hominids was higher than that of modern humans, but the differences were generally modest, except for the canines, thus suggesting that canine size sexual dimorphism in Homo heidelbergensis was probably greater than that of modern humans. Since the approach of sexing fossil specimens has some obvious limitations, these results should be assessed with caution. Additional data from SH and other European Middle Pleistocene sites would be necessary to test this hypothesis. Copyright 2001 Academic Press.

  17. Summary and Synthesis: How to Present a Research Proposal.

    PubMed

    Setia, Maninder Singh; Panda, Saumya

    2017-01-01

    This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on whatever was discussed in the preceding modules. The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eyes of the reviewer of the proposal, or the background and the literature review, or the aims and objectives of the study. Two critical areas in the "methods" section that cannot be emphasized more are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should get reflected in the proposal.

  18. Summary and Synthesis: How to Present a Research Proposal

    PubMed Central

    Setia, Maninder Singh; Panda, Saumya

    2017-01-01

    This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on whatever was discussed in the preceding modules. The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eyes of the reviewer of the proposal, or the background and the literature review, or the aims and objectives of the study. Two critical areas in the “methods” section that cannot be emphasized more are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should get reflected in the proposal. PMID:28979004

  19. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study

    PubMed Central

    2013-01-01

    Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. The ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. An ROR < 1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs < 1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality in small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257
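
    The core quantity is straightforward to compute: one ROR per meta-analysis, pooled on the log scale. A minimal fixed-effect sketch in Python with hypothetical inputs (the study itself pooled RORs with a meta-analytic model allowing for heterogeneity):

    ```python
    import numpy as np

    def pooled_ror(or_small, or_large, var_log_ror):
        """Inverse-variance pooling of log ratio-of-odds-ratios; ROR < 1 means
        small trials report larger beneficial effects than large trials."""
        log_ror = np.log(np.asarray(or_small) / np.asarray(or_large))
        w = 1.0 / np.asarray(var_log_ror)
        est = np.sum(w * log_ror) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)

    # Three hypothetical meta-analyses: small-trial OR, large-trial OR, variance.
    print(pooled_ror([0.55, 0.70, 0.80], [0.95, 0.90, 1.05], [0.04, 0.02, 0.03]))
    ```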

  20. Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials

    PubMed Central

    Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda

    2013-01-01

    Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
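
    A simulation in the same spirit can be sketched by drawing relapse counts from a negative binomial via the gamma-Poisson mixture and estimating power for a given relative reduction (Python; the parameters below are illustrative, not the values fitted to the cohort, and a simple Mann-Whitney comparison stands in for the trial's analysis).

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(3)

    def nb_counts(mean, theta, size):
        """Negative binomial (variance = mean + mean**2 / theta) drawn as a
        gamma-Poisson mixture."""
        lam = rng.gamma(shape=theta, scale=mean / theta, size=size)
        return rng.poisson(lam)

    def simulated_power(n_per_arm, base_mean, reduction, theta, n_sims=1000):
        """Power of a Mann-Whitney comparison of per-patient relapse counts."""
        hits = 0
        for _ in range(n_sims):
            ctrl = nb_counts(base_mean, theta, n_per_arm)
            trt = nb_counts(base_mean * (1 - reduction), theta, n_per_arm)
            hits += mannwhitneyu(ctrl, trt, alternative="two-sided").pvalue < 0.05
        return hits / n_sims

    # Illustrative 2-year trial: mean 1.2 relapses, 50% reduction, 70 per arm.
    print(simulated_power(70, base_mean=1.2, reduction=0.5, theta=1.0))
    ```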

  1. Zipf's law and city size distribution: A survey of the literature and future research agenda

    NASA Astrophysics Data System (ADS)

    Arshad, Sidra; Hu, Shougeng; Ashraf, Badar Nadeem

    2018-02-01

    This study provides a systematic review of the existing literature on Zipf's law for city size distribution. Existing empirical evidence suggests that Zipf's law is not always observable even for the upper-tail cities of a territory. However, the controversy with empirical findings arises due to sample selection biases, methodological weaknesses and data limitations. The hypothesis of Zipf's law is more likely to be rejected for the entire city size distribution and, in such cases, alternative distributions have been suggested. By contrast, the hypothesis is more likely to be accepted if better empirical methods are employed and cities are properly defined. The debate is still far from conclusive. In addition, we identify four emerging areas in Zipf's law and city size distribution research, including the size distribution of lower-tail cities, the size distribution of cities in sub-national regions, the alternative forms of Zipf's law, and the relationship between Zipf's law and the coherence property of the urban system.
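
    In empirical work the hypothesis is usually examined with a rank-size regression, log(rank) = a - b * log(size), where b is close to 1 under Zipf's law; the Gabaix-Ibragimov variant regresses log(rank - 1/2) to reduce small-sample bias. A short Python sketch on simulated Pareto-distributed city sizes:

    ```python
    import numpy as np

    def zipf_exponent(sizes):
        """OLS estimate of b in log(rank - 1/2) = a - b * log(size),
        with sizes sorted in descending order (Gabaix-Ibragimov correction)."""
        s = np.sort(np.asarray(sizes, dtype=float))[::-1]
        ranks = np.arange(1, len(s) + 1) - 0.5
        slope, _ = np.polyfit(np.log(s), np.log(ranks), 1)
        return -slope

    # Simulated city sizes with Pareto tail index 1 (i.e., Zipf's law holds).
    rng = np.random.default_rng(42)
    cities = (rng.pareto(1.0, size=500) + 1) * 10_000
    print(f"estimated Zipf exponent: {zipf_exponent(cities):.2f} (Zipf: ~1)")
    ```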

  2. Comparative tests of ectoparasite species richness in seabirds

    PubMed Central

    Hughes, Joseph; Page, Roderic DM

    2007-01-01

    Background The diversity of parasites attacking a host varies substantially among different host species. Understanding the factors that explain these patterns of parasite diversity is critical to identifying the ecological principles underlying biodiversity. Seabirds (Charadriiformes, Pelecaniformes and Procellariiformes) and their ectoparasitic lice (Insecta: Phthiraptera) are ideal model groups in which to study correlates of parasite species richness. We evaluated the relative importance of morphological (body size, body weight, wingspan, bill length), life-history (longevity, clutch size), ecological (population size, geographical range) and behavioural (diving versus non-diving) variables as predictors of louse diversity on 413 seabird host species. Diversity was measured at the level of louse suborder, genus, and species, and uneven sampling of hosts was controlled for using literature citations as a proxy for sampling effort. Results The only variable consistently correlated with louse diversity was host population size and, to a lesser extent, geographic range. Other variables such as clutch size, longevity, and morphological and behavioural variables including body mass showed inconsistent patterns dependent on the method of analysis. Conclusion The comparative analysis presented herein is (to our knowledge) the first to test correlates of parasite species richness in seabirds. We believe that the comparative data and phylogeny provide a valuable framework for testing future evolutionary hypotheses relating to the diversity and distribution of parasites on seabirds. PMID:18005412

  3. About Cats and Dogs...Reconsidering the Relationship between Pet Ownership and Health Related Outcomes in Community-Dwelling Elderly

    ERIC Educational Resources Information Center

    Rijken, Mieke; van Beek, Sandra

    2011-01-01

    Having a pet has been claimed to have beneficial health effects, but methodologically sound empirical studies are scarce. Small sample sizes and a lack of information about the specific type of pets involved make it difficult to draw unambiguous conclusions. We aimed to shed light on the relationship between pet ownership and several health…

  4. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method, suggest useful variations and draw conclusions on their validity, aspects that are very difficult to achieve in the laboratory.
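
    The physical relation underlying these sedimentation tests is Stokes' law: in the laminar regime a sphere of diameter d settles at v = g * d^2 * (rho_s - rho_f) / (18 * mu). A quick Python check over the simulated diameter range, assuming water and a typical mineral density (values not taken from the paper):

    ```python
    def stokes_velocity(d, rho_s=2650.0, rho_f=998.0, mu=1.0e-3, g=9.81):
        """Terminal settling velocity (m/s) of a sphere of diameter d (m);
        valid only at low particle Reynolds number (laminar regime)."""
        return g * d ** 2 * (rho_s - rho_f) / (18.0 * mu)

    for d in (2.5e-6, 20e-6, 70e-6):   # the simulated diameter range
        print(f"d = {d * 1e6:5.1f} um -> v = {stokes_velocity(d):.2e} m/s")
    ```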

  5. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    PubMed Central

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606

  6. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: ‘How capable are such stability interpretations in drawing conclusions?’ There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset itself has an appropriate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset. We used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and we counted how often this distribution was substantially different from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation, as described above), the effect of the arbitrary choice of interpretation on spatial variability results was tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations, and compared against each other as well as to the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented, since results differed between the different situations contained in the dataset. The optimal subset size is thus dependent on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
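
    The subset analysis in step (ii) is easy to emulate: draw many random subsets of a given size and count how often the Mann-Whitney test still detects the aspect difference. The Python sketch below uses synthetic stability scores, not the actual dataset or its expert-based stability classes.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(7)
    north = rng.normal(3.0, 1.0, size=120)   # synthetic scores by aspect group
    south = rng.normal(3.6, 1.0, size=120)

    def detection_rate(n_subset, n_draws=100, alpha=0.05):
        """Share of random subsets (n_subset per group) in which the aspect
        difference remains statistically significant."""
        hits = 0
        for _ in range(n_draws):
            a = rng.choice(north, n_subset, replace=False)
            b = rng.choice(south, n_subset, replace=False)
            hits += mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
        return hits / n_draws

    for n in (10, 25, 60):
        print(f"n = {n:3d} per group -> detected in {detection_rate(n):.0%}")
    ```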

  7. Got power? A systematic review of sample size adequacy in health professions education research.

    PubMed

    Cook, David A; Hatala, Rose

    2015-03-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
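
    The power statements above follow from a standard calculation; a compact Python sketch using the normal approximation, with 25 subjects per arm chosen purely for illustration (the review reports a median sample size of 25, though whether per arm or in total is not specified here):

    ```python
    from scipy.stats import norm

    def power_two_sample(n_per_arm, smd, alpha=0.05):
        """Approximate power of a two-sided two-sample z-test for a
        standardized mean difference smd."""
        se = (2.0 / n_per_arm) ** 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        return (1 - norm.cdf(z_crit - smd / se)) + norm.cdf(-z_crit - smd / se)

    for smd in (0.2, 0.5, 0.8):   # small, medium, large effects
        print(f"SMD {smd}: power ~ {power_two_sample(25, smd):.2f}")
    ```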

  8. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for the CHO (Gabor), 7% for the CHO (LG), and 3% for the TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.

  9. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme

    PubMed Central

    Bonacho dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A

    2017-01-01

    Background Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Main outcome measures Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). Results This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%). Conclusions There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. PMID:28320800

  10. The Discovery of Single-Nucleotide Polymorphisms—and Inferences about Human Demographic History

    PubMed Central

    Wakeley, John; Nielsen, Rasmus; Liu-Cordero, Shau Neen; Ardlie, Kristin

    2001-01-01

    A method of historical inference that accounts for ascertainment bias is developed and applied to single-nucleotide polymorphism (SNP) data in humans. The data consist of 84 short fragments of the genome that were selected, from three recent SNP surveys, to contain at least two polymorphisms in their respective ascertainment samples and that were then fully resequenced in 47 globally distributed individuals. Ascertainment bias is the deviation, from what would be observed in a random sample, caused either by discovery of polymorphisms in small samples or by locus selection based on levels or patterns of polymorphism. The three SNP surveys from which the present data were derived differ both in their protocols for ascertainment and in the size of the samples used for discovery. We implemented a Monte Carlo maximum-likelihood method to fit a subdivided-population model that includes a possible change in effective size at some time in the past. Incorrectly assuming that ascertainment bias does not exist causes errors in inference, affecting both estimates of migration rates and historical changes in size. Migration rates are overestimated when ascertainment bias is ignored. However, the direction of error in inferences about changes in effective population size (whether the population is inferred to be shrinking or growing) depends on whether either the numbers of SNPs per fragment or the SNP-allele frequencies are analyzed. We use the abbreviation “SDL,” for “SNP-discovered locus,” in recognition of the genomic-discovery context of SNPs. When ascertainment bias is modeled fully, both the number of SNPs per SDL and their allele frequencies support a scenario of growth in effective size in the context of a subdivided population. If subdivision is ignored, however, the hypothesis of constant effective population size cannot be rejected. An important conclusion of this work is that, in demographic or other studies, SNP data are useful only to the extent that their ascertainment can be modeled. PMID:11704929

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majid, Z.A.; Mahmud, H.; Shaaban, M.G.

    Stabilization/solidification of hazardous wastes is used to convert hazardous metal hydroxide waste sludge into a solid mass with better handling properties. This study investigated the pore size development of ordinary portland cement pastes containing metal hydroxide waste sludge and rice husk ash using mercury intrusion porosimetry. The effects of age and the addition of rice husk ash on pore size development and strength were studied. It was found that the pore structures of mixes changed significantly with curing age. The pore size shifted from 1,204 to 324 Å for 3-day old cement paste, and from 956 to 263 Å for a 7-day old sample. A reduction in pore size distribution for different curing ages was also observed in the other mixtures. From this limited study, no conclusion could be made as to any correlation between strength development and porosity. 10 refs., 6 figs., 3 tabs.

  12. Understanding the City Size Wage Gap

    PubMed Central

    Baum-Snow, Nathaniel; Pavan, Ronni

    2013-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates. PMID:24273347

  13. Understanding the City Size Wage Gap.

    PubMed

    Baum-Snow, Nathaniel; Pavan, Ronni

    2012-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates.

  14. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    PubMed Central

    Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course. PMID:24358380

  15. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    PubMed

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  16. Technical advances in flow cytometry-based diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria

    PubMed Central

    Correia, Rodolfo Patussi; Bento, Laiz Cameirão; Bortolucci, Ana Carolina Apelle; Alexandre, Anderson Marega; Vaz, Andressa da Costa; Schimidell, Daniela; Pedro, Eduardo de Carvalho; Perin, Fabricio Simões; Nozawa, Sonia Tsukasa; Mendes, Cláudio Ernesto Albers; Barroso, Rodrigo de Souza; Bacal, Nydia Strachman

    2016-01-01

    Objective: To discuss the implementation of technical advances in laboratory diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria for validation of high-sensitivity flow cytometry protocols. Methods: A retrospective study based on analysis of laboratory data from 745 patient samples submitted to flow cytometry for diagnosis and/or monitoring of paroxysmal nocturnal hemoglobinuria. Results: Implementation of technical advances reduced test costs and improved flow cytometry resolution for paroxysmal nocturnal hemoglobinuria clone detection. Conclusion: High-sensitivity flow cytometry allowed more sensitive determination of paroxysmal nocturnal hemoglobinuria clone type and size, particularly in samples with small clones. PMID:27759825

  17. Closantel nano-encapsulated polyvinyl alcohol (PVA) solutions.

    PubMed

    Vega, Abraham Faustino; Medina-Torres, Luis; Calderas, Fausto; Gracia-Mora, Jesus; Bernad-Bernad, MaJosefa

    2016-08-01

    The influence of closantel on the rheological and physicochemical properties (particle size and UV-Vis absorption spectroscopy) of PVA aqueous solutions is studied here. PVA aqueous solutions (1%) were prepared with varying closantel content. Increasing the closantel content led to a reduction in the particle size of the final solutions. All the solutions were buffered at pH 7.4 and exhibited shear-thinning behavior. Furthermore, in oscillatory flow, a "solid-like" behavior was observed for the sample containing 30 μg/mL closantel, indicating a strong interaction between the dispersed and continuous phases and evidencing an interconnected network between the nanoparticles and PVA; this sample also showed the highest shear viscosity and the steepest shear-thinning slope, indicating a more intricate structure disrupted by shear. In conclusion, PVA interacts with closantel in aqueous solution and the critical concentration for closantel encapsulation by PVA was about 30 μg/mL; above this concentration, the average particle size decreased markedly, which was attributed to closantel interacting with the surface of the PVA aggregates and thus, to some extent, avoiding direct polymer-polymer interaction.

  18. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  19. Discrepancies Between Plastic Surgery Meeting Abstracts and Subsequent Full-Length Manuscript Publications.

    PubMed

    Denadai, Rafael; Araujo, Gustavo Henrique; Pinho, Andre Silveira; Denadai, Rodrigo; Samartine, Hugo; Raposo-Amaral, Cassio Eduardo

    2016-10-01

    The purpose of this bibliometric study was to assess the discrepancies between plastic surgery meeting abstracts and subsequent full-length manuscript publications. Abstracts presented at the Brazilian Congress of Plastic Surgery from 2010 to 2011 were compared with matching manuscript publications. Discrepancies between the abstract and the subsequent manuscript were categorized as major (changes in the purpose, methods, study design, sample size, statistical analysis, results, and conclusions) and minor (changes in the title and authorship) variations. The overall discrepancy rate was 96%, with at least one major (76%) and/or minor (96%) variation. There were inconsistencies between the study title (56%), authorship (92%), purpose (6%), methods (20%), study design (36%), sample size (51.2%), statistical analysis (14%), results (20%), and conclusions (8%) of manuscripts compared with their corresponding meeting abstracts. As changes occur before manuscript publication of plastic surgery meeting abstracts, caution should be exercised in referencing abstracts or altering surgical practices based on abstracts' content. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  20. Choosing an Optimal Database for Protein Identification from Tandem Mass Spectrometry Data.

    PubMed

    Kumar, Dhirendra; Yadav, Amit Kumar; Dash, Debasis

    2017-01-01

    Database searching is the preferred method for protein identification from digital spectra of mass-to-charge ratios (m/z) detected for protein samples by mass spectrometers. The search database is one of the major factors influencing which proteins are discovered in the sample and thus the biological conclusions derived. In most cases the choice of search database is arbitrary. Here we describe common search databases used in proteomic studies and their impact on the final list of identified proteins. We also elaborate upon factors, such as the composition and size of the search database, that can influence the protein identification process. In conclusion, we suggest that the choice of database depends on the type of inferences to be derived from the proteomics data. However, making the additional effort to build a compact and concise database for a targeted question should generally be rewarding in achieving confident protein identifications.

  1. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    PubMed Central

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

    Purpose Age at menopause, a major marker in reproductive life, may bias results in the evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some postmenopausal risk factor associations were weaker in the expanded model (height and alcohol use), the estimate for hormone therapy use is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making results applicable to women with hysterectomy, and reduces bias. PMID:21441037

  2. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and intra-cluster correlation (ICC) estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small-sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445

  3. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; Johnson, Christopher H.; Harding, Richard L.; Kyle, Tonja; Saavedra, Pedro; Frazier, Emma L.; Beer, Linda; Mattson, Christine L.; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small unequal-weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring of trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
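
    A bare-bones sketch of the weighting scheme outlined above: a base weight from the three selection stages, inflated for facility- and patient-level nonresponse and deflated for multiplicity. All probabilities and rates below are hypothetical placeholders, not MMP values.

      # Three-stage survey weight with nonresponse and multiplicity
      # adjustments, following the general scheme described above.

      def patient_weight(p_jurisdiction, p_facility, p_patient,
                         facility_response_rate, patient_response_rate,
                         n_care_facilities):
          # Base weight: inverse of the overall selection probability.
          base = 1.0 / (p_jurisdiction * p_facility * p_patient)
          # Nonresponse adjustment: responders also represent nonresponders.
          nonresponse = 1.0 / (facility_response_rate * patient_response_rate)
          # Multiplicity adjustment: a patient attending k facilities had
          # k chances of selection, so the weight is divided by k.
          return base * nonresponse / n_care_facilities

      w = patient_weight(p_jurisdiction=0.5, p_facility=0.1, p_patient=0.05,
                         facility_response_rate=0.85, patient_response_rate=0.75,
                         n_care_facilities=2)
      print(f"final analysis weight: {w:.1f}")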

  4. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    Current practice for seeking genomically favorable patients in randomized controlled clinical trials relies on genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% difference is of concern.
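
    As a rough companion to the rule of thumb above, the chance of a given absolute imbalance in marker prevalence between arms can be simulated directly. This sketch uses simple randomization and an assumed 30% marker prevalence; it illustrates the idea and does not reproduce the authors' own calculation.

      import numpy as np

      rng = np.random.default_rng(7)

      def prob_imbalance(n_per_arm, prevalence, diff, n_sim=100_000):
          """P(|prevalence_A - prevalence_B| >= diff) under simple randomization."""
          a = rng.binomial(n_per_arm, prevalence, n_sim) / n_per_arm
          b = rng.binomial(n_per_arm, prevalence, n_sim) / n_per_arm
          return np.mean(np.abs(a - b) >= diff)

      # Subgroup sizes and imbalance thresholds mirroring the text above.
      for n, d in [(100, 0.20), (150, 0.15), (350, 0.10), (1350, 0.05)]:
          print(f"n={n:5d}, diff>={d:.2f}: P = {prob_imbalance(n, 0.30, d):.4f}")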

  5. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than any actual differences that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.
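
    The Monte Carlo exercise described here can be sketched compactly: draw fatigue lives from an assumed two-parameter Weibull distribution, estimate the L10 life (the life by which 10 percent of specimens have failed) from samples of a given size, and observe how the scatter of the estimate shrinks as the sample grows. The Weibull parameters are illustrative assumptions, not values from the report.

      import numpy as np

      rng = np.random.default_rng(42)
      beta, eta = 1.5, 1.0e6   # assumed Weibull slope and characteristic life (cycles)

      def l10_estimates(n_specimens, n_trials=2000):
          """Empirical 10th-percentile (L10) lives from repeated samples of size n."""
          lives = eta * rng.weibull(beta, size=(n_trials, n_specimens))
          return np.percentile(lives, 10, axis=1)

      true_l10 = eta * (-np.log(0.9)) ** (1.0 / beta)
      for n in (10, 30, 100):
          lo, hi = np.percentile(l10_estimates(n), [5, 95])
          print(f"n={n:3d}: 90% of L10 estimates in "
                f"[{lo / true_l10:.2f}, {hi / true_l10:.2f}] x true L10")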

  6. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than any actual differences that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.

  7. Plasma asymmetric dimethylarginine, L-arginine and Left Ventricular Structure and Function in a Community-based Sample

    PubMed Central

    Lieb, Wolfgang; Benndorf, Ralf A.; Benjamin, Emelia J.; Sullivan, Lisa M.; Maas, Renke; Xanthakis, Vanessa; Schwedhelm, Edzard; Aragam, Jayashri; Schulze, Friedrich; Böger, Rainer H.; Vasan, Ramachandran S.

    2009-01-01

    Objective Increasing evidence indicates that cardiac structure and function are modulated by the nitric oxide (NO) system. Elevated plasma concentrations of asymmetric dimethylarginine (ADMA; a competitive inhibitor of NO synthase) have been reported in patients with end-stage renal disease. It is unclear whether circulating ADMA and L-arginine levels are related to cardiac structure and function in the general population. Methods We related plasma ADMA and L-arginine (the amino acid precursor of NO) to echocardiographic left ventricular (LV) mass, left atrial (LA) size and fractional shortening (FS) using multivariable linear regression analyses in 1,919 Framingham Offspring Study participants (mean age 57 years, 58% women). Results Overall, neither ADMA nor L-arginine, nor their ratio, was associated with LV mass, LA size or FS in multivariable models (p>0.10 for all). However, we observed effect modification by obesity of the relation between ADMA and LA size (p for interaction=0.04): ADMA was positively related to LA size in obese individuals (adjusted p=0.0004 for trend across ADMA quartiles) but not in non-obese people. Conclusion In our large community-based sample, plasma ADMA and L-arginine concentrations were not related to cardiac structure or function. The observation of a positive relation between LA size and ADMA in obese individuals warrants confirmation. PMID:18829028

  8. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
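
    To make the unweighted cluster summary method concrete, here is a simulation sketch of a two-period cluster crossover trial with a binary outcome, loosely echoing the intensive-care setting (large clusters, small intra-cluster correlation). Every parameter value is an assumption for illustration, not a figure from the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def one_trial(n_clusters=20, mean_size=500, p_control=0.10,
                    risk_diff=-0.01, sd_cluster=0.02, sd_period=0.01):
          """Unweighted cluster summary analysis: paired t-test on the
          within-cluster treated-minus-control risk differences."""
          diffs = []
          for _ in range(n_clusters):
              u = rng.normal(0.0, sd_cluster)          # shared cluster effect
              sizes = rng.poisson(mean_size, 2) + 1    # unbalanced period sizes
              p = np.clip([p_control + u + rng.normal(0.0, sd_period),
                           p_control + u + risk_diff + rng.normal(0.0, sd_period)],
                          0.001, 0.999)
              y_control = rng.binomial(sizes[0], p[0]) / sizes[0]
              y_treated = rng.binomial(sizes[1], p[1]) / sizes[1]
              diffs.append(y_treated - y_control)      # crossover cancels u
          return stats.ttest_1samp(diffs, 0.0).pvalue

      power = np.mean([one_trial() < 0.05 for _ in range(500)])
      print(f"estimated power: {power:.2f}")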

  9. Multiple Category-Lot Quality Assurance Sampling: A New Classification System with Application to Schistosomiasis Control

    PubMed Central

    Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello

    2012-01-01

    Background Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
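
    The mechanics of a three-category LQAS decision rule and its operating-characteristic (OC) curves reduce to binomial tail probabilities. In this sketch the maximum sample size matches the n = 15 design discussed above, but the two decision thresholds are illustrative assumptions rather than the paper's derived values.

      from scipy.stats import binom

      # Classify a school as low (<=10%), moderate, or high (>=50%)
      # prevalence from the count of positives X out of n samples,
      # using two thresholds d1 < d2 (hypothetical values here).
      n, d1, d2 = 15, 2, 7

      def classification_probs(p):
          """P(classified low / moderate / high) at true prevalence p."""
          p_low = binom.cdf(d1, n, p)        # X <= d1
          p_high = binom.sf(d2 - 1, n, p)    # X >= d2
          return p_low, 1.0 - p_low - p_high, p_high

      # OC curves: sweep the true prevalence, tabulate classification rates.
      for p in (0.05, 0.10, 0.30, 0.50, 0.70):
          low, mid, high = classification_probs(p)
          print(f"p={p:.2f}: low={low:.2f} moderate={mid:.2f} high={high:.2f}")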

  10. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or to benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
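
    Because the efficiency comparison above hinges on effective sample size (ESS) versus running time, a minimal illustration of how ESS is computed from importance weights may help. The target and proposal below are toy normal densities, not coalescent proposals, and the framework itself is Java; this Python sketch only mirrors the idea.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Toy importance sampler: estimate E_f[h(X)] for target f from
      # draws of a deliberately mismatched proposal g.
      target = stats.norm(0.0, 1.0)     # f
      proposal = stats.norm(0.5, 1.5)   # g

      x = proposal.rvs(size=10_000, random_state=rng)
      w = target.pdf(x) / proposal.pdf(x)          # importance weights

      estimate = np.sum(w * x**2) / np.sum(w)      # self-normalized E[X^2]
      ess = np.sum(w)**2 / np.sum(w**2)            # Kish effective sample size

      print(f"E[X^2] estimate: {estimate:.3f} (true value: 1.0)")
      print(f"effective sample size: {ess:.0f} of {x.size} draws")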

  11. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
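
    A compact simulation in the spirit of the comparison above: contrast the type I error of one-way ANOVA and the Kruskal-Wallis test when all groups are drawn from the same distinctly non-normal (lognormal) population. Group count, sample size, and distribution are assumptions chosen for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      def type1_rates(n_per_group=25, n_groups=3, n_sim=5000, alpha=0.05):
          """Rejection rates under H0: all groups share one lognormal."""
          anova_rej = kw_rej = 0
          for _ in range(n_sim):
              groups = [rng.lognormal(0.0, 1.0, n_per_group)
                        for _ in range(n_groups)]
              anova_rej += stats.f_oneway(*groups).pvalue < alpha
              kw_rej += stats.kruskal(*groups).pvalue < alpha
          return anova_rej / n_sim, kw_rej / n_sim

      a, k = type1_rates()
      print(f"ANOVA type I error: {a:.3f}, Kruskal-Wallis: {k:.3f}")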

  12. Knowledge and attitudes of nurses on a regional neurological intensive therapy unit towards brain stem death and organ donation.

    PubMed

    Davies, C

    1997-01-01

    The study aimed to explore nurses' knowledge of, and attitudes towards, brain stem death and organ donation. An ex post facto research design was used to determine relationships between variables. A 16-item questionnaire was used to collect data. Statistical analysis revealed one significant result. The limitation of the small sample size is acknowledged, and the conclusion suggests that a larger study is required.

  13. Considerations for Integrating Women into Closed Occupations in the U.S. Special Operations Forces

    DTIC Science & Technology

    2015-05-01

    effectiveness of integration. Ideally, studies adopting an experimental design (using both test and control groups) would be preferred, but sample sizes may...data -- a survey of SOF personnel and a series of focus group discussions -- collected by the research team regarding the potential challenges to... controlled positions. This report summarizes our research, analysis, and conclusions. We used a mixed-methods approach. We reviewed the current state of

  14. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  15. Title I Preschool Program in the Wake County Public School System (WCPSS): Short- and Long-Term Outcomes. Eye on Evaluation. E&R Report No.11.16

    ERIC Educational Resources Information Center

    Baenen, Nancy

    2011-01-01

    The longitudinal study of the 2005-06 preschool in Wake County Public School System (WCPSS) found short-term gains during the preschool year, but limited impact by kindergarten and no average impact by the end of 3rd grade on achievement, retention rates, special education placements, or attendance. Small sample sizes limit conclusions that can be…

  16. Risk factors for lower extremity injury: a review of the literature

    PubMed Central

    Murphy, D; Connolly, D; Beynnon, B

    2003-01-01

    Prospective studies on risk factors for lower extremity injury are reviewed. Many intrinsic and extrinsic risk factors have been implicated; however, there is little agreement with respect to the findings. Future prospective studies are needed using sufficient sample sizes of males and females, including collection of exposure data, and using established methods for identifying and classifying injury severity, to conclusively determine additional risk factors for lower extremity injury. PMID:12547739

  17. Socioeconomic status, urbanicity and risk behaviors in Mexican youth: an analysis of three cross-sectional surveys

    PubMed Central

    2011-01-01

    Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses--which are also heterogeneous--required to address this situation. PMID:22129110

  18. The Population Structure of Glossina palpalis gambiensis from Island and Continental Locations in Coastal Guinea

    PubMed Central

    Solano, Philippe; Ravel, Sophie; Bouyer, Jeremy; Camara, Mamadou; Kagbadouno, Moise S.; Dyer, Naomi; Gardes, Laetitia; Herault, Damien; Donnelly, Martin J.; De Meeûs, Thierry

    2009-01-01

    Background We undertook a population genetics analysis of the tsetse fly Glossina palpalis gambiensis, a major vector of sleeping sickness in West Africa, using microsatellite and mitochondrial DNA markers. Our aims were to estimate effective population size and the degree of isolation between coastal sites on the mainland of Guinea and Loos Islands. The sampling locations encompassed Dubréka, the area with the highest Human African Trypanosomosis (HAT) prevalence in West Africa, mangrove and savannah sites on the mainland, and two islands, Fotoba and Kassa, within the Loos archipelago. These data are discussed with respect to the feasibility and sustainability of control strategies in those sites currently experiencing, or at risk of, sleeping sickness. Principal Findings We found very low migration rates between sites except between those sampled around the Dubréka area that seems to contain a widely dispersed and panmictic population. In the Kassa island samples, various effective population size estimates all converged on surprisingly small values (10

  19. Comparative Toxicity of Size-Fractionated Airborne Particulate Matter Collected at Different Distances from an Urban Highway

    PubMed Central

    Cho, Seung-Hyun; Tong, Haiyan; McGee, John K.; Baldauf, Richard W.; Krantz, Q. Todd; Gilmour, M. Ian

    2009-01-01

    Background Epidemiologic studies have reported an association between proximity to highway traffic and increased cardiopulmonary illnesses. Objectives We investigated the effect of size-fractionated particulate matter (PM), obtained at different distances from a highway, on acute cardiopulmonary toxicity in mice. Methods We collected PM for 2 weeks in July–August 2006 using a three-stage (ultrafine, < 0.1 μm; fine, 0.1–2.5 μm; coarse, 2.5–10 μm) high-volume impactor at distances of 20 m [near road (NR)] and 275 m [far road (FR)] from an interstate highway in Raleigh, North Carolina. Samples were extracted in methanol, dried, diluted in saline, and then analyzed for chemical constituents. Female CD-1 mice received either 25 or 100 μg of each size fraction via oropharyngeal aspiration. At 4 and 18 hr postexposure, mice were assessed for pulmonary responsiveness to inhaled methacholine, biomarkers of lung injury and inflammation; ex vivo cardiac pathophysiology was assessed at 18 hr only. Results Overall chemical composition between NR and FR PM was similar, although NR samples comprised larger amounts of PM, endotoxin, and certain metals than did the FR samples. Each PM size fraction showed differences in ratios of major chemical classes. Both NR and FR coarse PM produced significant pulmonary inflammation irrespective of distance, whereas both NR and FR ultrafine PM induced cardiac ischemia–reperfusion injury. Conclusions On a comparative mass basis, the coarse and ultrafine PM affected the lung and heart, respectively. We observed no significant differences in the overall toxicity end points and chemical makeup between the NR and FR PM. The results suggest that PM of different size-specific chemistry might be associated with different toxicologic mechanisms in cardiac and pulmonary tissues. PMID:20049117

  20. Physical characterization of whole and skim dried milk powders.

    PubMed

    Pugliese, Alessandro; Cabassi, Giovanni; Chiavaro, Emma; Paciulli, Maria; Carini, Eleonora; Mucchetti, Germano

    2017-10-01

    The lack of updated knowledge about the physical properties of milk powders prompted us to evaluate selected physical properties (water activity, particle size, density, flowability, solubility and colour) of eleven skim and whole milk powders produced in Europe. These physical properties are crucial both for the management of milk powder during the final steps of the drying process and for its use as a food ingredient. In general, except for the values of water activity, the physical properties of skim and whole milk powders are very different. Particle sizes of the spray-dried skim milk powders, measured as volume and surface mean diameters, were significantly lower than those of the whole milk powders, while the roller-dried sample showed the largest particle size. For all the samples the size distribution was quite narrow, with a span value of less than 2. The loose density of skim milk powders was significantly higher than that of whole milk powders (541.36 vs 449.75 kg/m³). Flowability, measured by the Hausner ratio and Carr's index, ranged from passable to poor when evaluated according to pharmaceutical criteria. The insolubility index of the spray-dried skim and whole milk powders, measured as the weight of the sediment (from 0.5 to 34.8 mg), allowed a good discrimination of the samples. Colour analysis underlined the relevant contribution of fat content and particle size, resulting in higher lightness (L*) for skim milk powder than whole milk powder, which, on the other hand, showed higher yellowness (b*) and lower greenness (-a*). In conclusion, a detailed knowledge of the functional properties of milk powders may allow the dairy to tailor the products to the user and help the food processor make a targeted choice according to the intended use.

  1. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
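
    A toy version of the step-size arithmetic described above: derive Nyquist reference steps from an estimated spatial resolution, then apply the reported optimal scaling (axial step about half, angular step about twice the Nyquist-derived values). The resolution and field-of-view numbers are invented for illustration.

      import math

      resolution_mm = 1.6    # assumed reconstructed spatial resolution
      fov_radius_mm = 30.0   # assumed transverse EFOV radius

      # Nyquist: sample at intervals no larger than half the resolution.
      axial_step_nyquist = resolution_mm / 2.0
      # Angular step whose arc length at the EFOV edge is half the resolution.
      angular_step_nyquist = math.degrees((resolution_mm / 2.0) / fov_radius_mm)

      # Scaling reported in the abstract above.
      axial_step = 0.5 * axial_step_nyquist
      angular_step = 2.0 * angular_step_nyquist
      print(f"axial step: {axial_step:.2f} mm, angular step: {angular_step:.1f} deg")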

  2. Treated and untreated rock dust: Quartz content and physical characterization.

    PubMed

    Soo, Jhy-Charm; Lee, Taekhee; Chisholm, William P; Farcas, Daniel; Schwegler-Berry, Diane; Harper, Martin

    2016-11-01

    Rock dusting is used to prevent secondary explosions in coal mines, but inhalation of rock dusts can be hazardous if the crystalline silica (e.g., quartz) content in the respirable fraction is high. The objective of this study is to assess the quartz content and physical characteristics of four selected rock dusts, consisting of limestone or marble in both treated (such as treatment with stearic acid or stearates) and untreated forms. Four selected rock dusts (an untreated and treated limestone and an untreated and treated marble) were aerosolized in an aerosol chamber. Respirable size-selective sampling was conducted along with particle size-segregated sampling using a Micro-Orifice Uniform Deposit Impactor. Fourier Transform Infrared spectroscopy and scanning electron microscopy with energy-dispersive X-ray (SEM-EDX) analyses were used to determine quartz mass and particle morphology, respectively. Quartz percentage in the respirable dust fraction of untreated and treated forms of the limestone dust was significantly higher than in bulk samples, but since the bulk percentage was low the enrichment factor would not have resulted in any major change to conclusions regarding the contribution of respirable rock dust to the overall airborne quartz concentration. The quartz percentage in the marble dust (untreated and treated) was very low and the respirable fractions showed no enrichment. The spectra from SEM-EDX analysis for all materials were predominantly from calcium carbonate, clay, and gypsum particles. No free quartz particles were observed. The four rock dusts used in this study are representative of those presented for use in rock dusting, but the conclusions may not be applicable to all available materials.

  3. Evolution of eye size and shape in primates.

    PubMed

    Ross, Callum F; Kirk, E Christopher

    2007-03-01

    Strepsirrhine and haplorhine primates exhibit highly derived features of the visual system that distinguish them from most other mammals. Comparative data link the evolution of these visual specializations to the sequential acquisition of nocturnal visual predation in the primate stem lineage and diurnal visual predation in the anthropoid stem lineage. However, it is unclear to what extent these shifts in primate visual ecology were accompanied by changes in eye size and shape. Here we investigate the evolution of primate eye morphology using a comparative study of a large sample of mammalian eyes. Our analysis shows that primates differ from other mammals in having large eyes relative to body size and that anthropoids exhibit unusually small corneas relative to eye size and body size. The large eyes of basal primates probably evolved to improve visual acuity while maintaining high sensitivity in a nocturnal context. The reduced corneal sizes of anthropoids reflect reductions in the size of the dioptric apparatus as a means of increasing posterior nodal distance to improve visual acuity. These data support the conclusion that the origin of anthropoids was associated with a change in eye shape to improve visual acuity in the context of a diurnal predatory habitus.

  4. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. Both studies also replicated and extended the findings of Peabody and Goldberg: virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  5. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit under-estimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
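
    The two strategies compared above can be mimicked in a few lines: rescale a large-sample chi-square statistic by the ratio of sample sizes (a generic N-1 proportionality assumption on our part, not necessarily the exact adjustment function studied) versus recomputing the statistic on data of the target size. With multinomial data, drawing fresh samples of the target size is distributionally equivalent to subsampling.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)

      # Goodness of fit of categorical data to slightly wrong hypothesized
      # proportions, so that misfit accumulates with sample size.
      true_p = np.array([0.26, 0.25, 0.25, 0.24])
      hyp_p = np.array([0.25, 0.25, 0.25, 0.25])

      def chi2_at(n):
          counts = rng.multinomial(n, true_p)
          return stats.chisquare(counts, f_exp=n * hyp_p).statistic

      n_full, n_target = 21_000, 5_000
      chi2_adjusted = chi2_at(n_full) * (n_target - 1) / (n_full - 1)
      chi2_resampled = np.mean([chi2_at(n_target) for _ in range(200)])
      print(f"adjusted: {chi2_adjusted:.1f}, mean at target n: {chi2_resampled:.1f}")
      # The adjusted value tends to fall below the resampled one, i.e. the
      # adjustment exaggerates fit, consistent with the finding above.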

  6. Pooling sheep faecal samples for the assessment of anthelmintic drug efficacy using McMaster and Mini-FLOTAC in gastrointestinal strongyle and Nematodirus infection.

    PubMed

    Kenyon, Fiona; Rinaldi, Laura; McBean, Dave; Pepe, Paola; Bosco, Antonio; Melville, Lynsey; Devin, Leigh; Mitchell, Gillian; Ianniello, Davide; Charlier, Johannes; Vercruysse, Jozef; Cringoli, Giuseppe; Levecke, Bruno

    2016-07-30

    In small ruminants, faecal egg counts (FECs) and reduction in FECs (FECR) are the most common methods for assessing the intensity of gastrointestinal (GI) nematode infections and anthelmintic drug efficacy, respectively. The main limitation of these methods is the time and cost of conducting FECs on a representative number of individual animals. A cost-saving alternative would be to examine pooled faecal samples; however, little is known about whether pooling can give representative results. In the present study, we compared the FECR results obtained by an individual and a pooled examination strategy across different pool sizes and analytical sensitivities of the FEC techniques. A survey was conducted on 5 sheep farms in Scotland, where anthelmintic resistance is known to be widespread. Lambs were treated with fenbendazole (4 groups), levamisole (3 groups), ivermectin (3 groups) or moxidectin (1 group). For each group, individual faecal samples were collected from 20 animals, at baseline (D0) and 14 days after (D14) anthelmintic administration. Faecal samples were analyzed as pools of 3-5, 6-10, and 14-20 individual samples. Both individual and pooled samples were screened for GI strongyle and Nematodirus eggs using two FEC techniques with three different levels of analytical sensitivity, including Mini-FLOTAC (analytical sensitivity of 10 eggs per gram of faeces (EPG)) and McMaster (analytical sensitivity of 15 or 50 EPG). For both Mini-FLOTAC and McMaster (analytical sensitivity of 15 EPG), there was perfect agreement in classifying the efficacy of the anthelmintic as 'normal', 'doubtful' or 'reduced' regardless of pool size. When using the McMaster method (analytical sensitivity of 50 EPG), anthelmintic efficacy was often falsely classified as 'normal' or assessment was not possible due to zero FECs at D0, and this became more pronounced as the pool size increased. In conclusion, pooling ovine faecal samples holds promise as a cost-saving and efficient strategy for assessing GI nematode FECR. However, for the assessment of FECR one will need to consider the baseline FEC, pool size and analytical sensitivity of the method. Copyright © 2016. Published by Elsevier B.V.
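
    For reference, the faecal egg count reduction itself is a one-line calculation, and the efficacy classes above follow from cutoffs on it. This sketch uses WAAVP-style thresholds (>=95% normal, <90% reduced, otherwise doubtful) as an assumption, and the counts are invented.

      import numpy as np

      def fecr(pre_epg, post_epg):
          """Percent reduction in group mean faecal egg counts."""
          return 100.0 * (1.0 - np.mean(post_epg) / np.mean(pre_epg))

      def classify(reduction):
          # Illustrative cutoffs in the spirit of WAAVP-type guidance.
          if reduction >= 95.0:
              return "normal"
          return "doubtful" if reduction >= 90.0 else "reduced"

      # Hypothetical pooled counts (EPG), pools of 5 animals each,
      # at baseline (D0) and 14 days after treatment (D14).
      d0 = np.array([350, 420, 280, 510])
      d14 = np.array([30, 10, 25, 40])

      r = fecr(d0, d14)
      print(f"FECR = {r:.1f}% -> {classify(r)}")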

  7. Characterization of the porosity of human dental enamel and shear bond strength in vitro after variable etch times: initial findings using the BET method.

    PubMed

    Nguyen, Trang T; Miller, Arthur; Orellana, Maria F

    2011-07-01

    (1) To quantitatively characterize human enamel porosity and surface area in vitro before and after etching for variable etching times; and (2) to evaluate shear bond strength after variable etching times. Specifically, our goal was to identify the presence of any correlation between enamel porosity and shear bond strength. Pore surface area, pore volume, and pore size of enamel from extracted human teeth were analyzed by Brunauer-Emmett-Teller (BET) gas adsorption before and after etching for 15, 30, and 60 seconds with 37% phosphoric acid. Orthodontic brackets were bonded with Transbond to the samples with variable etch times and were subsequently applied to a single-plane lap shear testing system. Pore volume and surface area increased after etching for 15 and 30 seconds. At 60 seconds, this increase was less pronounced. In contrast, pore size appears to decrease after etching. No correlation was found between variable etching times and shear strength. Samples etched for 15, 30, and 60 seconds all demonstrated clinically viable shear strength values. The BET adsorption method could be a valuable tool in enhancing our understanding of enamel characteristics. Our findings indicate that distinct quantitative changes in enamel pore architecture are evident after etching. Further testing with a larger sample size would have to be carried out for more definitive conclusions to be made.

  8. Improving risk classification of critical illness with biomarkers: a simulation study

    PubMed Central

    Seymour, Christopher W.; Cooke, Colin R.; Wang, Zheyu; Kerr, Kathleen F.; Yealy, Donald M.; Angus, Derek C.; Rea, Thomas D.; Kahn, Jeremy M.; Pepe, Margaret S.

    2012-01-01

    Purpose Optimal triage of patients at risk of critical illness requires accurate risk prediction, yet few data exist on the performance criteria required of a potential biomarker to be clinically useful. Materials and Methods We studied an adult cohort of non-arrest, non-trauma emergency medical services encounters transported to a hospital from 2002–2006. We simulated hypothetical biomarkers increasingly associated with critical illness during hospitalization, and determined the biomarker strength and sample size necessary to improve risk classification beyond a best clinical model. Results Of 57,647 encounters, 3,121 (5.4%) were hospitalized with critical illness and 54,526 (94.6%) without critical illness. The addition of a moderate-strength biomarker (odds ratio=3.0 for critical illness) to a clinical model improved discrimination (c-statistic 0.85 vs. 0.8, p<0.01) and reclassification (net reclassification improvement=0.15, 95% CI: 0.13, 0.18), and increased the proportion of cases in the highest risk category by +8.6% (95% CI: 7.5, 10.8%). Introducing correlation between the biomarker and physiological variables in the clinical risk score did not modify the results. Statistically significant changes in net reclassification required a sample size of at least 1000 subjects. Conclusions Clinical models for triage of critical illness could be significantly improved by incorporating biomarkers, yet substantial sample sizes and biomarker strength may be required. PMID:23566734

  9. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
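
    As a pointer to how the simpler rules compared above are applied, this sketch computes Silverman's rule-of-thumb bandwidth by hand and feeds it to SciPy's gaussian_kde (whose scalar bw_method is a multiplier on the data standard deviation). The bimodal test sample is our own choice, echoing one of the density shapes studied.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      x = np.concatenate([rng.normal(-2.0, 1.0, 500),
                          rng.normal(2.0, 0.6, 500)])   # bimodal sample

      # Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5).
      sd = x.std(ddof=1)
      iqr = np.subtract(*np.percentile(x, [75, 25]))
      h = 0.9 * min(sd, iqr / 1.34) * x.size ** (-0.2)

      # gaussian_kde scales the data sd by bw_method, so pass h / sd.
      kde = stats.gaussian_kde(x, bw_method=h / sd)
      grid = np.linspace(-6.0, 6.0, 241)
      density = kde(grid)
      print(f"Silverman bandwidth: {h:.3f}, peak density: {density.max():.3f}")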

  10. Evaluation of Sampling Recommendations From the Influenza Virologic Surveillance Right Size Roadmap for Idaho

    PubMed Central

    2017-01-01

    Background The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. Objective The aim of this study was to compare Roadmap sampling recommendations with Idaho’s influenza virologic surveillance to determine implementation feasibility. Methods We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho’s influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients’ tested specimens to census estimates by age, sex, and health district residence. Results Among outpatients surveilled, Idaho’s mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Conclusions Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. PMID:28838883
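
    The rare-virus figures above are consistent with the standard binomial detection formula: to see at least one instance of a variant at prevalence p with confidence c, test n >= ln(1 - c) / ln(1 - p) specimens. A quick check, under our assumption that the Roadmap figure corresponds to p = 0.2% and c = 95%:

      import math

      def detection_sample_size(prevalence, confidence=0.95):
          """Specimens needed to detect >= 1 positive with given confidence."""
          return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

      print(detection_sample_size(0.002))
      # -> 1497, essentially the 1496 per week cited above
      # (rounding conventions account for the small difference).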

  11. Toxic Hazards Research Unit Annual Technical Report: 1975

    DTIC Science & Technology

    1975-10-01

    by different sampling rates ... Particle size distribution curves ... Effect of O3 or NO2 concentrations on rat lung weight ... Relationship...previously, consisted of female C57 black/6 mice obtained from Jackson Laboratories, male CDF (Fischer 344 derived) albino rats from Charles River...the exposure phase of the study but made at the conclusion of the 5 ppm and 0.5 ppm experiments were: blood urea nitrogen, SGOT, chloride, prothrombin

  12. Published methodological quality of randomized controlled trials does not reflect the actual quality assessed in protocols

    PubMed Central

    Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P.; Kumar, Ambuj

    2011-01-01

    Objectives To assess whether the reported methodological quality of randomized controlled trials (RCTs) reflects their actual methodological quality, and to evaluate the association of effect size (ES) and sample size with methodological quality. Study design Systematic review. Setting Retrospective analysis of all consecutive phase III RCTs published by 8 National Cancer Institute Cooperative Groups through 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Results 429 RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of the RCTs. The results showed no association between sample size and the actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratios [RHR]: 0.94, 95% CI: 0.88, 0.99) and 24% (RHR: 1.24, 95% CI: 1.05, 1.43), respectively. However, assessment based on actual quality showed no association between ES and methodological quality. Conclusion This largest study to date shows that poor reporting quality does not reflect actual, high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. PMID:22424985

  13. Design of the Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) Trial and the Impact of Uncertainty on Power

    PubMed Central

    Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.

    2014-01-01

    Background Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low-intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specification of assumptions for sample size calculations, and (5) the impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates are dependent on the form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials, as standard power calculations may tend to overestimate power. PMID:22333998
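
    For intuition about how an 800-event design maps to power, a common shortcut is the Schoenfeld approximation for a 1:1 log-rank comparison: power = Φ(|ln HR|·√d/2 − z_{1−α/2}) for d events. The sketch below back-solves an illustrative hazard ratio (≈0.785) from the quoted 800 events and 92.7% power, then averages power over a prior on log(HR) to mimic the paper's uncertainty analysis; the approximation, the hazard ratio, and the prior's spread are all assumptions for illustration, not the trial's actual design inputs:

        import numpy as np
        from scipy.stats import norm

        def logrank_power(events, hr, alpha=0.05):
            """Schoenfeld approximation: power of a 1:1 two-arm log-rank test
            given the total number of events and the true hazard ratio."""
            z_a = norm.ppf(1 - alpha / 2)
            return norm.cdf(np.abs(np.log(hr)) * np.sqrt(events) / 2 - z_a)

        print(logrank_power(800, 0.785))  # ~0.93, in line with the quoted 92.7%

        # Average power over a prior on log(HR) to reflect design uncertainty:
        rng = np.random.default_rng(1)
        hr_draws = np.exp(rng.normal(np.log(0.785), 0.05, 100_000))  # illustrative prior
        print(logrank_power(800, hr_draws).mean())  # below the fixed-HR power, as the paper reports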

  14. EXTENDING THE FLOOR AND THE CEILING FOR ASSESSMENT OF PHYSICAL FUNCTION

    PubMed Central

    Fries, James F.; Lingala, Bharathi; Siemons, Liseth; Glas, Cees A. W.; Cella, David; Hussain, Yusra N; Bruce, Bonnie; Krishnan, Eswar

    2014-01-01

    Objective The objective of the current study was to improve the assessment of physical function by improving the precision of assessment at the floor (extremely poor function) and at the ceiling (extremely good health) of the health continuum. Methods Under the NIH PROMIS program, we developed new physical function floor and ceiling items to supplement the existing item bank. Using item response theory (IRT) and the standard PROMIS methodology, we developed 30 floor items and 26 ceiling items and administered them during a 12-month prospective observational study of 737 individuals at the extremes of health status. Change over time was compared across anchor instruments and across items by means of effect sizes. Using the observed changes in scores, we back-calculated sample size requirements for the new and comparison measures. Results We studied 444 subjects with chronic illness and/or extreme age, and 293 generally fit subjects including athletes in training. IRT analyses confirmed that the new floor and ceiling items outperformed reference items (p<0.001). The estimated post-hoc sample size requirements were reduced by a factor of two to four at the floor and a factor of two at the ceiling. Conclusion Extending the range of physical function measurement can substantially improve measurement quality, can reduce sample size requirements and improve research efficiency. The paradigm shift from Disability to Physical Function includes the entire spectrum of physical function, signals improvement in the conceptual base of outcome assessment, and may be transformative as medical goals more closely approach societal goals for health. PMID:24782194

  15. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e., the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020: HR (95% confidence interval) 0.92 (0.72, 1.2), p = 0.48; n = 2500: HR 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020: HR 0.81 (0.63, 1.0), p = 0.089; n = 2500: HR 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs.
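
    As a sketch of the general mechanism (not the Secondary Prevention of Small Subcortical Strokes algorithm itself): blinded sample size re-estimation typically holds the targeted treatment effect fixed and re-computes n from an interim estimate of a nuisance parameter such as the pooled event rate. The event rates and relative risk reduction below are hypothetical:

        import math
        from scipy.stats import norm

        def n_per_arm(p_ctrl, rrr, alpha=0.05, power=0.8):
            """Per-arm sample size for comparing two proportions; the targeted
            relative risk reduction (rrr) stays fixed while the nuisance event
            rate p_ctrl can be re-estimated at an interim look."""
            p_trt = p_ctrl * (1 - rrr)
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var = p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)
            return math.ceil(z**2 * var / (p_ctrl - p_trt) ** 2)

        print(n_per_arm(0.08, 0.25))  # design-stage guess of the control event rate
        print(n_per_arm(0.06, 0.25))  # lower observed event rate -> larger required n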

  16. Effect of modulation of the particle size distributions in the direct solid analysis by total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.

    2018-02-01

    The main goal of this work was to investigate, in a systematic way, the influence of the controlled modulation of the particle size distribution of a representative solid sample on the most relevant analytical parameters of the Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) quantitative method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particles toward smaller average sizes, together with minimizing the width of the particle size distributions, produces a strong increase in accuracy and a reduction of uncertainties and detection limits for the DSA-TXRF methodology. These achievements allow the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.

  17. Hypoglossal canal size and hominid speech

    PubMed Central

    DeGusta, David; Gilbert, W. Henry; Turner, Scott P.

    1999-01-01

    The mammalian hypoglossal canal transmits the nerve that supplies the motor innervation to the tongue. Hypoglossal canal size has previously been used to date the origin of human-like speech capabilities to at least 400,000 years ago and to assign modern human vocal abilities to Neandertals. These conclusions are based on the hypothesis that the size of the hypoglossal canal is indicative of speech capabilities. This hypothesis is falsified here by the finding of numerous nonhuman primate taxa that have hypoglossal canals in the modern human size range, both absolutely and relative to oral cavity volume. Specimens of Australopithecus afarensis, Australopithecus africanus, and Australopithecus boisei also have hypoglossal canals that, both absolutely and relative to oral cavity volume, are equal in size to those of modern humans. The basis for the hypothesis that hypoglossal canal size is indicative of speech was the assumption that hypoglossal canal size is correlated with hypoglossal nerve size, which in turn is related to tongue function. This assumption is probably incorrect, as we found no apparent correlation between the size of the hypoglossal nerve, or the number of axons it contains, and the size of the hypoglossal canal in a sample of cadavers. Our data demonstrate that the size of the hypoglossal canal does not reflect vocal capabilities or language usage. Thus the date of origin for human language and the speech capabilities of Neandertals remain open questions. PMID:9990105

  18. Distribution and diversity of cytotypes in Dianthus broteri as evidenced by genome size variations

    PubMed Central

    Balao, Francisco; Casimiro-Soriguer, Ramón; Talavera, María; Herrera, Javier; Talavera, Salvador

    2009-01-01

    Background and Aims Studying the spatial distribution of cytotypes and genome size in plants can provide valuable information about the evolution of polyploid complexes. Here, the spatial distribution of cytological races and the amount of DNA in Dianthus broteri, an Iberian carnation with several ploidy levels, is investigated. Methods Chromosome counts and flow cytometry (using propidium iodide) were used to determine overall genome size (2C value) and ploidy level in 244 individuals of 25 populations. Both fresh and dried samples were investigated. Differences in 2C and 1Cx values among ploidy levels within biogeographical provinces were tested using ANOVA. Geographical correlations of genome size were also explored. Key Results Extensive variation in chromosome numbers (2n = 2x = 30, 2n = 4x = 60, 2n = 6x = 90 and 2n = 12x = 180) was detected, and the dodecaploid cytotype is reported for the first time in this genus. As regards cytotype distribution, six populations were diploid, 11 were tetraploid, three were hexaploid and five were dodecaploid. Except for one diploid population containing some triploid plants (2n = 45), the remaining populations showed a single cytotype. Diploids appeared in two disjunct areas (south-east and south-west), and so did tetraploids (although with a considerably wider geographic range). Dehydrated leaf samples provided reliable measurements of DNA content. Genome size varied significantly among some cytotypes, and also extensively within diploid (up to 1·17-fold) and tetraploid (1·22-fold) populations. Nevertheless, variations were not straightforwardly congruent with ecology and geographical distribution. Conclusions Dianthus broteri shows the highest diversity of cytotypes known to date in the genus Dianthus. Moreover, some cytotypes present remarkable internal genome size variation. The evolution of the complex is discussed in terms of autopolyploidy, with primary and secondary contact zones. PMID:19633312

  19. Effectiveness of massage therapy for shoulder pain: a systematic review and meta-analysis.

    PubMed

    Yeun, Young-Ran

    2017-05-01

    [Purpose] This study performed an effect-size analysis of massage therapy for shoulder pain. [Subjects and Methods] The database search was conducted using PubMed, CINAHL, Embase, PsycINFO, RISS, NDSL, NANET, DBpia, and KoreaMed. The meta-analysis was based on 15 studies, covering a total of 635 participants, and used a random effects model. [Results] The effect size estimate showed that massage therapy had a significant effect on reducing shoulder pain for short-term efficacy (SMD: -1.08, 95% CI: -1.51 to -0.65) and for long-term efficacy (SMD: -0.47, 95% CI: -0.71 to -0.23). [Conclusion] The findings from this review suggest that massage therapy is effective at improving shoulder pain. However, further research, especially randomized controlled trials with large sample sizes, is needed to provide evidence-based recommendations.

  20. Mindfulness Meditation for Substance Use Disorders: A Systematic Review

    PubMed Central

    Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan

    2009-01-01

    Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664

  1. Macrophage Migration Inhibitory Factor for the Early Prediction of Infarct Size

    PubMed Central

    Chan, William; White, David A.; Wang, Xin‐Yu; Bai, Ru‐Feng; Liu, Yang; Yu, Hai‐Yi; Zhang, You‐Yi; Fan, Fenling; Schneider, Hans G.; Duffy, Stephen J.; Taylor, Andrew J.; Du, Xiao‐Jun; Gao, Wei; Gao, Xiao‐Ming; Dart, Anthony M.

    2013-01-01

    Background Early diagnosis and knowledge of infarct size is critical for the management of acute myocardial infarction (MI). We evaluated whether an early elevated plasma level of macrophage migration inhibitory factor (MIF) is useful for these purposes in patients with ST-elevation MI (STEMI). Methods and Results We first studied MIF level in plasma and the myocardium in mice and determined infarct size. MI for 15 or 60 minutes resulted in a 2.5-fold increase over control values in plasma MIF levels, while MIF content in the ischemic myocardium was reduced by 50%, and plasma MIF levels correlated with myocardium-at-risk and infarct size at both time-points (P<0.01). In patients with STEMI, we obtained admission plasma samples and measured MIF, conventional troponins (TnI, TnT), high-sensitivity TnI (hsTnI), creatine kinase (CK), CK-MB, and myoglobin. Infarct size was assessed by cardiac magnetic resonance (CMR) imaging. Patients with chronic stable angina and healthy volunteers were studied as controls. Of 374 STEMI patients, 68% had elevated admission MIF levels above the highest value in healthy controls (>41.6 ng/mL), a proportion similar to hsTnI (75%) and TnI (50%), but greater than other biomarkers studied (20% to 31%, all P<0.05 versus MIF). Only admission MIF levels correlated with CMR-derived infarct size, ventricular volumes and ejection fraction (n=42, r=0.46 to 0.77, all P<0.01) at 3 days and 3 months post-MI. Conclusion Plasma MIF levels are elevated in a high proportion of STEMI patients at the first obtainable sample, and these levels are predictive of final infarct size and the extent of cardiac remodeling. PMID:24096574

  2. Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.

    PubMed

    de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff

    2016-09-01

    The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research.
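
    A minimal simulation of the comparison the authors describe, for the bivariate normal case (the population correlation of 0.6 and the replication count are arbitrary choices):

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(0)

        def sd_of_r(n, rho=0.6, reps=2000):
            """Standard deviations of Pearson r and Spearman r_s across repeated
            bivariate-normal samples of size n with population correlation rho."""
            cov = [[1, rho], [rho, 1]]
            rp, rs = [], []
            for _ in range(reps):
                x, y = rng.multivariate_normal([0, 0], cov, size=n).T
                rp.append(pearsonr(x, y)[0])
                rs.append(spearmanr(x, y)[0])
            return np.std(rp), np.std(rs)

        for n in (5, 50, 1000):
            print(n, sd_of_r(n))  # under normality, r_s is the more variable of the two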

  3. The Influence of Hot-Rolled Temperature on Plasma Nitriding Behavior of Iron-Based Alloys

    NASA Astrophysics Data System (ADS)

    El-Hossary, F. M.; Khalil, S. M.; Lotfy, Kh.; Kassem, M. A.

    2009-07-01

    Experiments were performed with the aim of studying the effect of hot-rolling temperature (600 and 900°C) on the radio frequency (rf) plasma nitriding of an Fe93Ni4Zr3 alloy. Nitriding was carried out for 10 min in a nitrogen atmosphere at a base pressure of 10⁻² mbar. Different continuous plasma processing powers of 300-550 W, in steps of 50 W or less, were applied. Nitrided hot-rolled specimens were characterized by optical microscopy (OM), X-ray diffraction (XRD) and microhardness measurements. The results reveal that the surface of rf plasma nitrided specimens hot-rolled at 600°C is characterized by a fine microstructure as a result of the high nitrogen solubility and diffusivity. Moreover, the samples hot-rolled and treated at 600°C exhibit higher microhardness values than the corresponding samples hot-rolled at 900°C. The enhancement of microhardness is due to the precipitation and predominance of new phases (γ and ɛ phases). This conclusion has been attributed mainly to the high defect densities and small grain sizes of the samples hot-rolled at 600°C. Generally, the refinement of grain size plays a dramatic role in improving the mechanical properties of the tested samples.

  4. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined. Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions.
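
    The banding of detectable effect sizes used above comes straight from the normal-approximation sample size formula for a two-group comparison, n per group = 2(z_{1−α/2} + z_power)²/Δ² for a standardized mean difference Δ. A quick check, with α and power set to the conventional 0.05 and 80% (assumed here, since the review spans many designs):

        import math
        from scipy.stats import norm

        def n_per_group(smd, alpha=0.05, power=0.8):
            """Per-group n (normal approximation) to detect a standardized mean difference."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil(2 * z**2 / smd**2)

        print(n_per_group(0.5))  # 63/group, ~126 total
        print(n_per_group(0.3))  # 175/group (~350 total) vs. the 153-person average trial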

  5. Estimating individual glomerular volume in the human kidney: clinical perspectives

    PubMed Central

    Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.

    2012-01-01

    Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554

  6. Temporal analysis of genetic structure to assess population dynamics of reintroduced swift foxes.

    PubMed

    Cullingham, Catherine I; Moehrenschlager, Axel

    2013-12-01

    Reintroductions are increasingly used to reestablish species, but a paucity of long-term postrelease monitoring has limited understanding of whether and when viable populations subsequently persist. We conducted temporal genetic analyses of reintroduced populations of swift foxes (Vulpes velox) in Canada (Alberta and Saskatchewan) and the United States (Montana). We used samples collected 4 years apart, 17 years from the initiation of the reintroduction, and 3 years after the conclusion of releases. To assess program success, we genotyped 304 hair samples, subsampled from the known range in 2000 and 2001, and 2005 and 2006, at 7 microsatellite loci. We compared diversity, effective population size, and genetic connectivity over time in each population. Diversity remained stable over time and there was evidence of increasing effective population size. We determined population structure in both periods after correcting for differences in sample sizes. The geographic distribution of these populations roughly corresponded with the original release locations, which suggests the release sites had residual effects on the population structure. However, given that both reintroduction sites had similar source populations, habitat fragmentation, due to cropland, may be associated with the population structure we found. Although our results indicate growing, stable populations, future connectivity analyses are warranted to ensure both populations are not subject to negative small-population effects. Our results demonstrate the importance of multiple sampling years to fully capture population dynamics of reintroduced populations.

  7. Further statistics in dentistry. Part 4: Clinical trials 2.

    PubMed

    Petrie, A; Bulman, J S; Osborn, J F

    2002-11-23

    The principles which underlie a well-designed clinical trial were introduced in a previous paper. The trial should be controlled (to ensure that the appropriate comparisons are made), randomised (to avoid allocation bias) and, preferably, blinded (to obviate assessment bias). However, taken in isolation, these concepts will not necessarily ensure that meaningful conclusions can be drawn from the study. It is essential that the sample size is large enough to enable the effects of interest to be estimated precisely, and to detect any real treatment differences.

  8. Historically low mitochondrial DNA diversity in koalas (Phascolarctos cinereus)

    PubMed Central

    2012-01-01

    Background The koala (Phascolarctos cinereus) is an arboreal marsupial that was historically widespread across eastern Australia until the end of the 19th century when it suffered a steep population decline. Hunting for the fur trade, habitat conversion, and disease contributed to a precipitous reduction in koala population size during the late 1800s and early 1900s. To examine the effects of these reductions in population size on koala genetic diversity, we sequenced part of the hypervariable region of mitochondrial DNA (mtDNA) in koala museum specimens collected in the 19th and 20th centuries, hypothesizing that the historical samples would exhibit greater genetic diversity. Results The mtDNA haplotypes present in historical museum samples were identical to haplotypes found in modern koala populations, and no novel haplotypes were detected. Rarefaction analyses suggested that the mtDNA genetic diversity present in the museum samples was similar to that of modern koalas. Conclusions Low mtDNA diversity may have been present in koala populations prior to recent population declines. When considering management strategies, low genetic diversity of the mtDNA hypervariable region may not indicate recent inbreeding or founder events but may reflect an older historical pattern for koalas. PMID:23095716

  9. Genetics of wellbeing and its components satisfaction with life, happiness, and quality of life: a review and meta-analysis of heritability studies.

    PubMed

    Bartels, Meike

    2015-03-01

    Wellbeing is a major topic of research across several disciplines, reflecting the increasing recognition of its strong value across major domains in life. Previous twin-family studies have revealed that individual differences in wellbeing are accounted for by both genetic as well as environmental factors. A systematic literature search identified 30 twin-family studies on wellbeing or a related measure such as satisfaction with life or happiness. Review of these studies showed considerable variation in heritability estimates (ranging from 0 to 64%), which makes it difficult to draw firm conclusions regarding the genetic influences on wellbeing. For overall wellbeing twelve heritability estimates, from 10 independent studies, were meta-analyzed by computing a sample size weighted average heritability. Ten heritability estimates, derived from 9 independent samples, were used for the meta-analysis of satisfaction with life. The weighted average heritability of wellbeing, based on a sample size of 55,974 individuals, was 36% (34–38), while the weighted average heritability for satisfaction with life was 32% (29–35) (n = 47,750). With this result a more robust estimate of the relative influence of genetic effects on wellbeing is provided.
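
    The pooling step is a plain sample-size-weighted mean, h̄² = Σ nᵢh²ᵢ / Σ nᵢ. A one-line sketch with made-up inputs (the study-level estimates are not reproduced in the abstract):

        import numpy as np

        def weighted_h2(h2, n):
            """Sample-size weighted average heritability across studies."""
            return float(np.average(h2, weights=n))

        # illustrative inputs only, not the studies from the meta-analysis
        print(weighted_h2([0.30, 0.40, 0.35], [1000, 2000, 3000]))  # -> ~0.358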

  10. Motion mitigation for lung cancer patients treated with active scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg

    2015-05-15

    Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains an equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which were observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor of ∼2.6 faster than gating for this scenario. For the small spot size, however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) and gating were able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.

  11. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
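
    The mechanism behind the negative correlation is easy to reproduce: if only significant results are published, small studies can only contribute large observed effects. A toy simulation under that selection rule (the true effect, sample size range, and study count below are arbitrary choices, not the paper's data):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(42)
        es, ns = [], []
        while len(es) < 1000:                  # 1,000 "published" studies
            n = int(rng.integers(10, 200))     # per-group sample size
            x = rng.normal(0.2, 1, n)          # small true effect, d = 0.2
            y = rng.normal(0.0, 1, n)
            t, p = ttest_ind(x, y)
            if p < 0.05:                       # selection: only significant results survive
                es.append(x.mean() - y.mean())  # observed effect (SDs are 1, so ~d)
                ns.append(n)
        print(np.corrcoef(es, ns)[0, 1])       # clearly negative, echoing the reported r = -.45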

  12. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
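
    The structure of such cost-optimal allocations can be seen in the classical result for ordinary means (Yuen's trimmed-mean version in the paper swaps in trimmed variances but keeps the same form): at a fixed total cost, the variance of the group difference is minimized when n1/n2 = (σ1/σ2)·√(c2/c1), where c_i is the per-observation cost in group i. A sketch with hypothetical inputs:

        import math

        def optimal_ratio(sd1, sd2, c1, c2):
            """Cost-optimal allocation n1/n2 for estimating a difference in means
            when group i costs c_i per observation and has standard deviation sd_i."""
            return (sd1 / sd2) * math.sqrt(c2 / c1)

        # Group 1 twice as variable, group 2 four times as costly per subject:
        print(optimal_ratio(2.0, 1.0, 1.0, 4.0))  # -> 4.0: allocate 4x more to group 1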

  13. Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective

    PubMed Central

    Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke

    2015-01-01

    Objective We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Background Across surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. Method An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. Results The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, the mismatch between existing sizing specifications and hand characteristics (hand dimensions and user selection of glove size) is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. Conclusion This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. Application The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two sizes based on women's hand models and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. PMID:26169309

  14. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
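
    For intuition about how such adjusted thresholds behave, the O'Brien-Fleming-type spending function often used with Lan-DeMets monitoring allocates almost no α early and most of it near the required information size: α*(t) = 2 − 2Φ(z_{1−α/2}/√t) at information fraction t. The sketch below shows the spending function only; the actual look-by-look boundaries require the recursive numerical integration that Trial Sequential Analysis software performs:

        import math
        from scipy.stats import norm

        def obf_spent_alpha(t, alpha=0.05):
            """Cumulative type I error spent by information fraction t under an
            O'Brien-Fleming-type spending function."""
            return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / math.sqrt(t)))

        for t in (0.25, 0.5, 0.75, 1.0):
            print(t, round(obf_spent_alpha(t), 5))
        # at t = 0.25 only ~0.00009 of the 0.05 total is spent, so an interim
        # meta-analysis there faces a far stricter threshold than the naive 5%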

  15. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background Sex differences with regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest approved treatments of alcoholism, is not as effective in women as in men. This study was conducted by combining two randomized placebo-controlled clinical trials utilizing similar methodologies and personnel, in which the data were amalgamated to evaluate sex effects in a reasonably sized sample. Methods 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or the chi-square test for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (% days abstinent (PDA) d=0.36, % heavy drinking days (PHDD) d=0.36, and total standard drinks (TSD) d=0.36). Only in men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the findings of sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635

  16. Early lexical characteristics of toddlers with cleft lip and palate.

    PubMed

    Hardin-Jones, Mary; Chapman, Kathy L

    2014-11-01

    Objective: To examine development of early expressive lexicons in toddlers with cleft palate to determine whether they differ from those of noncleft toddlers in terms of size and lexical selectivity. Design: Retrospective. Patients: A total of 37 toddlers with cleft palate and 22 noncleft toddlers. Main Outcome Measures: The groups were compared for size of expressive lexicon reported on the MacArthur Communicative Development Inventory and the percentage of words beginning with obstruents and sonorants produced in a language sample. Differences between groups in the percentage of word-initial consonants correct on the language sample were also examined. Results: Although expressive vocabulary was comparable at 13 months of age for both groups, size of the lexicon for the cleft group was significantly smaller than that for the noncleft group at 21 and 27 months of age. Toddlers with cleft palate produced significantly more words beginning with sonorants and fewer words beginning with obstruents in their spontaneous speech samples. They were also less accurate when producing word-initial obstruents compared with the noncleft group. Conclusions: Toddlers with cleft palate demonstrate a slower rate of lexical development compared with their noncleft peers. The preference that toddlers with cleft palate demonstrate for words beginning with sonorants could suggest they are selecting words that begin with consonants that are easier for them to produce. An alternative explanation might be that because these children are less accurate in the production of obstruent consonants, listeners may not always identify obstruents when they occur.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, Matthias; Ovchinnikova, Olga S; Van Berkel, Gary J

    RATIONALE: Laser ablation provides for the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA) ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions about 10% of laser ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot controlled pipette. The sampling spot size area with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 μm x 160 μm) was approximately 50 times smaller than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The set-up was successfully applied for the analysis of ink on glass and paper as well as the endogenous components in Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface. CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent resistant materials.

  18. Radiopacifier Particle Size Impacts the Physical Properties of Tricalcium Silicate–based Cements

    PubMed Central

    Saghiri, Mohammad Ali; Gutmann, James L.; Orangi, Jafar; Asatourian, Armen; Sheibani, Nader

    2016-01-01

    Introduction The aim of this study was to evaluate the impact of radiopaque additive, bismuth oxide, particle size on the physical properties, and radiopacity of tricalcium silicate–based cements. Methods Six types of tricalcium silicate cement (CSC) including CSC without bismuth oxide, CSC + 10% (wt%) regular bismuth oxide (particle size 10 μm), CSC + 20% regular bismuth oxide (simulating white mineral trioxide aggregate [WMTA]) as a control, CSC + 10% nano bismuth oxide (particle size 50–80 nm), CSC + 20% nano-size bismuth oxide, and nano WMTA (a nano modification of WMTA comprising nanoparticles in the range of 40–100 nm) were prepared. Twenty-four samples from each group were divided into 4 groups and subjected to push-out, surface microhardness, radiopacity, and compressive strength tests. Data were analyzed by 1-way analysis of variance with the post hoc Tukey test. Results The push-out and compressive strength of CSC without bismuth oxide and CSC with 10% and 20% nano bismuth oxide were significantly higher than CSC with 10% or 20% regular bismuth oxide (P < .05). The surface micro-hardness of CSC without bismuth oxide and CSC with 10% regular bismuth oxide had the lowest values (P < .05). The lowest radiopacity values were seen in CSC without bismuth oxide and CSC with 10% nano bismuth oxide (P < .05). Nano WMTA samples showed the highest values for all tested properties (P < .05) except for radiopacity. Conclusions The addition of 20% nano bismuth oxide enhanced the physical properties of CSC without any significant changes in radiopacity. Regular particle-size bismuth oxide reduced the physical properties of CSC material for tested parameters. PMID:25492489

  19. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest.

  20. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, the large sample sizes meant that even trivially small effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  1. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  2. Structural Magnetic Resonance Imaging Correlates of Aggression in Psychosis: A Systematic Review and Effect Size Analysis.

    PubMed

    Widmayer, Sonja; Sowislo, Julia F; Jungfer, Hermann A; Borgwardt, Stefan; Lang, Undine E; Stieglitz, Rolf D; Huber, Christian G

    2018-01-01

    Background: Aggression in psychoses is of high clinical importance, and volumetric MRI techniques have been used to explore its structural brain correlates. Methods: We conducted a systematic review searching EMBASE, ScienceDirect, and PsycINFO through September 2017 using thesauri representing aggression, psychosis, and brain imaging. We calculated effect sizes for each study and mean Hedges' g for whole brain (WB) volume. Methodological quality was established using the PRISMA checklist (PROSPERO: CRD42014014461). Results: Our sample consisted of 12 studies with 470 patients and 155 healthy controls (HC). After subtracting subjects due to cohort overlaps, 314 patients and 96 HC remained. Qualitative analyses showed lower volumes of WB, prefrontal regions, temporal lobe, hippocampus, thalamus and cerebellum, and higher volumes of lateral ventricles, amygdala, and putamen in violent vs. non-violent people with schizophrenia. In quantitative analyses, violent persons with schizophrenia exhibited a significantly lower WB volume than HC (p = 0.004), and also lower than non-violent persons with schizophrenia (p = 0.007). Conclusions: We reviewed evidence for differences in brain volume correlates of aggression in persons with schizophrenia. Our results point toward a reduced whole brain volume in violent as opposed to non-violent persons with schizophrenia. However, considerable sample overlap in the literature, lack of reporting of potential confounding variables, and missing research on affective psychoses limit our explanatory power. To permit stronger conclusions, further studies evaluating structural correlates of aggression in psychotic disorders are needed.
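
    For reference, the effect size used here is Cohen's d scaled by a small-sample bias correction. A minimal sketch with invented whole-brain volumes (the numbers below are hypothetical placeholders, not values from the review):

        import math

        def hedges_g(m1, m2, sd1, sd2, n1, n2):
            """Hedges' g: Cohen's d multiplied by the small-sample correction J."""
            s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
            d = (m1 - m2) / s_pooled
            return d * (1 - 3 / (4 * (n1 + n2) - 9))

        # hypothetical whole-brain volumes (mL), violent patients vs. healthy controls
        print(hedges_g(1150, 1220, 110, 105, 314, 96))  # negative g = smaller WB volume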

  3. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  4. Measuring the X-shaped structures in edge-on galaxies

    NASA Astrophysics Data System (ADS)

    Savchenko, S. S.; Sotnikova, N. Ya.; Mosenkov, A. V.; Reshetnikov, V. P.; Bizyaev, D. V.

    2017-11-01

    We present a detailed photometric study of a sample of 22 edge-on galaxies with clearly visible X-shaped structures. We propose a novel method to derive geometrical parameters of these features, along with the parameters of their host galaxies based on the multi-component photometric decomposition of galactic images. To include the X-shaped structure into our photometric model, we use the imfit package, in which we implement a new component describing the X-shaped structure. This method is applied for a sample of galaxies with available Sloan Digital Sky Survey and Spitzer IRAC 3.6 μm observations. In order to explain our results, we perform realistic N-body simulations of a Milky Way-type galaxy and compare the observed and the model X-shaped structures. Our main conclusions are as follows: (1) galaxies with strong X-shaped structures reside in approximately the same local environments as field galaxies; (2) the characteristic size of the X-shaped structures is about 2/3 of the bar size; (3) there is a correlation between the X-shaped structure size and its observed flatness: the larger structures are more flattened; (4) our N-body simulations qualitatively confirm the observational results and support the bar-driven scenario for the X-shaped structure formation.

  5. Recall of health warnings in smokeless tobacco ads

    PubMed Central

    Truitt, L; Hamilton, W; Johnston, P; Bacani, C; Crawford, S; Hozik, L; Celebucki, C

    2002-01-01

    Design: Subjects examined two distracter ads and one of nine randomly assigned smokeless tobacco ads varying in health warning presence, size (8 to 18 point font), and contrast (low versus high); one version included no health warning. They were then interviewed about ad content using recall and recognition questions. Subjects: A convenience sample of 895 English speaking males aged 16–24 years old who were intercepted at seven shopping malls throughout Massachusetts during May 2000. Main outcome measures: Proven aided recall, or recall of a health warning and correct recognition of the warning message among distracters, and false recall. Results: Controlling for covariates such as education, employment/student status, and Hispanic background, proven aided recall increased significantly with font size; doubling size from 10 to 20 point font would increase recall from 63% to 76%. Although not statistically significant, recall was somewhat better for high contrast warnings. Ten per cent of the sample mistakenly recalled the warning where none existed. Conclusions: As demonstrated by substantially greater recall among ads that included health warnings over ads that had none, health warnings retained their value to consumers despite years of exposure (that can produce false recall). Larger health warnings would enhance recall, and the proposed model can be used to estimate potential recall that affects communication, perceived health risk, and behaviour modification. PMID:12034984

  6. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage.

    PubMed

    Lobo, Elena; Dalling, James W

    2014-03-07

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size and the criteria used to define gaps; and (ii) evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across-site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition.

  7. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
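    The paper's regression effective sample size is not reproduced here, but the flavour of the idea can be shown with the classical mean-estimation ESS for correlated data, n_e = 1'R⁻¹1, where R is the correlation matrix implied by the tree. The toy matrix below, for a hypothetical balanced four-taxon tree, is an assumption for illustration.

    ```python
    import numpy as np

    def ess_correlated(R):
        """Mean-estimation effective sample size n_e = 1' R^{-1} 1 for
        observations with correlation matrix R (n_e = n when R = I)."""
        ones = np.ones(R.shape[0])
        return float(ones @ np.linalg.solve(R, ones))

    # Brownian-motion correlations for a balanced 4-taxon tree of depth 1:
    # each "cherry" shares half of its history.
    R = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.5],
                  [0.0, 0.0, 0.5, 1.0]])
    print(ess_correlated(R))          # about 2.67: four tips carry less information
    print(ess_correlated(np.eye(4)))  # 4.0 for phylogenetically independent tips
    ```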

  8. Process-based selection of copula types for flood peak-volume relationships in Northwest Austria: a case study

    NASA Astrophysics Data System (ADS)

    Kohnová, Silvia; Gaál, Ladislav; Bacigál, Tomáš; Szolgay, Ján; Hlavčová, Kamila; Valent, Peter; Parajka, Juraj; Blöschl, Günter

    2016-12-01

    The case study aims at selecting optimal bivariate copula models of the relationships between flood peaks and flood volumes from a regional perspective with a particular focus on flood generation processes. Besides the traditional approach that deals with the annual maxima of flood events, the current analysis also includes all independent flood events. The target region is located in the northwest of Austria; it consists of 69 small and mid-sized catchments. On the basis of the hourly runoff data from the period 1976-2007, independent flood events were identified and assigned to one of the following three types of flood categories: synoptic floods, flash floods and snowmelt floods. Flood events in the given catchment are considered independent when they originate from different synoptic situations. Nine commonly-used copula types were fitted to the flood peak-flood volume pairs at each site. In this step, two databases were used: i) a process-based selection of all the independent flood events (three data samples at each catchment) and ii) the annual maxima of the flood peaks and the respective flood volumes regardless of the flood processes (one data sample per catchment). The goodness-of-fit of the nine copula types was examined on a regional basis throughout all the catchments. It was concluded that (1) the copula models for the flood processes are discernible locally; (2) the Clayton copula provides an unacceptable performance for all three processes as well as in the case of the annual maxima; (3) the rejection of the other copula types depends on the flood type and the sample size; (4) there are differences in the copulas with the best fits: for synoptic and flash floods, the best performance is associated with the extreme value copulas; for snowmelt floods, the Frank copula fits the best; while in the case of the annual maxima, no firm conclusion could be made due to the number of copulas with similarly acceptable overall performances. The general conclusion from this case study is that treating flood processes separately is beneficial; however, the usually available sample size in such real life studies is not sufficient to give generally valid recommendations for engineering design tasks.
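    As a sketch of the kind of fitting involved, Archimedean copula parameters are often estimated by inverting Kendall's tau; the closed forms below (Gumbel: τ = 1 − 1/θ; Clayton: τ = θ/(θ+2)) are standard, while the simulated peak-volume pairs are purely hypothetical stand-ins for the Austrian data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Hypothetical positively dependent flood peak-volume pairs (lognormal margins)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=500)
    peak, volume = np.exp(z[:, 0]), np.exp(z[:, 1])

    tau, _ = stats.kendalltau(peak, volume)
    theta_gumbel = 1.0 / (1.0 - tau)         # invert tau = 1 - 1/theta
    theta_clayton = 2.0 * tau / (1.0 - tau)  # invert tau = theta / (theta + 2)
    print(f"tau = {tau:.2f}, Gumbel theta = {theta_gumbel:.2f}, "
          f"Clayton theta = {theta_clayton:.2f}")
    ```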

  9. Sampling for pharmaceuticals and personal care products (PPCPs) and illicit drugs in wastewater systems: are your conclusions valid? A critical review.

    PubMed

    Ort, Christoph; Lawrence, Michael G; Rieckermann, Jörg; Joss, Adriano

    2010-08-15

    The analysis of 87 peer-reviewed journal articles reveals that sampling for pharmaceuticals and personal care products (PPCPs) and illicit drugs in sewers and sewage treatment plant influents is mostly carried out according to existing tradition or standard laboratory protocols. Less than 5% of all studies explicitly consider internationally acknowledged guidelines or methods for the experimental design of monitoring campaigns. In the absence of a proper analysis of the system under investigation, the importance of short-term pollutant variations was typically not addressed. Therefore, due to relatively long sampling intervals, potentially inadequate sampling modes, or insufficient documentation, it remains unclear for the majority of reviewed studies whether observed variations can be attributed to "real" variations or if they simply reflect sampling artifacts. Based on results from previous and current work, the present paper demonstrates that sampling errors can lead to overinterpretation of measured data and ultimately, wrong conclusions. Depending on catchment size, sewer type, sampling setup, substance of interest, and accuracy of analytical method, avoidable sampling artifacts can range from "not significant" to "100% or more" for different compounds even within the same study. However, in most situations sampling errors can be reduced greatly, and sampling biases can be eliminated completely, by choosing an appropriate sampling mode and frequency. This is crucial, because proper sampling will help to maximize the value of measured data for the experimental assessment of the fate of PPCPs as well as for the formulation and validation of mathematical models. The trend from reporting presence or absence of a compound in "clean" water samples toward the quantification of PPCPs in raw wastewater requires not only sophisticated analytical methods but also adapted sampling methods. With increasing accuracy of chemical analyses, inappropriate sampling increasingly represents the major source of inaccuracy. A condensed step-by-step Sampling Guide is proposed as a starting point for future studies.
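    A toy Monte Carlo, under assumed numbers, of the review's core point: when a compound enters the sewer in short pulses, time-proportional sampling at long intervals can badly misestimate the daily mean depending on where the sampling grid happens to fall.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    minutes = 24 * 60
    conc = np.zeros(minutes)
    for start in rng.choice(minutes - 5, size=40, replace=False):
        conc[start:start + 5] += rng.uniform(50.0, 150.0)  # 40 short pulses

    true_mean = conc.mean()
    for interval in (2, 15, 60, 120):  # grab-sample spacing in minutes
        worst = max(
            abs(conc[offset::interval].mean() - true_mean) / true_mean
            for offset in range(interval)
        )
        print(f"{interval:4d} min interval: worst-case relative error {worst:.0%}")
    ```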

  10. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values in analyzing the images of the counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples (examinations) need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
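    The Cells Analyzer routine itself is not described in enough detail here to reproduce; a generic way to arrive at a "customized sample size" with a 95% reliability degree and 5% relative error is the usual precision formula for a mean, n = (z·CV/RE)², sketched below with an assumed cell-area coefficient of variation.

    ```python
    from math import ceil
    from scipy.stats import norm

    def cells_needed(cv, rel_error=0.05, reliability=0.95):
        """Cells to count so the mean has relative error <= rel_error
        at the given (two-sided) reliability degree."""
        z = norm.ppf(1.0 - (1.0 - reliability) / 2.0)
        return ceil((z * cv / rel_error) ** 2)

    # Assumed 30% coefficient of variation of endothelial cell area
    print(cells_needed(cv=0.30))  # -> 139 cells, in the range the paper reports
    ```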

  11. Aggregate distribution and associated organic carbon influenced by cover crops

    NASA Astrophysics Data System (ADS)

    Barquero, Irene; García-González, Irene; Benito, Marta; Gabriel, Jose Luis; Quemada, Miguel; Hontoria, Chiquinquirá

    2013-04-01

    Replacing fallow with cover crops during the non-cropping period seems to be a good alternative to diminish soil degradation by enhancing soil aggregation and increasing organic carbon. The aim of this study was to analyze the effect of replacing fallow with different winter cover crops (CC) on the aggregate-size distribution and associated organic carbon of a Haplic Calcisol. The study area was located in Central Spain, under a semi-arid Mediterranean climate. A 4-year field trial was conducted using barley (Hordeum vulgare L.) and vetch (Vicia sativa L.) as CC during the intercropping period of maize (Zea mays L.) under irrigation. All treatments were equally irrigated and fertilized. Maize was directly sown over CC residues previously killed in early spring. Composite samples were collected at 0-5 and 5-20 cm depths in each treatment in the autumn of 2010. Soil samples were separated by wet sieving into four aggregate-size classes: large macroaggregates (>2000 µm); small macroaggregates (250-2000 µm); microaggregates (53-250 µm); and <53 µm (silt + clay size). Organic carbon associated with each aggregate-size class was measured by the Walkley-Black method. Our preliminary results showed that the aggregate-size distribution was dominated by microaggregates (48-53%) and the <53 µm fraction (40-44%), resulting in a low mean weight diameter (MWD). Both cover crops increased aggregate size, resulting in a higher MWD (0.28 mm) in comparison with fallow (0.20 mm) in the 0-5 cm layer. Barley also showed a higher MWD than fallow in the 5-20 cm layer. Organic carbon concentrations in the aggregate-size classes at the top layer followed the order: large macroaggregates > small macroaggregates > microaggregates > silt + clay size. Treatments did not influence C concentration in the aggregate-size classes. In conclusion, cover crops improved soil structure by increasing the proportion of macroaggregates and the MWD, with barley being more effective than vetch in the subsurface layer.

  12. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
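    A minimal sketch of the adjustment being described, using the standard unequal-cluster-size design effect 1 + (m_A − 1)·ICC with clusters of size 1 (singletons) and 2 (twin pairs); this reduces to 1 + p·ICC when p is the proportion of infants who are twins. The inputs below are invented, not taken from the paper or its calculator.

    ```python
    from math import ceil

    def n_accounting_for_twins(n_independent, icc, prop_twin_infants):
        """Inflate a sample size computed for independent outcomes.

        With singleton clusters (size 1) and twin clusters (size 2), the
        size-weighted mean cluster size is m_A = 1 + p, giving the design
        effect 1 + (m_A - 1) * ICC = 1 + p * ICC."""
        design_effect = 1.0 + prop_twin_infants * icc
        return ceil(n_independent * design_effect)

    print(n_accounting_for_twins(300, icc=0.5, prop_twin_infants=0.2))  # -> 330
    ```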

  13. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice are also provided for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrap method perform better than Sobel's method, but the product method is recommended for use in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination using the product method in longitudinal mediation study designs.
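    Of the three tests named, Sobel's is the simplest to write down: the mediated effect a·b is divided by its delta-method standard error. A small sketch with made-up coefficients:

    ```python
    from math import sqrt
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """Sobel z and two-sided p for the mediated effect a*b, where a is the
        exposure->mediator path and b the mediator->outcome path."""
        se_ab = sqrt(a**2 * se_b**2 + b**2 * se_a**2)
        z = a * b / se_ab
        return z, 2.0 * norm.sf(abs(z))

    print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))
    ```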

  14. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
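    The point is easy to verify numerically: the margin of error of a poll of n = 1,000 is essentially the same whether the "pot" holds ten thousand or three hundred million people, because the finite-population correction is negligible once the population dwarfs the sample. A quick check, assuming p = 0.5 and 95% confidence:

    ```python
    from math import sqrt

    def margin_of_error(n, N=None, p=0.5, z=1.96):
        """95% margin of error for a proportion, with an optional
        finite-population correction for a population of size N."""
        se = sqrt(p * (1.0 - p) / n)
        if N is not None:
            se *= sqrt((N - n) / (N - 1))
        return z * se

    for N in (10_000, 1_000_000, 300_000_000):
        print(f"N = {N:>11,}: margin of error = {margin_of_error(1000, N):.4f}")
    ```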

  15. Model of Tooth Morphogenesis Predicts Carabelli Cusp Expression, Size, and Symmetry in Humans

    PubMed Central

    Hunter, John P.; Guatelli-Steinberg, Debbie; Weston, Theresia C.; Durner, Ryan; Betsinger, Tracy K.

    2010-01-01

    Background The patterning cascade model of tooth morphogenesis accounts for shape development through the interaction of a small number of genes. In the model, gene expression both directs development and is controlled by the shape of developing teeth. Enamel knots (zones of nonproliferating epithelium) mark the future sites of cusps. In order to form, a new enamel knot must escape the inhibitory fields surrounding other enamel knots before crown components become spatially fixed as morphogenesis ceases. Because cusp location on a fully formed tooth reflects enamel knot placement and tooth size is limited by the cessation of morphogenesis, the model predicts that cusp expression varies with intercusp spacing relative to tooth size. Although previous studies in humans have supported the model's implications, here we directly test the model's predictions for the expression, size, and symmetry of Carabelli cusp, a variation present in many human populations. Methodology/Principal Findings In a dental cast sample of upper first molars (M1s) (187 rights, 189 lefts, and 185 antimeric pairs), we measured tooth area and intercusp distances with a Hirox digital microscope. We assessed Carabelli expression quantitatively as an area in a subsample and qualitatively using two typological schemes in the full sample. As predicted, low relative intercusp distance is associated with Carabelli expression in both right and left samples using either qualitative or quantitative measures. Furthermore, asymmetry in Carabelli area is associated with asymmetry in relative intercusp spacing. Conclusions/Significance These findings support the model's predictions for Carabelli cusp expression both across and within individuals. By comparing right-left pairs of the same individual, our data show that small variations in developmental timing or spacing of enamel knots can influence cusp pattern independently of genotype. Our findings suggest that during evolution new cusps may first appear as a result of small changes in the spacing of enamel knots relative to crown size. PMID:20689576

  16. Differential foraging preferences on seed size by rodents result in higher dispersal success of medium-sized seeds.

    PubMed

    Cao, Lin; Wang, Zhenyu; Yan, Chuan; Chen, Jin; Guo, Cong; Zhang, Zhibin

    2016-11-01

    Rodent preference for scatter-hoarding large seeds has been widely considered to favor the evolution of large seeds. Previous studies supporting this conclusion were primarily based on observations at earlier stages of seed dispersal, or on a limited sample of successfully established seedlings. Because seed dispersal comprises multiple dispersal stages, we hypothesized that differential foraging preferences on seed size by animal dispersers at different dispersal stages would ultimately result in medium-sized seeds having the highest dispersal success rates. In this study, by tracking a large number of seeds for 5 yr, we investigated the effects of seed size on seed fates from seed removal to seedling establishment of a dominant plant Pittosporopsis kerrii (Icacinaceae) dispersed by scatter-hoarding rodents in tropical forest in southwest China. We found that small seeds had a lower survival rate at the early dispersal stage, where more small seeds were predated at seed stations and after removal; large seeds had a lower survival rate at the late dispersal stage, where more large seeds were recovered, predated after being cached, or larder-hoarded. Medium-sized seeds experienced the highest dispersal success. Our study suggests that differential foraging preferences by scatter-hoarding rodents at different stages of seed dispersal could result in conflicting selective pressures on seed size and higher dispersal success of medium-sized seeds. © 2016 by the Ecological Society of America.

  17. The correlation between tonsil size and academic performance is not a direct one, but the result of various factors.

    PubMed

    Kargoshaie, A A; Najafi, M; Akhlaghi, M; Khazraie, H R; Hekmatdoost, A

    2009-10-01

    Chronic upper airway obstruction most often occurs when both the tonsils and the adenoid are enlarged, but may occur when either is enlarged. Obstructive sleep syndrome in young children has been reported to be associated with an adverse effect on learning and academic performance. The aim of this study was to evaluate the effect of relative tonsil size on academic performance in 4th grade school children. In 320 children, a physical examination to determine the size of the tonsils was performed by an otorhinolaryngologist. A questionnaire was developed to assess sleep patterns and problems and socio-demographic data for the student participants. Furthermore, school performance was assessed using grades in mathematics, science, reading, spelling, and handwriting. No association between tonsil size and academic performance was found. Snoring frequency, body mass index, and body weight showed a positive relation with tonsil size. There was no association between tonsil size and daytime sleepiness, sleeping habits, hyperactivity, enuresis, history of tonsillectomy, parental cigarette smoking, or parental education. In conclusion, this study did not show any significant relationship between tonsil size and academic performance in 4th grade students. Further studies are recommended with a larger sample size, cognitive exams for the evaluation of attention, and follow-up of the students until high school, when discrepancies in students' academic performance are more obvious.

  18. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five clinical ophthalmology journals with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
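    One common way to size such studies (not necessarily the method the surveyed papers should have used) is Buderer's approach: choose the number of diseased subjects so that sensitivity is estimated to a desired precision, then scale by prevalence. A sketch, using the survey's median prevalence of 50.5% and assumed accuracy targets:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
        """Total subjects so the sensitivity CI half-width is <= precision."""
        z = norm.ppf(1.0 - alpha / 2.0)
        n_diseased = z**2 * sens * (1.0 - sens) / precision**2
        return ceil(n_diseased / prevalence)

    # Assumed 90% sensitivity estimated to within +/- 5 percentage points
    print(n_for_sensitivity(sens=0.90, precision=0.05, prevalence=0.505))  # -> 274
    ```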

  19. Size and form of the human temporomandibular joint in African-Americans and Caucasians.

    PubMed

    Magnusson, Cecilia; Magnusson, Tomas

    2012-04-01

    The aim of this study was to examine contemporary human skull material for possible differences between Caucasians and African-Americans in respect to size and form of the temporomandibular condyles. The material consisted of a total of 129 Caucasian skulls (94 males and 35 females) and 76 African-American skulls (40 males and 36 females). Their mean age at death was 46 years for the Caucasians (range: 19-89 years) and 37 years for the African-Americans (range: 18-70 years). The mediolateral and anteroposterior dimensions of the 410 condyles were measured, and the condylar form was estimated using both anterior and superior views. No statistically significant differences could be found between Caucasians and African-Americans for any of the recorded variables. In conclusion, the present results lend no support for the existence of ethnic differences between the two groups examined in respect of temporomandibular joint size and form. It is likely that other factors such as evolution, overall cranial size, dietary differences, and genetic factors, irrespective of ethnicity, can explain the differences found in different skull samples.

  20. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of the SD, the 70% UCL of the SD, and the 60% UCL of the SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th-percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of the SD, the maximum SD, the 80th-percentile SD, and the 75th-percentile SD to calculate the sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or the average SD to calculate the sample size should be avoided.
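    The recommended "UCL of SD" quantities follow from the chi-square distribution of (n−1)s²/σ². A sketch of the 80% upper confidence limit for σ from a pilot SD, with invented pilot numbers:

    ```python
    from math import sqrt
    from scipy.stats import chi2

    def sd_upper_confidence_limit(s, n, level=0.80):
        """One-sided upper confidence limit for sigma, using
        (n-1)*s^2/sigma^2 ~ chi-square with n-1 degrees of freedom."""
        df = n - 1
        return s * sqrt(df / chi2.ppf(1.0 - level, df))

    # Hypothetical pilot study: s = 40 from n = 20 subjects
    print(f"{sd_upper_confidence_limit(40.0, 20):.1f}")  # ~47: plan with this, not 40
    ```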

  1. Pre and post annealed low cost ZnO nanorods on seeded substrate

    NASA Astrophysics Data System (ADS)

    Nordin, M. N.; Kamil, Wan Maryam Wan Ahmad

    2017-05-01

    We report the photonic band gap (the wavelength range in which light is confined) of low-cost ZnO nanorods grown by a two-step chemical bath deposition (CBD) method on glass substrates pre-treated with ZnO seed layers of two thicknesses, 100 nm (sample a) and 150 nm (sample b), deposited by radio-frequency magnetron sputtering. The samples were annealed at 600°C for 1 hour in air before and after immersion in the chemical solution for the CBD process. UV-Visible-NIR spectrophotometry showed that both samples exhibited a wide band gap between 240 nm and 380 nm, within the UV range typical for ZnO; however, sample b provided better light confinement, which may be attributed to the difference in average nanorod size. Field emission scanning electron microscopy (FESEM) revealed better-oriented nanorods uniformly distributed across the surface when substrates were coated with the 100 nm seed layer, whereas the 150 nm seeded sample showed a poorer distribution of nanorods, probably due to defects in the sample. Finally, X-ray diffraction showed both samples to be polycrystalline with a hexagonal wurtzite structure matching JCPDS No. 36-1451. The 100 nm pre-seeded sample had the larger average crystallite size, but sample b is suggested to have the higher crystalline quality. In conclusion, sample b is the better candidate for future photonic applications because of its more apparent photonic band gap, which may be attributed to the more random distribution of its nanorods observed in the FESEM images as well as to the higher crystalline quality suggested by the XRD measurements.
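    The abstract does not give the crystallite-size calculation; XRD crystallite sizes of this kind are typically obtained from peak broadening via the Scherrer equation D = Kλ/(β·cosθ). The sketch below assumes Cu Kα radiation and a hypothetical ZnO (002) peak, so the numbers are illustrative only.

    ```python
    from math import cos, radians

    def scherrer_size_nm(fwhm_deg, two_theta_deg, k=0.9, wavelength_nm=0.15406):
        """Crystallite size from XRD peak broadening (Scherrer equation);
        wavelength defaults to Cu K-alpha."""
        beta = radians(fwhm_deg)             # peak FWHM in radians
        theta = radians(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * cos(theta))

    # Hypothetical ZnO (002) reflection near 2-theta = 34.4 deg, 0.25 deg FWHM
    print(f"{scherrer_size_nm(0.25, 34.4):.0f} nm")  # ~33 nm
    ```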

  2. Effect of stearic acid modified HAp nanoparticles in different solvents on the properties of Pickering emulsions and HAp/PLLA composites.

    PubMed

    Zhang, Ming; Wang, Ai-Juan; Li, Jun-Ming; Song, Na

    2017-10-01

    Stearic acid (Sa) was used to modify the surface properties of hydroxyapatite (HAp) in different solvents (water, ethanol or dichloromethane (CH2Cl2)). The effects of the different solvents on the properties of the HAp particles (activation ratio, grafting ratio, chemical properties), the emulsion properties (emulsion stability, emulsion type, droplet morphology) and the cured materials (morphology, average pore size) were studied. FT-IR and XPS results confirmed that an interaction occurred between stearic acid and the HAp particles. Stable O/W and W/O type Pickering emulsions were prepared using unmodified and Sa-modified HAp nanoparticles, respectively, which indicated that a catastrophic inversion of the Pickering emulsion occurred, possibly because of the enhanced hydrophobicity of the HAp particles after surface modification. Porous materials with different structures and pore sizes were obtained using the Pickering emulsions as templates via an in situ solvent evaporation method. The results indicated that the microstructures of the cured samples differ from each other when HAp is surface-modified in different solvents. HAp particles fabricated using ethanol as the solvent had higher activation and grafting ratios, and Pickering emulsions with higher stability and cured porous materials with more uniform morphology were obtained compared with samples prepared using water or CH2Cl2 as the solvent. In conclusion, surface modification of HAp in different solvents plays a very important role in the stability of the resulting Pickering emulsion as well as in the microstructure of the cured samples. It is better to use ethanol as the solvent for Sa-modified HAp particles, which increases the stability of the Pickering emulsion and yields cured samples with a uniform pore size. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Short communication: Microbiological quality of raw cow milk and its association with herd management practices in Northern China.

    PubMed

    Lan, X Y; Zhao, S G; Zheng, N; Li, S L; Zhang, Y D; Liu, H M; McKillip, J; Wang, J Q

    2017-06-01

    Contamination of raw milk with bacterial pathogens is potentially hazardous to human health. The aim of this study was to evaluate the total bacteria count (TBC) and the presence of pathogens in raw milk in Northern China, along with the associated herd management practices. A total of 160 raw milk samples were collected from 80 dairy herds in Northern China. All raw milk samples were analyzed for TBC and pathogens by culturing. The results showed that the number of raw milk samples with TBC <2 × 10⁶ cfu/mL and <1 × 10⁵ cfu/mL was 146 (91.25%) and 70 (43.75%), respectively. A total of 84 (52.50%) raw milk samples were Staphylococcus aureus positive, 72 (45.00%) were Escherichia coli positive, 2 (1.25%) were Salmonella positive, 2 (1.25%) were Listeria monocytogenes positive, and 3 (1.88%) were Campylobacter positive. The prevalence of S. aureus was influenced by season, herd size, milking frequency, disinfection frequency, and use of a Dairy Herd Improvement program. The TBC was influenced by season and milking frequency. The correlation between TBC and the prevalence of S. aureus or E. coli was significant. The effect size statistical analysis showed that season and herd (but not Dairy Herd Improvement, herd size, milking frequency, disinfection frequency, and area) were the most important factors affecting TBC in raw milk. In conclusion, the presence of bacteria in raw milk was associated with season and herd management practices, and a further comprehensive study would help to effectively characterize the various factors affecting milk microbial quality in bulk tanks in China. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

    Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD, and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design, using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576
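    A simplified Monte Carlo sketch of the basic (non-adaptive) SPCD with a binary response: phase 1 randomizes everyone, phase 2 re-randomizes the phase 1 placebo non-responders, and the two phase-specific response-rate differences are combined with a pre-chosen weight w. All rates, weights, and sizes below are assumptions, and the article's interim-analysis variants are not modelled.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(11)

    def spcd_power(n=300, p_drug=0.45, p_plac=0.30, w=0.6, sims=4000, alpha=0.05):
        """Empirical power for a weighted two-phase SPCD z-test."""
        z_crit = norm.ppf(1.0 - alpha / 2.0)
        hits = 0
        for _ in range(sims):
            n1 = n // 2                    # phase 1: 1:1 drug vs placebo
            d1 = rng.binomial(n1, p_drug)  # phase 1 drug responders
            p1 = rng.binomial(n1, p_plac)  # phase 1 placebo responders
            m = n1 - p1                    # placebo non-responders enter phase 2
            md, mp = m // 2, m - m // 2
            rd1, rp1 = d1 / n1, p1 / n1
            rd2 = rng.binomial(md, p_drug) / md
            rp2 = rng.binomial(mp, p_plac) / mp
            v1 = rd1 * (1 - rd1) / n1 + rp1 * (1 - rp1) / n1
            v2 = rd2 * (1 - rd2) / md + rp2 * (1 - rp2) / mp
            z = (w * (rd1 - rp1) + (1 - w) * (rd2 - rp2)) / np.sqrt(
                w**2 * v1 + (1 - w)**2 * v2)
            hits += z > z_crit
        return hits / sims

    print(spcd_power())
    ```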

  5. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10 m² quadrat. Samples taken when abundance < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fitted and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
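    The fixed-precision logic reduces to a one-line formula: for negative binomial counts with mean m and aggregation parameter k, the number of quadrats giving a target precision C = SE/mean is n = (1/m + 1/k)/C². A sketch using the common k reported in the abstract together with an assumed density and precision target:

    ```python
    from math import ceil

    def quadrats_needed(mean_per_quadrat, k, precision=0.25):
        """Quadrats required for SE/mean <= precision when counts are
        negative binomial with aggregation parameter k."""
        return ceil((1.0 / mean_per_quadrat + 1.0 / k) / precision**2)

    # 0.05 ticks per 10 m^2 quadrat, common k = 0.3742, 25% precision
    print(quadrats_needed(0.05, 0.3742))  # -> 363 quadrats
    ```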

  6. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
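    With a linear cost model (assumed here), the second rule has a closed form: minimizing (c0 + c1·n)/sqrt(n) gives n* = c0/c1, the fixed cost divided by the per-subject cost. A numerical check with invented costs:

    ```python
    import numpy as np

    fixed_cost, per_subject = 250_000.0, 500.0   # hypothetical study costs
    n = np.arange(10, 3000)

    # Rule 2 of the abstract: choose n minimizing total cost / sqrt(n)
    n_star = n[np.argmin((fixed_cost + per_subject * n) / np.sqrt(n))]
    print(n_star, fixed_cost / per_subject)      # both 500: n* = c0 / c1
    ```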

  7. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/. RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  8. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation to the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all the other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
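    For reference, the asymptotic unconditional sample size for paired binary data (e.g., Connor, 1987) depends only on the two discordant cell probabilities p01 and p10; a sketch with assumed values:

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def pairs_needed(p01, p10, alpha=0.05, power=0.80):
        """Pairs for a two-sided asymptotic (unconditional) McNemar test."""
        za, zb = norm.ppf(1.0 - alpha / 2.0), norm.ppf(power)
        pdisc = p01 + p10               # total discordant probability
        diff = p10 - p01                # difference in marginal proportions
        return ceil((za * sqrt(pdisc) + zb * sqrt(pdisc - diff**2))**2 / diff**2)

    print(pairs_needed(p01=0.05, p10=0.15))  # -> 155 pairs
    ```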

  9. Dental arch dimensions, form and tooth size ratio among a Saudi sample.

    PubMed

    Omar, Haidi; Alhajrasi, Manar; Felemban, Nayef; Hassan, Ali

    2018-01-01

    To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured. The measured parameters were arch length, arch width, Bolton's ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p < 0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for the anterior ratio and 24.8% for the overall ratio. The mean Bolton's anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton's overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton's ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was greater for Bolton's anterior teeth ratio than for the overall ratio.
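    The Bolton ratios themselves are simple sums of mesiodistal tooth widths, mandibular over maxillary; the sketch below uses invented widths chosen only to land near the means reported above.

    ```python
    def bolton_ratios(mandibular_mm, maxillary_mm):
        """Anterior (first 6 teeth) and overall (all 12) Bolton ratios, percent.
        Width lists are mesiodistal widths, anterior teeth first."""
        anterior = 100.0 * sum(mandibular_mm[:6]) / sum(maxillary_mm[:6])
        overall = 100.0 * sum(mandibular_mm) / sum(maxillary_mm)
        return anterior, overall

    # Hypothetical widths (mm): 6 anterior then 6 posterior teeth per arch
    mand = [5.5, 5.9, 7.0, 5.5, 5.9, 7.0, 7.2, 7.3, 11.2, 7.2, 7.3, 11.2]
    maxi = [8.6, 6.7, 7.9, 8.6, 6.7, 7.9, 7.1, 6.8, 10.4, 7.1, 6.8, 10.4]
    print(bolton_ratios(mand, maxi))  # ~ (79.3, 92.8)
    ```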

  10. Two-particle Bose–Einstein correlations in pp collisions at √s=0.9 and 7 TeV measured with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-10-01

    The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. The dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.

  12. Nurses' Emotional Intelligence Impact on the Quality of Hospital Services

    PubMed Central

    Ranjbar Ezzatabadi, Mohammad; Bahrami, Mohammad Amin; Hadizadeh, Farzaneh; Arab, Masoomeh; Nasiri, Soheyla; Amiresmaili, Mohammadreza; Ahmadi Tehrani, Gholamreza

    2012-01-01

    Background Emotional intelligence is the potential to feel, use, communicate, recognize, remember, describe, identify, learn from, manage, understand and explain emotions. Service quality can be defined as the post-consumption assessment of services by consumers, an assessment that is determined by many variables. Objectives This study aimed to determine the impact of nurses' emotional intelligence on the quality of delivered services. Materials and Methods This descriptive, applied study was carried out using a cross-sectional method in 2010. The research covered 2 populations, comprising patients admitted to three academic hospitals of Yazd and the hospital nurses. Sample sizes were calculated with the sample size formulas for unlimited (patients) and limited (nursing staff) populations, and the samples were obtained with a stratified random method. The data were collected by 4 valid questionnaires. Results The results of the study indicated that nurses' emotional intelligence has a direct effect on hospital service quality. The study also revealed that nurses' job satisfaction and communication skills play an intermediate role in the relation between emotional intelligence and service quality. Conclusions This paper reports a new determinant of hospital services quality. PMID:23482866
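    The two formulas alluded to are presumably the usual ones: Cochran's formula for an effectively unlimited population, plus a finite-population correction for the limited nursing staff. A sketch with assumed precision and an assumed staff size, since the study's actual inputs are not given here:

    ```python
    from math import ceil

    def cochran_n(p=0.5, d=0.05, z=1.96):
        """Sample size for a proportion in an unlimited population."""
        return ceil(z**2 * p * (1.0 - p) / d**2)

    def finite_population_n(n0, N):
        """Finite-population correction for a population of size N."""
        return ceil(n0 / (1.0 + (n0 - 1.0) / N))

    print(cochran_n())                              # patients: 385
    print(finite_population_n(cochran_n(), N=450))  # nurses (assumed 450 staff): 208
    ```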

  13. Similarities and differences in dream content at the cross-cultural, gender, and individual levels.

    PubMed

    William Domhoff, G; Schneider, Adam

    2008-12-01

    The similarities and differences in dream content at the cross-cultural, gender, and individual levels provide one starting point for carrying out studies that attempt to discover correspondences between dream content and various types of waking cognition. Hobson and Kahn's (Hobson, J. A., & Kahn, D. (2007). Dream content: Individual and generic aspects. Consciousness and Cognition, 16, 850-858.) conclusion that dream content may be more generic than most researchers realize, and that individual differences are less salient than usually thought, provides the occasion for a review of findings based on the Hall and Van de Castle (Hall, C., & Van de Castle, R. (1966). The content analysis of dreams. New York: Appleton-Century-Crofts.) coding system for the study of dream content. Then new findings based on a computationally intensive randomization strategy are presented to show the minimum sample sizes needed to detect gender and individual differences in dream content. Generally speaking, sample sizes of 100-125 dream reports are needed because most dream elements appear in less than 50% of dream reports and the magnitude of the differences usually is not large.

  14. The Peroxidation of Leukocytes Index Ratio Reveals the Prooxidant Effect of Green Tea Extract

    PubMed Central

    Manafikhi, Husseen; Raguzzini, Anna; Longhitano, Yaroslava; Reggi, Raffaella; Zanza, Christian

    2016-01-01

    Although tea increased plasma nonenzymatic antioxidant capacity, the European Food Safety Authority (EFSA) denied claims related to tea and its protection from oxidative damage. Furthermore, the Dietary Supplement Information Expert Committee (DSI EC) expressed some doubts on the safety of green tea extract (GTE). We performed a pilot study in order to evaluate the effect of a single dose of two capsules of a GTE supplement (200 mg × 2) on the peroxidation of leukocytes index ratio (PLIR) in relation to uric acid (UA) and ferric reducing antioxidant potential (FRAP), as well as the sample size needed to reach statistical significance. GTE induced a prooxidant effect on leukocytes, whereas FRAP did not change, in agreement with the EFSA and DSI EC conclusions. Besides, our results confirm the primary role of UA in the antioxidant defences. The ratio-based calculation of the PLIR reduced the sample size needed to reach statistical significance, compared to the resistance to an exogenous oxidative stress and to the functional capacity of the oxidative burst. Therefore, the PLIR could be a sensitive marker of redox status. PMID:28101300

  15. Power analysis to detect treatment effect in longitudinal studies with heterogeneous errors and incomplete data.

    PubMed

    Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián

    2016-08-01

    S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends that work to situations where the assumption of homogeneity of the errors across groups is likely not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the statistically processed indices. The main conclusion of the study is that the proposed method accurately calculates sample size in the described situations under the stipulated power criteria.

  17. Global preamplification simplifies targeted mRNA quantification

    PubMed Central

    Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders

    2017-01-01

    The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609

  18. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  19. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect of a stepped wedge trial, assuming that observations are equally correlated within clusters and that each period between sequence switches contains an equal number of observations. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small, and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
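
    The abstract's own expression for the optimal number of sequences is elided above ([Formula: see text]), so it is not restated here. As a point of reference for the comparisons the authors make, a short sketch of the textbook design effect for a parallel cluster-randomised trial shows how the intracluster correlation coefficient and cluster size enter such calculations; the parameter values are illustrative.

        def design_effect_crt(m, icc):
            # textbook inflation factor for a parallel cluster-randomised trial,
            # relative to individually randomised participants
            return 1 + (m - 1) * icc

        for m, icc in [(10, 0.01), (10, 0.10), (100, 0.01), (100, 0.10)]:
            print(f"cluster size {m:3d}, ICC {icc:.2f}: "
                  f"design effect {design_effect_crt(m, icc):.2f}")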

  20. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
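
    A toy decision-theoretic setup (not the authors' utility function) makes the O(N^(1/2)) behaviour visible: a two-arm trial with n patients per arm is followed by giving the remaining N - 2n patients the arm with the higher sample mean, and the expected gain is averaged over a normal prior on the true effect. All numerical choices below are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize_scalar

        sigma, tau = 1.0, 0.5                       # outcome SD and prior SD of the true effect
        deltas = np.linspace(-4 * tau, 4 * tau, 2001)
        weights = norm.pdf(deltas, scale=tau)
        weights /= weights.sum()                    # discretised prior over the effect

        def neg_utility(n, N):
            # probability that the better arm wins the comparison of sample means
            p_correct = norm.cdf(np.abs(deltas) * np.sqrt(n / 2) / sigma)
            gain = np.abs(deltas) * (2 * p_correct - 1)   # expected gain per future patient
            return -(N - 2 * n) * np.dot(weights, gain)

        for N in (10_000, 100_000, 1_000_000):
            n_opt = minimize_scalar(neg_utility, bounds=(1, N / 2), args=(N,),
                                    method="bounded").x
            print(f"N = {N:>9,}  optimal n per arm = {n_opt:8.0f}  "
                  f"n / sqrt(N) = {n_opt / np.sqrt(N):.2f}")  # ratio stays roughly constant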

  1. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events?

    PubMed

    Als-Nielsen, Bodil; Chen, Wendong; Gluud, Christian; Kjaergard, Lise L

    2003-08-20

    Previous studies indicate that industry-sponsored trials tend to draw proindustry conclusions. To explore whether the association between funding and conclusions in randomized drug trials reflects treatment effects or adverse events. Observational study of 370 randomized drug trials included in meta-analyses from Cochrane reviews selected from the Cochrane Library, May 2001. From a random sample of 167 Cochrane reviews, 25 contained eligible meta-analyses (assessed a binary outcome; pooled at least 5 full-paper trials of which at least 1 reported adequate and 1 reported inadequate allocation concealment). The primary binary outcome from each meta-analysis was considered the primary outcome for all trials included in each meta-analysis. The association between funding and conclusions was analyzed by logistic regression with adjustment for treatment effect, adverse events, and additional confounding factors (methodological quality, control intervention, sample size, publication year, and place of publication). Conclusions in trials, classified into whether the experimental drug was recommended as the treatment of choice or not. The experimental drug was recommended as treatment of choice in 16% of trials funded by nonprofit organizations, 30% of trials not reporting funding, 35% of trials funded by both nonprofit and for-profit organizations, and 51% of trials funded by for-profit organizations (P<.001; chi2 test). Logistic regression analyses indicated that funding, treatment effect, and double blinding were the only significant predictors of conclusions. Adjusted analyses showed that trials funded by for-profit organizations were significantly more likely to recommend the experimental drug as treatment of choice (odds ratio, 5.3; 95% confidence interval, 2.0-14.4) compared with trials funded by nonprofit organizations. This association did not appear to reflect treatment effect or adverse events. Conclusions in trials funded by for-profit organizations may be more positive due to biased interpretation of trial results. Readers should carefully evaluate whether conclusions in randomized trials are supported by data.

  2. The problem of pseudoreplication in neuroscientific studies: is it affecting your analysis?

    PubMed Central

    2010-01-01

    Background Pseudoreplication occurs when observations are not statistically independent but are treated as if they are. This can occur when there are multiple observations on the same subjects, when samples are nested or hierarchically organised, or when measurements are correlated in time or space. Analysis of such data without taking these dependencies into account can lead to meaningless results, and examples can easily be found in the neuroscience literature. Results A single issue of Nature Neuroscience provided a number of examples and is used as a case study to highlight how pseudoreplication arises in neuroscientific studies, to explain why the analyses in these papers are incorrect, and to provide appropriate analytical methods. 12% of papers had pseudoreplication and a further 36% were suspected of having pseudoreplication, but it was not possible to determine for certain because insufficient information was provided. Conclusions Pseudoreplication can undermine the conclusions of a statistical analysis, and it would be easier to detect if the sample size, degrees of freedom, the test statistic, and precise p-values were reported. This information should be a requirement for all publications. PMID:20074371
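
    A small simulation (with illustrative parameters, not the paper's data) shows the mechanism: treating repeated measurements on the same subjects as independent inflates the false-positive rate well above the nominal 5%, whereas testing one mean per subject does not.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        k, m, icc = 8, 20, 0.5      # subjects per group, measurements per subject, ICC
        n_sim, naive_fp, means_fp = 2000, 0, 0

        def group():
            # shared subject effects make within-subject measurements correlated
            subj = rng.normal(0.0, np.sqrt(icc), size=(k, 1))
            return subj + rng.normal(0.0, np.sqrt(1 - icc), size=(k, m))

        for _ in range(n_sim):      # both groups drawn under the null (no true difference)
            a, b = group(), group()
            naive_fp += ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05
            means_fp += ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05

        print(f"false-positive rate, points treated as independent: {naive_fp / n_sim:.3f}")
        print(f"false-positive rate, one mean per subject:          {means_fp / n_sim:.3f}")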

  3. Tumor Size on Abdominal MRI Versus Pathologic Specimen in Resected Pancreatic Adenocarcinoma: Implications for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, William A., E-mail: whall4@emory.edu; Winship Cancer Institute, Emory University, Atlanta, Georgia; Mikell, John L.

    2013-05-01

    Purpose: We assessed the accuracy of abdominal magnetic resonance imaging (MRI) for determining tumor size by comparing the preoperative contrast-enhanced T1-weighted gradient echo (3-dimensional [3D] volumetric interpolated breath-hold [VIBE]) MRI tumor size with pathologic specimen size. Methods and Materials: The records of 92 patients who had both preoperative contrast-enhanced 3D VIBE MRI images and detailed pathologic specimen measurements were available for review. Primary tumor size from the MRI was independently measured by a single diagnostic radiologist (P.M.) who was blinded to the pathology reports. Pathologic tumor measurements from gross specimens were obtained from the pathology reports. The maximum dimensions of tumor measured in any plane on the MRI and the gross specimen were compared. The median difference between the pathology sample and the MRI measurements was calculated. A paired t test was conducted to test for differences between the MRI and pathology measurements. The Pearson correlation coefficient was used to measure the association of disparity between the MRI and pathology sizes with the pathology size. Disparities relative to pathology size were also examined and tested for significance using a 1-sample t test. Results: The median patient age was 64.5 years. The primary site was pancreatic head in 81 patients, body in 4, and tail in 7. Three patients were American Joint Commission on Cancer stage IA, 7 stage IB, 21 stage IIA, 58 stage IIB, and 3 stage III. The 3D VIBE MRI underestimated tumor size by a median difference of 4 mm (range, −34-22 mm). The median largest tumor dimensions on MRI and pathology specimen were 2.65 cm (range, 1.5-9.5 cm) and 3.2 cm (range, 1.3-10 cm), respectively. Conclusions: Contrast-enhanced 3D VIBE MRI underestimates tumor size by 4 mm when compared with pathologic specimen. Advanced abdominal MRI sequences warrant further investigation for radiation therapy planning in pancreatic adenocarcinoma before routine integration into the treatment planning process.

  4. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. How to use the tables is also discussed. PMID:27891446
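
    A sketch of the kind of computation behind such tables, following the widely used formulation (often attributed to Buderer) in which the number of diseased subjects needed for a sensitivity estimate is inflated by disease prevalence to give the total number to screen; all inputs are illustrative.

        from math import ceil
        from scipy.stats import norm

        def n_total_for_sensitivity(se, precision, prevalence, alpha=0.05):
            z = norm.ppf(1 - alpha / 2)
            n_cases = z**2 * se * (1 - se) / precision**2   # diseased subjects required
            return ceil(n_cases / prevalence)               # total subjects to screen

        def n_total_for_specificity(sp, precision, prevalence, alpha=0.05):
            z = norm.ppf(1 - alpha / 2)
            n_controls = z**2 * sp * (1 - sp) / precision**2
            return ceil(n_controls / (1 - prevalence))

        # illustrative: Se 0.90 and Sp 0.85, +/-0.05 precision, 20% prevalence
        print(n_total_for_sensitivity(0.90, 0.05, 0.20))   # -> 692
        print(n_total_for_specificity(0.85, 0.05, 0.20))   # -> 245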

  5. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources against the ability to detect a real effect is difficult. The objective of this study was to compare the quality of sample size calculations for two-arm, parallel-group, superiority RCTs published in six general anesthesiology journals in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation was usually >10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.

  6. Comparison of Individual and Pooled Stool Samples for the Assessment of Soil-Transmitted Helminth Infection Intensity and Drug Efficacy

    PubMed Central

    Mekonnen, Zeleke; Meka, Selima; Ayana, Mio; Bogers, Johannes; Vercruysse, Jozef; Levecke, Bruno

    2013-01-01

    Background In veterinary parasitology, samples are often pooled for a rapid assessment of infection intensity and drug efficacy. Currently, studies evaluating this strategy in large-scale drug administration programs to control human soil-transmitted helminths (STHs; Ascaris lumbricoides, Trichuris trichiura, and hookworm) are absent. Therefore, we developed and evaluated a pooling strategy to assess the intensity of STH infections and drug efficacy. Methods/Principal Findings Stool samples from 840 children attending 14 primary schools in Jimma, Ethiopia were pooled (pool sizes of 10, 20, and 60) to evaluate the infection intensity of STHs. In addition, the efficacy of a single dose of mebendazole (500 mg) in terms of fecal egg count reduction (FECR; synonym of egg reduction rate) was evaluated in 600 children from two of these schools. Individual and pooled samples were examined with the McMaster egg counting method. For each of the three STHs, we found a significant positive correlation between the mean fecal egg counts (FECs) of individual stool samples and the FEC of pooled stool samples, ranging from 0.62 to 0.98. Only for A. lumbricoides was there any significant difference between the mean FEC of the individual and pooled samples. For this STH species, pools of 60 samples resulted in significantly higher FECs. FECR was comparable across all pool sizes, except for hookworm. For this parasite, pools of 10 and 60 samples provided significantly higher FECR results. Conclusion/Significance This study highlights that pooling stool samples holds promise as a strategy for rapidly assessing infection intensity and efficacy of administered drugs in programs to control human STHs. However, further research is required to determine when and how pooling of stool samples can be cost-effectively applied along a control program, and to verify whether this approach is also applicable to other NTDs. PMID:23696905
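
    For reference, the fecal egg count reduction used here is the standard percentage reduction in mean counts; a one-function sketch with hypothetical pre- and post-treatment means:

        def fecr(mean_fec_before, mean_fec_after):
            # fecal egg count reduction (egg reduction rate), in percent
            return 100.0 * (1.0 - mean_fec_after / mean_fec_before)

        print(fecr(1200.0, 60.0))   # hypothetical means -> 95.0 (% reduction)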

  7. Using a Novel Optical Sensor to Characterize Methane Ebullition Processes

    NASA Astrophysics Data System (ADS)

    Delwiche, K.; Hemond, H.; Senft-Grupp, S.

    2015-12-01

    We have built a novel bubble size sensor that is rugged, economical to build, and capable of accurately measuring methane bubble sizes in aquatic environments over long deployment periods. Accurate knowledge of methane bubble size is important for calculating atmospheric methane emissions from inland waters. By routing bubbles past pairs of optical detectors, the sensor accurately measures bubble sizes between 0.01 mL and 1 mL, with slightly reduced accuracy for bubbles from 1 mL to 1.5 mL. The sensor can handle flow rates up to approximately 3 bubbles per second. Optional sensor attachments include a gas collection chamber for methane sampling and volume verification, and a detachable extension funnel to customize the quantity of intercepted bubbles. Additional features include a data cable running from the deployed sensor to a custom surface buoy, allowing us to download data without disturbing ongoing bubble measurements. We have successfully deployed numerous sensors in Upper Mystic Lake at depths down to 18 m, 1 m above the sediment. The resulting data give us bubble size distributions and the precise timing of bubbling events over a period of several months. In addition to allowing us to characterize typical bubble size distributions, these data allow us to draw important conclusions about temporal variations in bubble sizes, as well as bubble dissolution rates within the water column.

  8. Genome Size Variation in the Genus Carthamus (Asteraceae, Cardueae): Systematic Implications and Additive Changes During Allopolyploidization

    PubMed Central

    GARNATJE, TERESA; GARCIA, SÒNIA; VILATERSANA, ROSER; VALLÈS, JOAN

    2006-01-01

    • Background and Aims Plant genome size is an important biological characteristic, with relationships to systematics, ecology and distribution. Currently, there is no information regarding nuclear DNA content for any Carthamus species. In addition to improving the knowledge base, this research focuses on interspecific variation and its implications for the infrageneric classification of this genus. Genome size variation in the process of allopolyploid formation is also addressed. • Methods Nuclear DNA samples from 34 populations of 16 species of the genus Carthamus were assessed by flow cytometry using propidium iodide. • Key Results The 2C values ranged from 2.26 pg for C. leucocaulos to 7.46 pg for C. turkestanicus, and monoploid genome size (1Cx-value) ranged from 1.13 pg in C. leucocaulos to 1.53 pg in C. alexandrinus. Mean genome sizes differed significantly, based on sectional classification. Both allopolyploid species (C. creticus and C. turkestanicus) exhibited nuclear DNA contents in accordance with the sum of the putative parental C-values (in one case with a slight reduction, frequent in polyploids), supporting their hybrid origin. • Conclusions Genome size represents a useful tool in elucidating systematic relationships between closely related species. A considerable reduction in monoploid genome size, possibly due to the hybrid formation, is also reported within these taxa. PMID:16390843

  9. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured errors smaller than 5%, even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
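
    A simulation in the spirit of the study (illustrative settings, not the authors' code) estimates the Type I and Type II error of the two-sample t-test as the per-group sample size grows from 3 to 9, for an effect of one standard deviation.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        effect, reps = 1.0, 4000          # effect in SD units; simulated experiments per n
        for n in (3, 5, 6, 9):
            type1 = np.mean([ttest_ind(rng.normal(0, 1, n),
                                       rng.normal(0, 1, n)).pvalue < 0.05
                             for _ in range(reps)])
            type2 = np.mean([ttest_ind(rng.normal(0, 1, n),
                                       rng.normal(effect, 1, n)).pvalue >= 0.05
                             for _ in range(reps)])
            print(f"n = {n}:  Type I ~ {type1:.3f}   Type II ~ {type2:.3f}")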

  10. Influence of photon beam energy on the dose enhancement factor caused by gold and silver nanoparticles: An experimental approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guidelli, Eder José, E-mail: ederguidelli@pg.ffclrp.usp.br; Baffa, Oswaldo

    Purpose: Noble metal nanoparticles have found several medical applications in the areas of radiation detection, x-ray contrast agents, and cancer radiation therapy. Based on computational methods, many papers have reported the nanoparticle effect on the dose deposition in the surrounding medium. Here the authors report experimental results on how silver and gold nanoparticles affect the dose deposition in alanine dosimeters containing several concentrations of silver and gold nanoparticles, for five different beam energies, using electron spin resonance (ESR) spectroscopy. Methods: The authors produced alanine dosimeters containing several mass percentages of silver and gold nanoparticles. Nanoparticle sizes were measured by dynamic light scattering and by transmission electron microscopy. The authors determined the dose enhancement factor (DEF) theoretically, using a widely accepted method, and experimentally, using ESR spectroscopy. Results: The DEF is governed by nanoparticle concentration, size, and position in the alanine matrix. Samples containing gold nanoparticles afford a DEF higher than 1.0, because gold nanoparticle size is homogeneous for all gold concentrations utilized. For samples containing silver particles, the silver mass percentage governs the nanoparticle size, which, in turn, modifies nanoparticle position in the alanine dosimeters. In this sense, DEF decreases for dosimeters containing large and segregated particles. The influence of nanoparticle size-position is more noticeable for dosimeters irradiated with higher beam energies, and dosimeters containing large and segregated particles become less sensitive than pure alanine (DEF < 1). Conclusions: ESR dosimetry gives the DEF in a medium containing metal nanoparticles, although particle concentration, size, and position are closely related in the system. Because this is also the case in many real systems of materials containing inorganic nanoparticles, ESR is a valuable tool for investigating DEF. Moreover, these results alert us to the importance of controlling the size-position of nanoparticles to enhance DEF.

  11. The Physique of Elite Female Artistic Gymnasts: A Systematic Review.

    PubMed

    Bacciotti, Sarita; Baxter-Jones, Adam; Gaya, Adroaldo; Maia, José

    2017-09-01

    It has been suggested that successful young gymnasts are a highly select group in terms of physique. This review summarizes the available literature on elite female gymnasts' anthropometric characteristics, somatotype, body composition, and biological maturation. The main aims were to identify: (i) a common physique and (ii) the differences, if any, among competitive/performance levels. A systematic search was conducted online using five different databases. Of 407 putative papers, 17 fulfilled all criteria and were included in the review. Most studies identified similar physiques based on: physical traits (small size and low body mass), body type (predominance of ecto-mesomorphy), body composition (low fat mass), and maturity status (late skeletal maturity as well as late age-at-menarche). However, there was no consensus as to whether these features predicted competitive performance, or even differentiated between gymnasts within distinct competitive levels. In conclusion, gymnasts, as a group, have uniquely pronounced characteristics. These characteristics are likely due to selection for naturally occurring inherited traits. However, the data available for world-class competitions were mostly outdated and the sample sizes small. Thus, it was difficult to draw any conclusions about whether physique differs between particular competitive levels.

  12. Three Dimensional Imaging of Paraffin Embedded Human Lung Tissue Samples by Micro-Computed Tomography

    PubMed Central

    Scott, Anna E.; Vasilescu, Dragos M.; Seal, Katherine A. D.; Keyes, Samuel D.; Mavrogordato, Mark N.; Hogg, James C.; Sinclair, Ian; Warner, Jane A.; Hackett, Tillie-Louise; Lackie, Peter M.

    2015-01-01

    Background Understanding the three-dimensional (3-D) micro-architecture of lung tissue can provide insights into the pathology of lung disease. Micro computed tomography (µCT) has previously been used to elucidate lung 3D histology and morphometry in fixed samples that have been stained with contrast agents or air inflated and dried. However, non-destructive microstructural 3D imaging of formalin-fixed paraffin embedded (FFPE) tissues would facilitate retrospective analysis of extensive tissue archives of FFPE lung samples with linked clinical data. Methods FFPE human lung tissue samples (n = 4) were scanned using a Nikon metrology µCT scanner. Semi-automatic techniques were used to segment the 3D structure of airways and blood vessels. Airspace size (mean linear intercept, Lm) was measured on µCT images and on matched histological sections from the same FFPE samples imaged by light microscopy to validate µCT imaging. Results The µCT imaging protocol provided contrast between tissue and paraffin in FFPE samples (15 mm × 7 mm). Resolution (voxel size 6.7 µm) in the reconstructed images was sufficient for semi-automatic image segmentation of airways and blood vessels as well as quantitative airspace analysis. The scans were also used to scout for regions of interest, enabling time-efficient preparation of conventional histological sections. The Lm measurements from µCT images were not significantly different to those from matched histological sections. Conclusion We demonstrated how non-destructive imaging of routinely prepared FFPE samples by laboratory µCT can be used to visualize and assess the 3D morphology of the lung, including by morphometric analysis. PMID:26030902

  13. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
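
    A sketch of the ratio estimator of a population total with unit area as the auxiliary variable, of the kind evaluated here; the counts and areas are synthetic stand-ins generated in the code, not the study's data.

        import numpy as np

        rng = np.random.default_rng(42)
        N_units, n = 100, 16                      # frame size and number of units sampled
        areas = rng.uniform(5.0, 15.0, N_units)   # synthetic unit areas (auxiliary variable)
        counts = rng.poisson(2.0 * areas)         # synthetic counts, roughly prop. to area

        sampled = rng.choice(N_units, size=n, replace=False)  # SRS without replacement
        y, x = counts[sampled], areas[sampled]

        r_hat = y.sum() / x.sum()                 # estimated animals per unit area
        total_hat = r_hat * areas.sum()           # ratio estimate of the population total

        d = y - r_hat * x                         # residuals drive the approximate variance
        var_hat = N_units**2 * (1 - n / N_units) * d.var(ddof=1) / n
        print(f"estimated total {total_hat:.0f} (true {counts.sum()}), "
              f"SE ~ {np.sqrt(var_hat):.0f}")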

  14. Does size really matter? A multisite study assessing the latent structure of the proposed ICD-11 and DSM-5 diagnostic criteria for PTSD

    PubMed Central

    Hansen, Maj; Hyland, Philip; Karstoft, Karen-Inge; Vaegter, Henrik B.; Bramsen, Rikke H.; Nielsen, Anni B. S.; Armour, Cherie; Andersen, Søren B.; Høybye, Mette Terp; Larsen, Simone Kongshøj; Andersen, Tonny E.

    2017-01-01

    Background: Researchers and clinicians within the field of trauma have to choose between different diagnostic descriptions of posttraumatic stress disorder (PTSD) in the DSM-5 and the proposed ICD-11. Several studies support different competing models of the PTSD structure according to both diagnostic systems; however, findings show that the choice of diagnostic system can affect the estimated prevalence rates. Objectives: The present study aimed to investigate the potential impact of using a large (i.e. the DSM-5) compared to a small (i.e. the ICD-11) diagnostic description of PTSD. In other words, does the size of PTSD really matter? Methods: The aim was investigated by examining differences in diagnostic rates between the two diagnostic systems and independently examining the model fit of the competing DSM-5 and ICD-11 models of PTSD across three trauma samples: university students (N = 4213), chronic pain patients (N = 573), and military personnel (N = 118). Results: Diagnostic rates of PTSD were significantly lower according to the proposed ICD-11 criteria in the university sample, but no significant differences were found for chronic pain patients and military personnel. The proposed ICD-11 three-factor model provided the best fit of the tested ICD-11 models across all samples, whereas the DSM-5 seven-factor Hybrid model provided the best fit in the university and pain samples, and the DSM-5 six-factor Anhedonia model provided the best fit in the military sample of the tested DSM-5 models. Conclusions: The advantages and disadvantages of using a broad or narrow set of symptoms for PTSD can be debated; however, this study demonstrated that the choice of diagnostic system may influence the estimated PTSD rates both qualitatively and quantitatively. Among the currently described diagnostic criteria, only the ICD-11 model reflected the configuration of symptoms satisfactorily. Thus, size does matter when assessing PTSD. PMID:29201287

  16. Meta-analysis and systematic review of the number of non-syndromic congenitally missing permanent teeth per affected individual and its influencing factors

    PubMed Central

    Rakhshan, Hamid

    2016-01-01

    Background and purpose: Dental aplasia (or hypodontia) is a frequent and challenging anomaly and thus of interest to many dental fields. Although the number of missing teeth (NMT) in each person is a major clinical determinant of treatment need, there is no meta-analysis on this subject. Therefore, we aimed to investigate the relevant literature, including epidemiological studies and research on dental/orthodontic patients. Methods: Among 50 reports, the effects of ethnicities, regions, sample sizes/types, subjects’ minimum ages, journals’ scientific credit, publication year, and gender composition of samples on the number of missing permanent teeth (except the third molars) per person were statistically analysed (α = 0.05, 0.025, 0.01). Limitations: The inclusion of small studies and second-hand information might reduce reliability. Nevertheless, these strategies increased the meta-sample size and favoured generalisability. Moreover, data weighting was carried out to account for the effect of study sizes/precisions. Results: The NMT per affected person was 1.675 [95% confidence interval (CI) = 1.621–1.728], 1.987 (95% CI = 1.949–2.024), and 1.893 (95% CI = 1.864–1.923) in randomly selected subjects, dental/orthodontic patients, and both groups combined, respectively. The effects of ethnicities (P > 0.9), continents (P > 0.3), and time (adjusting for the population type, P = 0.7) were not significant. Dental/orthodontic patients exhibited a significantly greater NMT compared to randomly selected subjects (P < 0.012). Larger samples (P = 0.000) and enrolling younger individuals (P = 0.000) might inflate the observed NMT per person. Conclusions: Time, ethnic background, and continent seem unlikely to be influencing factors. Subjects younger than 13 years should be excluded. Larger samples should be investigated by more observers. PMID:25840586

  17. Sociobehavioral Factors Associated with Caries Increment: A Longitudinal Study from 24 to 36 Months Old Children in Thailand

    PubMed Central

    Peltzer, Karl; Mongkolchati, Aroonsri; Satchaiyan, Gamon; Rajchagool, Sunsanee; Pimpak, Taksin

    2014-01-01

    The aim of this study is to investigate sociobehavioral risk factors from the prenatal period until 36 months of age, and the caries increment from 24 to 36 months, in children in Thailand. The data utilized in this study come from the prospective cohort study of Thai children (PCTC), following children from the prenatal period to 36 months of age in Mueang Nan district, Northern Thailand. The total sample recruited was 783 infants. The sample size with dental caries data was 603 at 24 months and 597 at 36 months; 597 children had dental examinations at both assessment points. Results indicate that the caries increment was 52.9%: of the 365 caries-free children at 24 months, 193 had developed dental caries by 36 months. The prevalence of dental caries was 34.2% at 24 months (n = 206) and 68.5% at 36 months of age (n = 409). In bivariate analysis, higher education of the mother, lower household income, bottle feeding of the infant, frequent sweet candy consumption, and using rain or well water as drinking water were associated with dental caries increment, while in multivariate conditional logistic regression analysis lower household income, higher education of the mother, and using rain or well water as drinking water remained associated with dental caries increment. In conclusion, a very significant increase in caries development was observed, and oral health may be influenced by sociobehavioral risk factors. PMID:25329535

  18. Assessing differences in macrofaunal assemblages as a factor of sieve mesh size, distance between samples, and time of sampling.

    PubMed

    Hemery, Lenaïg G; Politano, Kristin K; Henkel, Sarah K

    2017-08-01

    With the increasing cascading effects of climate change on the marine environment, as well as pollution and anthropogenic utilization of the seafloor, there is growing interest in tracking changes to benthic communities. Macrofaunal surveys are traditionally conducted as part of pre-incident environmental assessment studies and post-incident monitoring studies when there is a potential impact to the seafloor. These surveys usually characterize the structure and/or spatiotemporal distribution of macrofaunal assemblages collected with sediment cores; however, many different sampling protocols have been used. An assessment of the comparability of past and current survey methods was needed to facilitate future surveys and comparisons. This was the aim of the present study, conducted off the Oregon coast in waters 25-35 m deep. Our results show that the use of a sieve with a 1.0-mm mesh size gives results for community structure comparable to those obtained with a 0.5-mm mesh size, which allows reliable comparisons of recent and past spatiotemporal surveys of macroinfauna. In addition to our primary objective of comparing methods, we also found interacting effects of season and depth of collection. Seasonal differences (summer and fall) were seen in infaunal assemblages in the wave-induced sediment motion zone but not deeper. Thus, studies where wave-induced sediment motion can structure the benthic communities, especially during the winter months, should consider this effect when making temporal comparisons. In addition, some macrofaunal taxa, such as polychaetes and amphipods, show high interannual variability, so spatiotemporal studies should make sure to cover several years before drawing any conclusions.

  19. Recent trends in the prevalence of under- and overweight among adolescent girls in low- and middle-income countries

    PubMed Central

    Jaacks, Lindsay M.; Slining, Meghan M.; Popkin, Barry M.

    2014-01-01

    Background Most studies of childhood malnutrition in low- and middle-income countries (LMICs) focus on children <5 years, with few focusing on adolescence, a critical stage in development. Objective To evaluate recent trends in the prevalence of under- and overweight among girls (15–18 years) in LMICs. Methods Data are from Demographic and Health Surveys (53 countries) and national surveys conducted in Indonesia, China, Vietnam, Brazil, and Mexico. The most recent surveys with sample sizes ≥50 when stratified by rural-urban status were included: 46.6% of countries had a survey conducted in the past 5 years, while the most recent survey for 10.3% of countries was over 10 years old. The overall rural sample size was 94,857 and urban sample size was 81,025. Under- and overweight were defined using the IOTF sex- and age-specific BMI cut-points. Results South Asia had the highest prevalence of underweight; nearly double that of East Asia & the Pacific and Sub-Saharan Africa, and increasing annually by 0.66% in rural areas. Latin America & the Caribbean had the highest regional prevalence of overweight in both rural and urban settings and this prevalence is increasing annually by about 0.50%. In urban areas, 38% of countries had both an under- and overweight prevalence ≥10%. Conclusions There is substantial variation across and within regions in the burden of under- and overweight, with increasing dual burdens in urban areas. Innovative public health interventions capable of addressing both ends of the malnutrition spectrum are urgently needed. PMID:25558987

  20. Bayesian evaluation of effect size after replicating an original study

    PubMed Central

    van Aert, Robbie C. M.; van Assen, Marcel A. L. M.

    2017-01-01

    The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and that quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance, and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of especially the included studies in RPP are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy to use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
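
    The core idea can be sketched as a Bayesian comparison of four point effect sizes; this simplified version omits the publication-bias adjustment that the snapshot hybrid method applies to the original study, and the observed effect and standard error below are hypothetical.

        from scipy.stats import norm

        d_obs, se = 0.31, 0.12                   # hypothetical observed effect and its SE
        effects = {"zero": 0.0, "small": 0.2, "medium": 0.5, "large": 0.8}

        # likelihood of the observed effect under each point hypothesis, equal prior weights
        like = {name: norm.pdf(d_obs, loc=delta, scale=se) for name, delta in effects.items()}
        total = sum(like.values())
        for name, l in like.items():
            print(f"P({name} effect | data) = {l / total:.3f}")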

  1. Proteoglycan depletion and size reduction in lesions of early grade chondromalacia of the patella.

    PubMed Central

    Väätäinen, U; Häkkinen, T; Kiviranta, I; Jaroma, H; Inkinen, R; Tammi, M

    1995-01-01

    OBJECTIVE--To determine the content and molecular size of proteoglycans (PGs) in patellar chondromalacia (CM) and control cartilages as a first step in investigating the role of matrix alterations in the pathogenesis of this disease. METHODS--Chondromalacia tissue from 10 patients was removed with a surgical knife. Using identical techniques, apparently healthy cartilage from the same site was obtained from 10 age-matched cadavers (mean age 31 years in both groups). Additional pathological cartilage was collected from 67 patients with grades II-IV CM (classified according to Outerbridge) using a motorised shaver under arthroscopic control. The shaved cartilage chips were collected with a dense net from the irrigation fluid of the shaver. The content of tissue PGs was determined by Safranin O precipitation or uronic acid content, and the molecular size by mobility on agarose gel electrophoresis. RESULTS--The mean PG content of the CM tissue samples removed with a knife was dramatically reduced, being only 15% of that in controls. The cartilage chips collected from shaving operations of grades II, III, and IV CM showed a decreasing PG content: 9%, 5%, and 1% of controls, respectively. Electrophoretic analysis of PGs extracted with guanidinium chloride from the shaved tissue samples suggested a significantly reduced size of aggrecans in the mild (grade II) lesions. CONCLUSION--These data show that there is already a dramatic and progressive depletion of PGs in CM grade II lesions. This explains the softening of cartilage, a typical finding in the arthroscopic examination of CM. The PG size reduction observed in grade II implicates proteolytic attack as a factor in the pathogenesis of CM. PMID:7492223

  2. Utility of the AAMC’s Graduation Questionnaire to Study Behavioral and Social Sciences Domains in Undergraduate Medical Education

    PubMed Central

    Carney, Patricia A.; Rdesinski, Rebecca; Blank, Arthur E.; Graham, Mark; Wimmers, Paul; Chen, H. Carrie; Thompson, Britta; Jackson, Stacey A.; Foertsch, Julie; Hollar, David

    2010-01-01

    Purpose The Institute of Medicine (IOM) report on social and behavioral sciences (SBS) indicated that 50% of morbidity and mortality in the United States is associated with SBS factors, which the report also found were inadequately taught in medical school. A multischool collaborative explored whether the Association of American Medical Colleges Graduation Questionnaire (GQ) could be used to study changes in the six SBS domains identified in the IOM report. Method A content analysis conducted with the GQ identified 30 SBS variables, which were narrowed to 24 using a modified Delphi approach. Summary data were pooled from nine medical schools for 2006 and 2007, representing 1,126 students. Data were generated on students’ perceptions of curricular experiences, attitudes related to SBS curricula, and confidence with relevant clinical knowledge and skills. The authors determined the sample sizes required for various effect sizes to assess the utility of the GQ. Results The 24 variables were classified into five of six IOM domains representing a total of nine analytic categories with cumulative scale means ranging from 60.8 to 93.4. Taking into account the correlations among measures over time, and assuming a two-sided test, 80% power, alpha at .05, and standard deviation of 4.1, the authors found that 34 medical schools would be required for inclusion to attain an estimated effect size of 0.50 (50%). With a sample size of nine schools, the ability to detect changes would require a very high effect size of 107%. Conclusions Detecting SBS changes associated with curricular innovations would require a large collaborative of medical schools. Using a national measure (the GQ) to assess curricular innovations in most areas of SBS is possible if enough medical schools were involved in such an effort. PMID:20042845

  3. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
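
    A sketch of the service-multiplier estimate with a delta-method interval, the quantity whose random error the sample size calculation targets; every number below is hypothetical.

        import math

        M = 5000              # hypothetical count of unique objects distributed
        p_hat = 0.25          # survey proportion reporting receipt of an object
        n, deff = 600, 2.0    # RDS sample size and assumed design effect

        N_hat = M / p_hat                            # population size estimate
        var_p = deff * p_hat * (1 - p_hat) / n       # variance of p_hat, design effect applied
        se_N = (M / p_hat**2) * math.sqrt(var_p)     # delta method: |dN/dp| = M / p^2
        print(f"N_hat = {N_hat:.0f}, 95% CI "
              f"({N_hat - 1.96 * se_N:.0f}, {N_hat + 1.96 * se_N:.0f})")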

  4. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters. Comparisons indicated that our measure of relative efficiency can be greater than the measure in the literature under some conditions and smaller under others, in which case it underestimates the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative to, and a useful complement to, existing methods.
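
    For comparison with the noncentrality-based measure described above, one commonly cited approximation (due to Eldridge and colleagues) folds the coefficient of variation of cluster sizes into the usual design effect; a sketch with illustrative parameters:

        def design_effect_unequal(mean_m, cv, icc):
            # approximate design effect with unequal cluster sizes:
            # DE = 1 + ((cv^2 + 1) * mean cluster size - 1) * ICC
            return 1 + ((cv**2 + 1) * mean_m - 1) * icc

        for cv in (0.0, 0.4, 0.8):   # cv = 0 recovers the equal-cluster-size design effect
            print(f"CV = {cv:.1f}: design effect {design_effect_unequal(50, cv, 0.05):.2f}")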

  5. Pentoxifylline for Anemia in Chronic Kidney Disease: A Systematic Review and Meta-Analysis

    PubMed Central

    Bolignano, Davide; D’Arrigo, Graziella; Pisano, Anna; Coppolino, Giuseppe

    2015-01-01

    Background Pentoxifylline (PTX) is a promising therapeutic approach for reducing inflammation and improving anemia associated with various systemic disorders. However, whether this agent may also be helpful for anemia management in CKD patients is still a matter of debate. Study Design Systematic review and meta-analysis. Population Adults with CKD (any KDOQI stage, including ESKD patients on regular dialysis) and anemia (Hb <13 g/dL in men or <12 g/dL in women). Search Strategy and Sources Cochrane CENTRAL, EMBASE, Ovid-MEDLINE and PubMed were searched for studies providing data on the effects of PTX on anemia parameters in CKD patients, without design or follow-up restriction. Intervention PTX derivatives at any dose regimen. Outcomes Hemoglobin, hematocrit, ESA dosage and resistance (ERI), iron indexes (ferritin, serum iron, TIBC, transferrin and serum hepcidin) and adverse events. Results We retrieved 11 studies (377 patients), including seven randomized controlled trials (all comparing PTX to placebo or standard therapy), one retrospective case-control study and three prospective uncontrolled studies. Overall, PTX increased hemoglobin in three uncontrolled studies, but this improvement was not confirmed in a meta-analysis of seven studies (299 patients) (MD 0.12 g/dL, 95% CI -0.22 to 0.47). Similarly, there were no conclusive effects of PTX on hematocrit, ESA dose, ferritin and TSAT in pooled analyses. Data on serum iron, ERI, TIBC and hepcidin were based on single studies. No evidence of an increased rate of adverse events was observed. Limitations Small sample sizes and a limited number of studies. High heterogeneity among studies with respect to CKD and anemia severity, duration of intervention, and responsiveness to or current therapy with iron or ESAs. Conclusions There is currently no conclusive evidence supporting the utility of pentoxifylline for improving anemia control in CKD patients. Future trials designed on hard, patient-centered outcomes with larger sample sizes and longer follow-up are advocated. PMID:26237421
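
    The pooled mean difference quoted above is an inverse-variance weighted average; below is a minimal fixed-effect sketch with made-up per-study values (the actual analysis may have used a random-effects model).

        import numpy as np

        # hypothetical per-study mean differences in hemoglobin (g/dL) and standard errors
        md = np.array([0.30, -0.10, 0.25, 0.05, -0.20, 0.40, 0.10])
        se = np.array([0.35, 0.30, 0.45, 0.25, 0.40, 0.50, 0.30])

        w = 1 / se**2                             # inverse-variance weights
        pooled = np.sum(w * md) / np.sum(w)
        pooled_se = 1 / np.sqrt(np.sum(w))
        print(f"pooled MD = {pooled:.2f} g/dL, 95% CI "
              f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")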

  6. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes in a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine whether these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-µm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 µm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-µm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample, which was absent in the <63-µm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-µm size-fraction sample.

  7. Efficacy, effectiveness and cost-effectiveness of acupuncture for allergic rhinitis - An overview about previous and ongoing studies.

    PubMed

    Witt, C M; Brinkhaus, B

    2010-10-28

    In general, allergic rhinitis can be divided into seasonal allergic rhinitis (SAR) and perennial allergic rhinitis (PAR). In the following sections a summary of efficacy and effectiveness studies is presented. For this narrative review we selected studies based on the following parameters: publication in English, sample size ≥30 patients, and at least 6 acupuncture sessions. Most studies aimed to evaluate the specific effects of acupuncture treatment. Only one study evaluated the effectiveness and cost-effectiveness of additional acupuncture treatment. The studies which compared acupuncture with sham acupuncture always used a penetrating sham control. A medication control group was used in only two studies, and one study combined acupuncture and Chinese herbal medicine. This overview shows that the trials on efficacy and on effectiveness of acupuncture are very heterogeneous. Although penetrating sham controls were used predominantly, these also varied from superficial penetration at acupuncture points to superficial insertion at non-acupuncture points. Although there is some evidence that acupuncture as additional treatment is beneficial and relatively cost-effective, there is insufficient evidence for an acupuncture-specific effect in SAR. In contrast, there is some evidence that acupuncture might have specific effects in patients with PAR. However, all of the published efficacy studies are small and conclusions should be drawn with care. Further studies with larger sample sizes are urgently needed to draw more rigorous conclusions, and the results of the ongoing trials will provide further information within the next two years. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    DeSalvo, L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected values of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty of calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric thus results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
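
    The zero-acceptance logic described in this record is easy to reproduce. The sketch below (Python rather than the original Lotus 1-2-3/Quattro Pro spreadsheet, and an illustration of the idea rather than the distributed program) searches for the smallest n at which a sample with zero nonconforming units still meets the stated confidence; it reproduces the 273-unit figure quoted for a lot of 400 with 1% nonconforming at 99% confidence.

        from math import comb

        def hypergeometric_min_n(lot_size, defectives, confidence):
            """Smallest n such that P(zero defectives in the sample) <= 1 - confidence."""
            beta = 1.0 - confidence  # consumer's risk
            for n in range(1, lot_size + 1):
                # Hypergeometric P(X = 0) for a sample of n drawn without replacement
                p_accept = comb(lot_size - defectives, n) / comb(lot_size, n)
                if p_accept <= beta:
                    return n
            return lot_size  # 100% inspection required

        # Example from the abstract: lot of 400, 1% nonconforming (4 units), 99% confidence
        print(hypergeometric_min_n(400, 4, 0.99))  # -> 273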

  9. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
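
    The practical cost of such small samples can be illustrated with a rough precision calculation. The sketch below uses the Fisher z approximation for a correlation-type coefficient (an illustrative simplification, not Charter's exact method; the reliability value of .80 is assumed) to show how wide a 95% confidence interval remains at the median interjudge N of 36 compared with the overall median of 90 and mean of 260.

        import math

        def reliability_ci(r, n, zcrit=1.96):
            """Approximate 95% CI for a correlation-type reliability via Fisher's z."""
            z = math.atanh(r)
            se = 1.0 / math.sqrt(n - 3)
            return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

        for n in (36, 90, 260):
            lo, hi = reliability_ci(0.80, n)
            print(f"n={n:4d}: 95% CI ({lo:.2f}, {hi:.2f})")
        # n=36 gives roughly (.64, .89); only at n=260 does this narrow to about (.75, .84)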

  10. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  11. Impact of root canal preparation size and taper on coronal-apical micro-leakage using glucose penetration method

    PubMed Central

    Tabrizizadeh, Mehdi; Hekmati-Moghadam, Seyed-Hossein; Hakimian, Roqayeh

    2014-01-01

    Objectives: The purpose of this in vitro study was to assess the effect of root canal preparation size and taper on the amount of glucose penetration. Material and Methods: For this experimental study, eighty mandibular premolars with single straight canals were divided randomly into 2 experimental groups of 30 samples each and 2 control groups. Using K-files and the balanced force technique, canals in group 1 were prepared apically to size 25 and coronally with a size 2 Peeso reamer; canals in group 2 were instrumented apically to size 40 and coronally with a size 6 Peeso reamer. Rotary instrumentation was accomplished with size 25, .04 taper FlexMaster files in group 1 and size 35, .06 taper FlexMaster files in group 2. Canals were then obturated by lateral compaction of cold gutta-percha. Glucose penetration through root canal fillings was measured at 1, 8, 15, 22 and 30 days. Data were recorded as mmol/L and statistically analyzed with the Mann-Whitney U test (P value = .05). Results: In comparison to group 1, group 2 showed significantly greater glucose leakage during the experimental period (P value < .0001). Also, in each experimental group, the amount of micro-leakage increased significantly by the end of the study. Conclusions: Under the conditions of this study, the amount of micro-leakage through root canal fillings is directly related to the size and taper of root canal preparation, and reducing the preparation size may lead to less micro-leakage. Key words: Dental leakage, root canal preparation, endodontics. PMID:25593654

  12. Phosphorus content as a function of soil aggregate size and paddy cultivation in highly weathered soils.

    PubMed

    Li, Baozhen; Ge, Tida; Xiao, Heai; Zhu, Zhenke; Li, Yong; Shibistova, Olga; Liu, Shoulong; Wu, Jinshui; Inubushi, Kazuyuki; Guggenberger, Georg

    2016-04-01

    Red soils are the major land resource in subtropical and tropical areas and are characterized by low phosphorus (P) availability. To assess the availability of P for plants and the potential stability of P in soil, two pairs of subtropical red soil samples from a paddy field and an adjacent uncultivated upland were collected from Hunan Province, China. Analysis of total P and Olsen P and sequential extraction was used to determine the inorganic and organic P fractions in different aggregate size classes. Our results showed that the soil under paddy cultivation had lower proportions of small aggregates and higher proportions of large aggregates than the uncultivated upland soil. The portion of >2-mm aggregates increased by 31% and 20% at Taoyuan and Guiyang, respectively. The total P and Olsen P contents were 50-150% and 50-300% higher, respectively, in the paddy soil than in the upland soil. Inorganic and organic P fractions tended to be enriched in both the smallest and largest aggregate size classes compared with the middle size class (0.02-0.2 mm). Furthermore, the proportion of P fractions was higher in the smaller aggregate size classes (<2 mm) than in the larger ones (>2 mm). In conclusion, soils under paddy cultivation displayed improved soil aggregate structure, altered distribution patterns of P fractions across aggregate size classes, and, to some extent, enhanced labile P pools.

  13. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage

    PubMed Central

    Lobo, Elena; Dalling, James W.

    2014-01-01

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size and the criteria used to define gaps; and (ii) evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across-site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition. PMID:24452032

  14. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  15. Neuropsychological tests for predicting cognitive decline in older adults

    PubMed Central

    Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W

    2015-01-01

    Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318

  16. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample size was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size on trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
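
    Auditing a reported calculation of the kind this survey describes is mechanical once the level of significance, power, effect size, and variability are stated. A minimal sketch for the common two-arm comparison of means (one standard normal-approximation formula; the audited trials may have used other designs, and the numbers below are assumed for illustration):

        from math import ceil
        from statistics import NormalDist

        def n_per_arm(delta, sd, alpha=0.05, power=0.80):
            """Normal-approximation n per arm, two-sided two-sample comparison of means."""
            z = NormalDist().inv_cdf
            za, zb = z(1 - alpha / 2), z(power)
            return ceil(2 * ((za + zb) * sd / delta) ** 2)

        # e.g. detecting a 5-point difference with SD 12 at alpha = .05 and 80% power
        print(n_per_arm(delta=5, sd=12))  # -> 91 per arm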

  17. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
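
    The simulation approach the abstract refers to can be sketched schematically. The following is a re-implementation of the general idea under assumed parameters, not the author's exact algorithm: a blinded interim look estimates the variance from the pooled (one-sample) data, the per-arm sample size is recalculated, and the final two-sample t-test is evaluated over many replicates.

        import numpy as np
        from math import ceil
        from scipy import stats

        rng = np.random.default_rng(1)

        def one_trial(delta, sd_true, n_pilot, alpha=0.05, power=0.90, plan_delta=0.5):
            # Internal pilot, blinded: pool both arms, use the one-sample SD estimator
            a = rng.normal(0.0, sd_true, n_pilot)
            b = rng.normal(delta, sd_true, n_pilot)
            s_blinded = np.concatenate([a, b]).std(ddof=1)  # slightly inflated by delta
            z = stats.norm.ppf
            n_new = ceil(2 * ((z(1 - alpha / 2) + z(power)) * s_blinded / plan_delta) ** 2)
            n_final = max(n_new, n_pilot)  # never shrink below the pilot
            a = np.concatenate([a, rng.normal(0.0, sd_true, n_final - n_pilot)])
            b = np.concatenate([b, rng.normal(delta, sd_true, n_final - n_pilot)])
            return stats.ttest_ind(a, b).pvalue < alpha

        # Empirical type I error (delta = 0) and power (delta = 0.5), SD guessed low
        for delta in (0.0, 0.5):
            hits = sum(one_trial(delta, sd_true=1.2, n_pilot=30) for _ in range(2000))
            print(f"delta={delta}: rejection rate {hits / 2000:.3f}")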

  18. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculation between participants attending a lecture only and participants attending a lecture combined with use of a smartphone application to calculate sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures combined with a smartphone application were given in the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10, 95% CI: 2.4 to 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 to 1.8). Participants doing research projects had a higher post-test score than those who did not plan to conduct research projects (by 0.9 points, 95% CI: 0.5 to 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  19. On evaluating compliance with air pollution levels not to be exceeded more than once per year

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Sidik, S. M.

    1974-01-01

    The adequacy of currently practiced monitoring and data reduction techniques for assessing compliance with 24-hour Air Quality Standards (AQS) not to be exceeded more than once per year is considered. The present situation for suspended particulates is discussed. The following conclusions are reached: (1) For typical less-than-daily sampling (i.e., 60 to 120 24-hour samples per year), the deviation from independence of the data set should not be substantial. (2) The interchange of exponentiation and expectation operations in the EPA data reduction model underestimates the second highest level by about 4 to 8 percent for typical sigma values. (3) Estimates of the second highest pollution level carry a large statistical variability arising from the finite size of the sample. The 0.95 confidence interval ranges from ±40 percent for 120 samples per year to ±84 percent for 30 samples per year. (4) The design value suggested by EPA for abatement and/or control planning purposes typically gives a margin of safety of 60 to 120 percent.
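
    Conclusion (3) is straightforward to reproduce by simulation. The sketch below assumes the lognormal model commonly applied to suspended-particulate data, with purely illustrative parameters, and examines the sampling variability of the second-highest observed concentration for different numbers of 24-hour samples per year.

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed lognormal air-quality model (illustrative): geometric mean 60, geometric SD 1.6
        mu, sigma = np.log(60.0), np.log(1.6)

        def second_highest(n_samples):
            return np.sort(rng.lognormal(mu, sigma, n_samples))[-2]

        for n in (30, 60, 120):
            draws = np.array([second_highest(n) for _ in range(5000)])
            lo, hi = np.percentile(draws, [2.5, 97.5])
            print(f"{n:3d} samples/yr: median {np.median(draws):.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
        # The interval widens sharply as the number of 24-hour samples per year drops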

  20. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
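
    For the two-group special case, the noncentrality-based reasoning can be sketched directly: the Welch statistic is approximately noncentral t with Welch-Satterthwaite degrees of freedom, so power at candidate group sizes can be computed and n increased until the target is met. This is a generic illustration with assumed means and SDs, not the authors' development for the k-group Welch test.

        from scipy import stats

        def welch_power(n1, n2, mu_diff, sd1, sd2, alpha=0.05):
            """Approximate power of the two-sided Welch t-test via the noncentral t."""
            se2 = sd1**2 / n1 + sd2**2 / n2
            ncp = mu_diff / se2**0.5
            df = se2**2 / ((sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1))
            tcrit = stats.t.ppf(1 - alpha / 2, df)
            return stats.nct.sf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

        # Grow equal group sizes until 80% power for a difference of 2 with SDs 3 and 6
        n = 2
        while welch_power(n, n, 2.0, 3.0, 6.0) < 0.80:
            n += 1
        print(n, round(welch_power(n, n, 2.0, 3.0, 6.0), 3))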

  1. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
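
    The Monte Carlo idea is simple to sketch, here in Python rather than the article's R, with an assumed model (slope 0.3, unit-variance normal predictor and error): simulate data from the model of interest at a candidate n, record how often the coefficient is detected, and step n up until the empirical power reaches the target.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)

        def empirical_power(n, beta=0.3, n_sims=1000, alpha=0.05):
            hits = 0
            for _ in range(n_sims):
                x = rng.normal(size=n)
                y = beta * x + rng.normal(size=n)  # assumed model: y = 0.3x + e
                fit = sm.OLS(y, sm.add_constant(x)).fit()
                hits += fit.pvalues[1] < alpha
            return hits / n_sims

        n = 20
        while empirical_power(n) < 0.80:
            n += 10
        print(f"about {n} observations reach 80% power for beta = 0.3")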

  2. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but a low kappa value. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, together with nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
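
    A simplified version of the proportion-of-agreement idea: if the analysis is planned around the simple proportion of agreement rather than kappa itself, the familiar precision formula for a proportion gives a first-pass n. This is an illustration of the general approach with assumed inputs, not the paper's goodness-of-fit-based formula.

        from math import ceil
        from statistics import NormalDist

        def n_for_agreement(p_agree, half_width, conf=0.95):
            """Subjects needed to estimate a proportion of agreement to within +/- half_width."""
            z = NormalDist().inv_cdf(0.5 + conf / 2)
            return ceil(z**2 * p_agree * (1 - p_agree) / half_width**2)

        # e.g. expected agreement 0.85, desired 95% CI half-width 0.05
        print(n_for_agreement(0.85, 0.05))  # -> 196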

  3. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
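
    As background to the optimal and maximin results, the basic cluster-trial inflation is worth keeping in view: a sample size computed for individual randomization is multiplied by the design effect 1 + (m - 1) x ICC for clusters of size m. The sketch below is this standard textbook relation with assumed inputs, not the paper's cost-constrained optimum.

        from math import ceil

        def cluster_sample_size(n_individual, m, icc):
            """Inflate an individually randomized n per arm for clustering.

            Returns (subjects per arm, clusters per arm) for cluster size m."""
            deff = 1 + (m - 1) * icc
            n_total = ceil(n_individual * deff)
            return n_total, ceil(n_total / m)

        # 91 subjects/arm under individual randomization; clusters of 20, ICC = 0.05
        print(cluster_sample_size(91, 20, 0.05))  # -> (178, 9)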

  4. On the road toward the development of clothing size standards and safety devices for Chilean workers.

    PubMed

    Oñate, Esteban; Meyer, Felipe; Espinoza, Jorge

    2012-01-01

    The range of sizes used in Chile for clothing comes from criteria developed in continental Europe, mainly the EN 13402 standard. Any standard adopted by a country should consider the anthropometric dimensions of the user population, particularly to discern the ratio of garments needed for different sizes. Consequently, the purpose of this study was to propose standards for clothing sizes based on anthropometric characteristics of a sample of Chilean miners. The study was conducted in 447 male workers. The age and body weight of each worker were measured, as well as their percentage of body fat. Anthropometric measurements for garments were made according to the criteria of the European Community (EN 13402-1) and ISO (8559-1989). Body dimensions were measured for the design of gloves, shoes, helmets and caps, clothes that cover the upper part of the body, and clothes that cover the lower part. The results made it possible to establish the percentage of workers falling within the range of sizes that manufacturers consider as reference. One of the main conclusions of the study is the need to carefully consider a set of complementary anthropometric measures, which can help to improve the comfort of garments, to the extent that providers adapt their designs to the characteristics of Chilean workers.

  5. Composition of Metallic Elements and Size Distribution of Fine and Ultrafine Particles in a Steelmaking Factory.

    PubMed

    Marcias, Gabriele; Fostinelli, Jacopo; Catalani, Simona; Uras, Michele; Sanna, Andrea Maurizio; Avataneo, Giuseppe; De Palma, Giuseppe; Fabbri, Daniele; Paganelli, Matteo; Lecca, Luigi Isaia; Buonanno, Giorgio; Campagna, Marcello

    2018-06-07

    The characteristics of aerosol, in particular particle size and chemical composition, can have an impact on human health. Particle size distribution and chemical composition are necessary parameters in occupational exposure assessment conducted to understand possible health effects. The aim of this study was to characterize workplace airborne particulate matter in a metallurgical setting by synergistically using two different approaches. Methodology: Inhalable fraction concentrations were analyzed with traditional sampling equipment, and ultrafine particle (UFP) concentrations and size distributions were measured with an Electric Low-Pressure Impactor (ELPI+™). The determination of metallic elements (ME) in particles was carried out by inductively coupled plasma mass spectrometry. Results: Inhalable fraction and ME concentrations were below the limits set by Italian legislation and the American Conference of Governmental Industrial Hygienists (ACGIH, 2017). The median UFP concentration was between 4.00 × 10⁴ and 2.92 × 10⁵ particles/cm³. ME concentrations determined in the particles collected by ELPI show differences in size range distribution. Conclusions: The adopted synergistic approach enabled a qualitative and quantitative assessment of the particles in steelmaking factories. The results could lead to better knowledge of occupational exposure characterization, in turn affording a better understanding of occupational health issues due to metal fume exposure.

  6. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards

    PubMed Central

    Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.

    2015-01-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842

  7. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards.

    PubMed

    Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E

    2015-10-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.

  8. Highly sensitive molecular diagnosis of prostate cancer using surplus material washed off from biopsy needles

    PubMed Central

    Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L

    2011-01-01

    Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
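
    The discriminant-analysis step is conceptually simple, and the general pattern can be sketched on synthetic data. The code below (scikit-learn, with invented dimensions loosely matching the 53-patient, 11-gene qRT-PCR panel) is a hedged illustration of cross-validated linear discriminant classification, not the authors' 318-gene pipeline or their six-gene signature.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Synthetic expression data: 53 samples x 11 genes, 6 genes mildly informative
        n_samples, n_genes, informative = 53, 11, 6
        y = rng.integers(0, 2, n_samples)       # 0 = benign, 1 = tumoural
        X = rng.normal(size=(n_samples, n_genes))
        X[:, :informative] += 1.2 * y[:, None]  # shift informative genes in tumours

        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")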

  9. Dental arch dimensions, form and tooth size ratio among a Saudi sample

    PubMed Central

    Omar, Haidi; Alhajrasi, Manar; Felemban, Nayef; Hassan, Ali

    2018-01-01

    Objectives: To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured. The measured parameters were arch length, arch width, Bolton’s ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p<0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for anterior ratio and 24.8% for overall ratio. The mean Bolton’s anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton’s overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton’s ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was more in Bolton’s anterior teeth ratio than in overall ratio. PMID:29332114

  10. Pollinator communities in strawberry crops - variation at multiple spatial scales.

    PubMed

    Ahrenfeldt, E J; Klatt, B K; Arildsen, J; Trandem, N; Andersson, G K S; Tscharntke, T; Smith, H G; Sigsgaard, L

    2015-08-01

    Predicting potential pollination services of wild bees in crops requires knowledge of their spatial distribution within fields. Field margins can serve as nesting and foraging habitats for wild bees and can be a source of pollinators. Regional differences in pollinator community composition may affect this spill-over of bees. We studied how regional and local differences affect the spatial distribution of wild bee species richness, activity-density and body size in crop fields. We sampled bees both from the field centre and at two different types of semi-natural field margins, grass strips and hedges, in 12 strawberry fields. The fields were distributed over four regions in Northern Europe, representing an almost 1100 km long north-south gradient. Even over this gradient, daytime temperatures during sampling did not differ significantly between regions and therefore probably did not affect bee activity. Bee species richness was higher in field margins compared with field centres, independent of field size. However, there was no difference between centre and margin in body size or activity-density. In contrast, bee activity-density increased towards the southern regions, whereas mean body size increased towards the north. In conclusion, our study revealed a general pattern across European regions of bee diversity, but not activity-density, declining towards the field interior, which suggests that the benefits of functional diversity of pollinators may be difficult to achieve through spill-over effects from margins to the crop. We also identified dissimilar regional patterns in bee diversity and activity-density, which should be taken into account in conservation management.

  11. Rapid screening of the antimicrobial efficacy of Ag zeolites.

    PubMed

    Tosheva, L; Belkhair, S; Gackowski, M; Malic, S; Al-Shanti, N; Verran, J

    2017-09-01

    A semi-quantitative screening method was used to compare the killing efficacy of Ag zeolites against bacteria and yeast as a function of zeolite type, crystal size and concentration. The method, which substantially reduced labor, consumables and waste and provided an excellent preliminary screen, was further validated by quantitative plate count experiments. Two pairs of zeolite X and zeolite beta with different crystal sizes (ca. 200 nm and 2 μm for zeolite X, and ca. 250 and 500 nm for zeolite beta) were tested against Escherichia coli (E. coli) and Candida albicans (C. albicans) at concentrations in the range 0.05-0.5 mg ml⁻¹. Reduction of the zeolite crystal size resulted in a decrease in the killing efficacy against both microorganisms. The semi-quantitative tests allowed convenient optimization of the zeolite concentrations to achieve targeted killing times. Zeolite beta samples showed higher activity compared to zeolite X despite their lower Ag content, which was attributed to the higher concentration of silver released from zeolite beta samples. Cytotoxicity measurements using peripheral blood mononuclear cells (PBMCs) indicated that Ag zeolite X was more toxic than Ag zeolite beta. However, the trends for the dependence of cytotoxicity on zeolite crystal size at different zeolite concentrations were different for the two zeolites, and no general conclusions about zeolite cytotoxicity could be drawn from these experiments. This result indicates a complex relationship and the necessity of individual cytotoxicity measurements for all antimicrobial applications based on the use of zeolites. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
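
    The fixed-design version of the co-primary problem shows why sample sizes grow: when both endpoints must reach significance, power is the joint probability that both test statistics clear their critical values. The sketch below uses a bivariate-normal approximation with an assumed correlation between endpoints and omits the group-sequential machinery that is the paper's actual subject.

        import numpy as np
        from scipy import stats

        def joint_power(n_per_arm, d1, d2, rho, alpha=0.025):
            """P(both one-sided z-tests significant), standardized effects d1 and d2,
            endpoint correlation rho, two-arm design, normal approximation."""
            zcrit = stats.norm.ppf(1 - alpha)
            m1, m2 = np.sqrt(n_per_arm / 2.0) * np.array([d1, d2])
            both_below = stats.multivariate_normal([m1, m2], [[1, rho], [rho, 1]]).cdf([zcrit, zcrit])
            # P(Z1 > c, Z2 > c) by inclusion-exclusion
            return 1 - stats.norm.cdf(zcrit - m1) - stats.norm.cdf(zcrit - m2) + both_below

        for n in (100, 150, 200, 250):
            print(n, round(joint_power(n, 0.3, 0.3, 0.5), 3))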

  13. Physical therapy treatments for low back pain in children and adolescents: a meta-analysis

    PubMed Central

    2013-01-01

    Background Low back pain (LBP) in adolescents is associated with LBP in later years. In recent years treatments have been administered to adolescents for LBP, but it is not known which physical therapy treatment is the most efficacious. By means of a meta-analysis, the current study investigated the effectiveness of physical therapy treatments for LBP in children and adolescents. Methods Studies in English, Spanish, French, Italian and Portuguese, carried out by March 2011, were selected by electronic and manual search. Two independent researchers coded the moderator variables of the studies and performed the effect size calculations. The mean effect size index used was the standardized mean change between the pretest and posttest, and it was applied separately for each combination of outcome measure (pain, disability, flexibility, endurance and mental health) and measurement type (self-reports and clinician assessments). Results Eight articles that met the selection criteria enabled us to define 11 treatment groups and 5 control groups, using the group as the unit of analysis. The 16 groups involved a total sample of 334 subjects at the posttest (221 in the treatment groups and 113 in the control groups). For all outcome measures, the average effect size of the treatment groups was statistically and clinically significant, whereas the control groups had negative average effect sizes that were not statistically significant. Conclusions Of all the physical therapy treatments for LBP in children and adolescents, the combination of therapeutic physical conditioning and manual therapy is the most effective. The low number of studies and control groups and the methodological limitations of this meta-analysis prevent us from drawing definitive conclusions about the efficacy of physical therapy treatments for LBP. PMID:23374375

  14. Aerosol-Cloud Interactions during Tropical Deep Convection: Evidence for the Importance of Free Tropospheric Aerosols

    NASA Technical Reports Server (NTRS)

    Ackerman, A.; Jensen, E.; Stevens, D.; Wang, D.; Heymsfield, A.; Miloshevich, L.; Twohy, C.; Poellot, M.; VanReken, T.; Fridland, Ann

    2003-01-01

    NASA's 2002 CRYSTAL-FACE field experiment focused on the formation and evolution of tropical cirrus cloud systems in southern Florida. Multiple aircraft extensively sampled cumulonimbus dynamical and microphysical properties, as well as characterizing ambient aerosol populations both inside and outside the full depth of the convective column. On July 18, unique measurements were taken when a powerful updraft was traversed directly by aircraft, providing a window into the primary source region of cumulonimbus anvil crystals. Observations of the updraft, entered at approximately 10 km altitude and -34 °C, indicated more than 200 cloud particles per mL at vertical velocities exceeding 20 m/s and the presence of significant condensation nuclei and liquid water within the core. In this work, aerosol and cloud phase observations are integrated by simulating the updraft conditions using a large-eddy resolving model with explicit multiphase microphysics, including treatment of size-resolved aerosol fields, aerosol activation and freezing, and evaporation of cloud particles back to the aerosol phase. Simulations were initialized with observed thermodynamic and aerosol size distribution profiles, and convection was driven by surface fluxes assimilated from the ARPS forecast model. Model results are consistent with the conclusions that most crystals are homogeneously frozen droplets and that entrained free tropospheric aerosols may contribute a significant fraction of the crystals. Thus most anvil crystals appear to be formed aloft in updraft cores, well above cloud base. These conclusions are supported by observations of hydrometeor size distributions made while traversing the core, as well as aerosol and cloud particle size distributions generally observed by aircraft below 4 km and crystal properties generally observed by aircraft above 12 km.

  15. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
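
    For reference, recent versions of SciPy (1.7 and later) expose Yuen's test directly: scipy.stats.ttest_ind with a nonzero trim fraction performs the trimmed-mean test, and equal_var=False gives the unequal-variance form discussed here. A quick sketch on heteroscedastic synthetic data (illustrative numbers only):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Heavy-tailed groups with unequal variances and unequal sizes
        a = rng.standard_t(df=3, size=25)
        b = rng.standard_t(df=3, size=40) * 2.5 + 1.0

        # Yuen's test: 20% trimming, unequal-variance degrees of freedom
        res = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
        print(f"Yuen t = {res.statistic:.2f}, p = {res.pvalue:.4f}")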

  16. Taphonomic bias in pollen and spore record: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, L.H.

    The high dispersibility and ease of pollen and spore transport have led researchers to conclude erroneously that fossil pollen and spore floras are relatively complete and record unbiased representations of the regional vegetation extant at the time of sediment deposition. That such conclusions are unjustified is obvious when the authors remember that palynomorphs are merely organic sedimentary particles and undergo hydraulic sorting not unlike clastic sedimentary particles. Prior to deposition in the fossil record, pollen and spores can be hydraulically sorted by size, shape, and weight, subtly biasing relative frequencies in fossil assemblages. Sorting during transport results in palynofloras whose composition is environmentally dependent. Therefore, depositional environment is an important consideration in making correct inferences about the source vegetation. Sediment particle size of the original rock samples may contain important information on the probability of a taphonomically biased pollen and spore assemblage. In addition, a reasonable test for hydraulic sorting is the distribution of pollen grain sizes and shapes in each assemblage. Any assemblage containing a wide spectrum of grain sizes and shapes has obviously not undergone significant sorting. If unrecognized, taphonomic bias can lead to paleoecologic, paleoclimatic, and even biostratigraphic misinterpretations.

  17. Liposome retention in size exclusion chromatography

    PubMed Central

    Ruysschaert, Tristan; Marque, Audrey; Duteyrat, Jean-Luc; Lesieur, Sylviane; Winterhalter, Mathias; Fournier, Didier

    2005-01-01

    Background Size exclusion chromatography is the method of choice for separating free from liposome-encapsulated molecules. However, if the column is not presaturated with lipids, this type of chromatography causes a significant loss of lipid material. To date, the mechanism of lipid retention is poorly understood. It has been speculated that lipid binds to the column material or that the entire liposome is entrapped inside the void. Results Here we show that intact liposomes and their contents are retained in the exclusion gel. Retention depends on the pore size: the smaller the pores, the higher the retention. Retained liposomes are not tightly fixed to the beads and are slowly released from the gels upon direct or inverted eluent flow, long washing steps or column repacking. Further addition of free liposomes leads to the elution of part of the gel-trapped liposomes, showing that the retention is transitory. Trapping reversibility should be related to a mechanism of partitioning of the liposomes between the stationary phase, a water-swelled polymeric gel, and the mobile aqueous phase. Conclusion Retention of liposomes by size exclusion gels is a dynamic and reversible process, which should be accounted for to control lipid loss and sample contamination during chromatography. PMID:15885140

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area in all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • Increasing temperature enhanced the textural properties of ZIF-8 samples. • Decreasing the MeIM/Zn²⁺ ratio enhanced the textural properties of ZIF-8 samples.

  19. Process R&D for Particle Size Control of Molybdenum Oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Sujat; Dzwiniel, Trevor; Pupek, Krzysztof

    The primary goal of this study was to produce MoO₃ powder with a particle size range of 50 to 200 μm for use in targets for production of the medical isotope 99Mo. Molybdenum metal powder is commercially produced by thermal reduction of oxides in a hydrogen atmosphere. The most common source material is MoO₃, which is derived by the thermal decomposition of ammonium heptamolybdate (AHM). However, the particle size of the currently produced MoO₃ is too small, resulting in Mo powder that is too fine to properly sinter and press into the desired target. In this study, the effects of heating rate, heating temperature, gas type, gas flow rate, and isothermal heating were investigated for the decomposition of AHM. The main conclusions were as follows: a lower heating rate (2-10 °C/min) minimizes breakdown of aggregates; recrystallized samples with millimeter-sized aggregates are resistant to various heat treatments; extended isothermal heating at >600 °C leads to significant sintering; and inert gas and high gas flow rates (up to 2000 ml/min) did not significantly affect particle size distribution or composition. In addition, attempts to recover AHM from an aqueous solution by several methods (spray drying, precipitation, and low-temperature crystallization) failed to achieve the desired particle size range of 50 to 200 μm. Further studies are planned.

  20. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
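
    For orientation, the three-parameter logistic model behind the simulated bank gives the probability of a correct response as a function of ability theta and the item's discrimination a, difficulty b, and pseudo-guessing c. A minimal sketch with illustrative parameter values (logistic metric, without the optional 1.7 scaling constant):

        import math

        def p_correct_3pl(theta, a, b, c):
            """Three-parameter logistic item response function."""
            return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

        # An item of moderate difficulty (b = 0), discrimination a = 1.2, guessing c = 0.2
        for theta in (-2, -1, 0, 1, 2):
            print(theta, round(p_correct_3pl(theta, 1.2, 0.0, 0.2), 3))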

  1. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. Sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
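
    For the simplest special case (a binary exposure, 1:1 controls, no confounders) the required size can be approximated with the classic two-proportion formula. The sketch below is that textbook approximation in Python with assumed inputs, not the R package's interface, which handles multivariate logistic models and scalar interaction effects.

        from math import ceil, sqrt
        from statistics import NormalDist

        def n_cases(p0, odds_ratio, alpha=0.05, power=0.80):
            """Cases needed (with as many controls) for a binary exposure, prevalence p0 in controls."""
            p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # exposure prevalence in cases
            z = NormalDist().inv_cdf
            za, zb = z(1 - alpha / 2), z(power)
            pbar = (p0 + p1) / 2
            num = (za * sqrt(2 * pbar * (1 - pbar)) + zb * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
            return ceil(num / (p1 - p0) ** 2)

        # 30% exposure among controls, target odds ratio 2.0
        print(n_cases(0.30, 2.0))  # -> 141 cases and 141 controls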

  2. Replicability and other features of a high-quality science: Toward a balanced and empirical approach.

    PubMed

    Finkel, Eli J; Eastwick, Paul W; Reis, Harry T

    2017-08-01

    Finkel, Eastwick, and Reis (2015; FER2015) argued that psychological science is better served by responding to apprehensions about replicability rates with contextualized solutions than with one-size-fits-all solutions. Here, we extend FER2015's analysis to suggest that much of the discussion of best research practices since 2011 has focused on a single feature of high-quality science, replicability, with insufficient sensitivity to the implications of recommended practices for other features, like discovery, internal validity, external validity, construct validity, consequentiality, and cumulativeness. Thus, although recommendations for bolstering replicability have been innovative, compelling, and abundant, it is difficult to evaluate their impact on our science as a whole, especially because many research practices that are beneficial for some features of scientific quality are harmful for others. For example, FER2015 argued that bigger samples are generally better, but also noted that very large samples ("those larger than required for effect sizes to stabilize"; p. 291) could have the downside of commandeering resources that would have been better invested in other studies. In their critique of FER2015, LeBel, Campbell, and Loving (2016) concluded, based on simulated data, that ever-larger samples are better for the efficiency of scientific discovery (i.e., that there are no tradeoffs). As demonstrated here, however, this conclusion holds only when the replicator's resources are considered in isolation. If we widen the assumptions to include the original researcher's resources as well, which is necessary if the goal is to consider resource investment for the field as a whole, the conclusion changes radically and strongly supports a tradeoff-based analysis. In general, as psychologists seek to strengthen our science, we must complement our much-needed work on increasing replicability with careful attention to the other features of a high-quality science. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study

    PubMed Central

    Collins, Gary S; Boutron, Isabelle; Yu, Ly-Mee; Cook, Jonathan; Shanyinde, Milensu; Wharton, Rose; Shamseer, Larissa; Altman, Douglas G

    2014-01-01

    Objective To investigate the effectiveness of open peer review as a mechanism to improve the reporting of randomised trials published in biomedical journals. Design Retrospective before and after study. Setting BioMed Central series medical journals. Sample 93 primary reports of randomised trials published in BMC-series medical journals in 2012. Main outcome measures Changes to the reporting of methodological aspects of randomised trials in manuscripts after peer review, based on the CONSORT checklist, corresponding peer reviewer reports, the type of changes requested, and the extent to which authors adhered to these requests. Results Of the 93 trial reports, 38% (n=35) did not describe the method of random sequence generation, 54% (n=50) concealment of allocation sequence, 50% (n=46) whether the study was blinded, 34% (n=32) the sample size calculation, 35% (n=33) specification of primary and secondary outcomes, 55% (n=51) results for the primary outcome, and 90% (n=84) details of the trial protocol. The number of changes between manuscript versions was relatively small; most involved adding new information or altering existing information. Most changes requested by peer reviewers had a positive impact on the reporting of the final manuscript—for example, adding or clarifying randomisation and blinding (n=27), sample size (n=15), primary and secondary outcomes (n=16), results for primary or secondary outcomes (n=14), and toning down conclusions to reflect the results (n=27). Some changes requested by peer reviewers, however, had a negative impact, such as adding additional unplanned analyses (n=15). Conclusion Peer reviewers fail to detect important deficiencies in reporting of the methods and results of randomised trials. The number of these changes requested by peer reviewers was relatively small. Although most had a positive impact, some were inappropriate and could have a negative impact on reporting in the final publication. PMID:24986891

  4. Quantifying the extent to which index event biases influence large genetic association studies.

    PubMed

    Yaghootkar, Hanieh; Bancks, Michael P; Jones, Sam E; McDaid, Aaron; Beaumont, Robin; Donnelly, Louise; Wood, Andrew R; Campbell, Archie; Tyrrell, Jessica; Hocking, Lynne J; Tuke, Marcus A; Ruth, Katherine S; Pearson, Ewan R; Murray, Anna; Freathy, Rachel M; Munroe, Patricia B; Hayward, Caroline; Palmer, Colin; Weedon, Michael N; Pankow, James S; Frayling, Timothy M; Kutalik, Zoltán

    2017-03-01

    As genetic association studies increase in size to 100 000s of individuals, subtle biases may influence conclusions. One possible bias is 'index event bias' (IEB) that appears due to the stratification by, or enrichment for, disease status when testing associations between genetic variants and a disease-associated trait. We aimed to test the extent to which IEB influences some known trait associations in a range of study designs and provide a statistical framework for assessing future associations. Analyzing data from 113 203 non-diabetic UK Biobank participants, we observed three (near TCF7L2, CDKN2AB and CDKAL1) overestimated (body mass index (BMI) decreasing) and one (near MTNR1B) underestimated (BMI increasing) associations among 11 type 2 diabetes risk alleles (at P < 0.05). IEB became even stronger when we tested a type 2 diabetes genetic risk score composed of these 11 variants (-0.010 standard deviations BMI per allele, P = 5 × 10⁻⁴), which was confirmed in four additional independent studies. Similar results emerged when examining the effect of blood pressure increasing alleles on BMI in normotensive UK Biobank samples. Furthermore, we demonstrated that, under realistic scenarios, common disease alleles would become associated at P < 5 × 10⁻⁸ with disease-related traits through IEB alone, if disease prevalence in the sample differs appreciably from the background population prevalence. For example, some hypertension and type 2 diabetes alleles will be associated with BMI in sample sizes of >500 000 if the prevalence of those diseases differs by >10% from the background population. In conclusion, IEB may result in false positive or negative genetic associations in very large studies stratified or strongly enriched for/against disease cases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
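
    The mechanism is a collider effect and can be demonstrated in a few lines: simulate a genotype and a trait that are independent by construction, let both raise disease risk, then test their association within cases only. The sketch below is a toy liability model with invented effect sizes, not the UK Biobank analysis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 200_000

        g = rng.binomial(2, 0.3, n)    # risk allele count, independent of BMI by construction
        bmi = rng.normal(27, 4, n)
        # Disease risk depends on both the genotype and BMI
        logit = -3.0 + 0.4 * g + 0.15 * (bmi - 27)
        disease = rng.random(n) < 1 / (1 + np.exp(-logit))

        r_all, p_all = stats.pearsonr(g, bmi)
        r_case, p_case = stats.pearsonr(g[disease], bmi[disease])
        print(f"all subjects: r = {r_all:+.4f} (p = {p_all:.2g})")
        print(f"cases only:   r = {r_case:+.4f} (p = {p_case:.2g})")
        # Conditioning on disease induces a negative genotype-BMI association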

  5. Effects of a Multidisciplinary Approach to Improve Volume of Diagnostic Material in CT-Guided Lung Biopsies

    PubMed Central

    Ferguson, Philip E.; Sales, Catherine M.; Hodges, Dalton C.; Sales, Elizabeth W.

    2015-01-01

    Background Recent publications have emphasized the importance of a multidisciplinary strategy for maximum conservation and utilization of lung biopsy material for advanced testing, which may determine therapy. This paper quantifies the effect of a multidisciplinary strategy implemented to optimize and increase tissue volume in CT-guided transthoracic needle core lung biopsies. The strategy was three-pronged: (1) once there was confidence diagnostic tissue had been obtained and if safe for the patient, additional biopsy passes were performed to further increase volume of biopsy material, (2) biopsy material was placed in multiple cassettes for processing, and (3) all tissue ribbons were conserved when cutting blocks in the histology laboratory. This study quantifies the effects of strategies #1 and #2. Design This retrospective analysis comparing CT-guided lung biopsies from 2007 and 2012 (before and after multidisciplinary approach implementation) was performed at a single institution. Patient medical records were reviewed and main variables analyzed include biopsy sample size, radiologist, number of blocks submitted, diagnosis, and complications. The biopsy sample size measured was considered to be directly proportional to tissue volume in the block. Results Biopsy sample size increased 2.5 fold with the average total biopsy sample size increasing from 1.0 cm (0.9–1.1 cm) in 2007 to 2.5 cm (2.3–2.8 cm) in 2012 (P<0.0001). The improvement was statistically significant for each individual radiologist. During the same time, the rate of pneumothorax requiring chest tube placement decreased from 15% to 7% (P = 0.065). No other major complications were identified. The proportion of tumor within the biopsy material was similar at 28% (23%–33%) and 35% (30%–40%) for 2007 and 2012, respectively. The number of cases with at least two blocks available for testing increased from 10.7% to 96.4% (P<0.0001). Conclusions The effect of this multidisciplinary strategy to CT-guided lung biopsies was effective in significantly increasing tissue volume and number of blocks available for advanced diagnostic testing. PMID:26479367

  6. mHealth Series: mHealth project in Zhao County, rural China – Description of objectives, field site and methods

    PubMed Central

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Wu, Qiong; Chen, Li; Majeed, Azeem; Rudan, Igor; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background We set up a collaboration between researchers in China and the UK that aimed to explore the use of mHealth in China. This is the first paper in a series of papers on a large mHealth project that forms part of this collaboration. This paper presents the aims and objectives of the mHealth project, our field site, and the detailed methods of two studies. Field site The field site for this mHealth project was Zhao County, which lies 280 km south of Beijing in Hebei Province, China. Methods We described the methodology of two studies: (i) a mixed methods study exploring factors influencing sample size calculations for mHealth–based health surveys and (ii) a cross–over study determining validity of an mHealth text messaging data collection tool. The first study used mixed methods, both quantitative and qualitative, including: (i) two surveys with caregivers of young children, (ii) interviews with caregivers, village doctors and participants of the cross–over study, and (iii) researchers’ views. We combined data from caregivers, village doctors and researchers to provide an in–depth understanding of factors influencing sample size calculations for mHealth–based health surveys. The second study used a randomised cross–over design to compare the traditional face–to–face survey method to the new text messaging survey method. We assessed data equivalence (intrarater agreement), the amount of information in responses, reasons for giving different responses, the response rate, characteristics of non–responders, and the error rate. Conclusions This paper described the objectives, field site and methods of a large mHealth project that forms part of a collaboration between researchers in China and the UK. The mixed methods study evaluating factors that influence sample size calculations could help future studies with estimating reliable sample sizes. The cross–over study comparing face–to–face and text message survey data collection could help future studies with developing their mHealth tools. PMID:24363919

  7. Plasminogen Activator Inhibitor-1 4G/5G Gene Polymorphism and Coronary Artery Disease in the Chinese Han Population: A Meta-Analysis

    PubMed Central

    Li, Yan-yan

    2012-01-01

    Background The polymorphism of the plasminogen activator inhibitor-1 (PAI-1) 4G/5G gene has been indicated to be correlated with coronary artery disease (CAD) susceptibility, but study results are still debatable. Objective and Methods The present meta-analysis was performed to investigate the association between PAI-1 4G/5G gene polymorphism and CAD in the Chinese Han population. A total of 879 CAD patients and 628 controls from eight separate studies were involved. The pooled odds ratio (OR) for the distribution of the 4G allele frequency of the PAI-1 4G/5G gene and its corresponding 95% confidence interval (CI) was assessed by the random-effects model. Results The distribution of the 4G allele frequency was 0.61 for the CAD group and 0.51 for the control group. The association between PAI-1 4G/5G gene polymorphism and CAD in the Chinese Han population was significant under an allelic genetic model (OR = 1.70, 95% CI = 1.18 to 2.44, P = 0.004). The heterogeneity test was also significant (P<0.0001). Meta-regression was performed to explore the heterogeneity source. Among the confounding factors, the heterogeneity could be explained by the publication year (P = 0.017), study region (P = 0.014), control group sample size (P = 0.011), total sample size (P = 0.011), and ratio of the case to the control group sample size (RR) (P = 0.019). In a stratified analysis by the total sample size, significantly increased risk was only detected in subgroup 2 under an allelic genetic model (OR = 1.93, 95% CI = 1.09 to 3.35, P = 0.02). Conclusions In the Chinese Han population, PAI-1 4G/5G gene polymorphism appears to be associated with increased CAD risk. Carriers of the 4G allele of the PAI-1 4G/5G gene may be predisposed to CAD. PMID:22496752
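
    For readers who want to see the pooling mechanics, here is a minimal DerSimonian-Laird random-effects sketch over allelic 2×2 tables. The four studies' counts are hypothetical placeholders, not the eight studies meta-analysed above:

      import numpy as np

      # (4G count in cases, total case alleles, 4G count in controls,
      #  total control alleles) -- invented counts for illustration only.
      studies = [
          (120, 200, 90, 180), (150, 260, 110, 220),
          (80, 140, 70, 160), (200, 340, 150, 300),
      ]

      log_or, var = [], []
      for a, n1, c, n2 in studies:
          b, d = n1 - a, n2 - c
          log_or.append(np.log((a * d) / (b * c)))
          var.append(1 / a + 1 / b + 1 / c + 1 / d)  # log-OR variance
      log_or, var = np.array(log_or), np.array(var)

      # DerSimonian-Laird estimate of between-study variance tau^2.
      w = 1 / var
      fixed = np.sum(w * log_or) / w.sum()
      q = np.sum(w * (log_or - fixed) ** 2)
      tau2 = max(0.0, (q - (len(studies) - 1))
                 / (w.sum() - np.sum(w ** 2) / w.sum()))

      # Random-effects pooled OR with 95% CI.
      w_re = 1 / (var + tau2)
      pooled = np.sum(w_re * log_or) / w_re.sum()
      se = np.sqrt(1 / w_re.sum())
      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"(95% CI {np.exp(pooled - 1.96 * se):.2f}"
            f"-{np.exp(pooled + 1.96 * se):.2f}), tau^2 = {tau2:.3f}")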

  8. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd-level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event, half the Welfare Quality sample size is drawn, and then, depending on the outcome, sampling either stops or continues and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed-size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed-size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was not consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme, for which a sampling protocol has also been developed.
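
    The 'basic' two-stage idea can be sketched in a few lines. Here the herd size, 20% pass/fail threshold and ±5 percentage-point stopping margin are illustrative assumptions, not the published protocol, which ties sample sizes to the Welfare Quality scheme:

      import numpy as np

      def basic_sequential(herd, wq_n, threshold=0.20, margin=0.05, rng=None):
          # Score half the Welfare Quality sample; stop if the prevalence
          # estimate is clearly above/below the threshold, else score the
          # second half too and decide on the combined estimate.
          half = wq_n // 2
          idx = rng.permutation(len(herd))
          p1 = herd[idx[:half]].mean()
          if abs(p1 - threshold) > margin:        # clear-cut: stop early
              return p1 > threshold, half
          p = herd[idx[:2 * half]].mean()         # continue sampling
          return p > threshold, 2 * half

      rng = np.random.default_rng(1)
      herd = rng.random(200) < 0.25       # hypothetical herd, ~25% lameness
      results = [basic_sequential(herd, wq_n=60, rng=rng)
                 for _ in range(10_000)]
      fails, n_used = zip(*results)
      print(f"classified 'bad' in {np.mean(fails):.1%} of runs; "
            f"mean sample size {np.mean(n_used):.1f} vs fixed 60")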

  9. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊSs. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
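
    The dependence of the post hoc sample size on the uncertainty in ÊS follows from the standard formula n = ((z_{1-α/2} + z_{1-β}) / ES)². The sketch below uses the normal approximation (slightly smaller n than the paper's t-based calculation), with hypothetical ES values chosen only to show how a wide ES confidence interval maps to a wide sample size interval like the reported 22 (10-245):

      from scipy.stats import norm

      def n_for_es(es, alpha=0.05, power=0.80):
          # n = ((z_{1-alpha/2} + z_{1-beta}) / ES)^2, normal approximation
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return (z / es) ** 2

      # Hypothetical point estimate and CI bounds, not the study's values.
      for label, es in [("ES point", 0.62), ("ES lower", 0.18),
                        ("ES upper", 1.05)]:
          print(f"{label}: ES = {es:.2f} -> n ≈ {n_for_es(es):.0f}")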

  10. Designing clinical trials for amblyopia

    PubMed Central

    Holmes, Jonathan M.

    2015-01-01

    Randomized clinical trial (RCT) study design leads to one of the highest levels of evidence, and is a preferred study design over cohort studies, because randomization reduces bias and maximizes the chance that even unknown confounding factors will be balanced between treatment groups. Recent randomized clinical trials and observational studies in amblyopia can be taken together to formulate an evidence-based approach to amblyopia treatment, which is presented in this review. When designing future clinical studies of amblyopia treatment, issues such as regression to the mean, sample size and trial duration must be considered, since each may impact study results and conclusions. PMID:25752747

  11. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that choosing a plot size that captures tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
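
    The Monte Carlo step, estimating potential error as a function of sample size by repeatedly subsampling the full census, can be sketched as follows. The log-normal sap flux values and the error criterion are illustrative assumptions, not the study's data set:

      import numpy as np

      rng = np.random.default_rng(5)
      # Synthetic stand: 58 trees with log-normal sap flux densities.
      fd = rng.lognormal(mean=3.0, sigma=0.4, size=58)
      true_js = fd.mean()                      # stand mean from full census

      for n in (3, 5, 10, 15, 20, 30):
          draws = np.array([rng.choice(fd, n, replace=False).mean()
                            for _ in range(5000)])
          rel_err = np.percentile(np.abs(draws / true_js - 1), 95)
          print(f"n = {n:2d}: 95th-percentile relative error "
                f"in JS ≈ {rel_err:.1%}")

    The 'optimal' sample size in this framing is the point where the error curve plateaus, so that additional trees no longer reduce potential error.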

  12. SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Penagaricano, J; Narayanasamy, G

    2016-06-15

    Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distribution to search for new models predicting radiation tissue damage and further leading to a new optimization paradigm for radiotherapy planning. Methods: The aggregation of higher dose in the organ at risk (OAR) than a preset threshold was chosen as the cluster whose connectivity dictates the cluster structure. Upon the selection of the dose threshold, the fractional density defined as the fraction of voxels in the organ eligible to be part of the cluster was determined according to the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with the sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster searching algorithm was developed to analyze the cluster with various connectivity choices like 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: Intensive Monte Carlo study demonstrates the inverse relationship between the MSLC and the cluster connectivity as anticipated, and the cluster size does not change with fractional density linearly regardless of the connectivity types. A transition from an initially slow increase to exponential growth of the MSLC was observed as fractional density increased. The cluster sizes were found to vary within a large range and are relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of the tissue damage (percolation cluster) and that the clinical outcome of the same DVH might be potentially different.
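
    The cluster-search step can be sketched as below. This uses an explicit stack rather than recursion (equivalent to the recursive algorithm described, but safe from Python's recursion limit); the grid size, densities and replicate counts are illustrative:

      import numpy as np
      from itertools import product

      def largest_cluster(grid, conn=1):
          # conn = 1, 2, 3: neighbours sharing a face, edge or corner
          # (6-, 18-, 26-connectivity in 3D).
          offsets = [o for o in product((-1, 0, 1), repeat=3)
                     if 0 < sum(abs(v) for v in o) <= conn]
          seen = np.zeros_like(grid, dtype=bool)
          best = 0
          for start in zip(*np.nonzero(grid)):
              if seen[start]:
                  continue
              stack, size = [start], 0
              seen[start] = True
              while stack:                        # stack-based DFS
                  x, y, z = stack.pop()
                  size += 1
                  for dx, dy, dz in offsets:
                      p = (x + dx, y + dy, z + dz)
                      if all(0 <= p[i] < grid.shape[i] for i in range(3)) \
                              and grid[p] and not seen[p]:
                          seen[p] = True
                          stack.append(p)
              best = max(best, size)
          return best

      # Monte Carlo: mean size of the largest cluster vs fractional density.
      rng = np.random.default_rng(2)
      shape = (20, 20, 20)
      for density in (0.1, 0.2, 0.3, 0.4):
          sizes = [largest_cluster(rng.random(shape) < density)
                   for _ in range(20)]
          print(f"density {density:.1f}: MSLC ≈ {np.mean(sizes):.0f}")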

  13. Public Acceptability in the UK and USA of Nudging to Reduce Obesity: The Example of Reducing Sugar-Sweetened Beverages Consumption

    PubMed Central

    Petrescu, Dragos C.; Hollands, Gareth J.; Couturier, Dominique-Laurent; Ng, Yin-Lam; Marteau, Theresa M.

    2016-01-01

    Background “Nudging”—modifying environments to change people’s behavior, often without their conscious awareness—can improve health, but public acceptability of nudging is largely unknown. Methods We compared acceptability, in the United Kingdom (UK) and the United States of America (USA), of government interventions to reduce consumption of sugar-sweetened beverages. Three nudge interventions were assessed: i. reducing portion Size, ii. changing the Shape of the drink containers, iii. changing their shelf Location; alongside two traditional interventions: iv. Taxation and v. Education. We also tested the hypothesis that describing interventions as working through non-conscious processes decreases their acceptability. Predictors of acceptability, including perceived intervention effectiveness, were also assessed. Participants (n = 1093 UK and n = 1082 USA) received a description of each of the five interventions which varied, by randomisation, in how the interventions were said to affect behaviour: (a) via conscious processes; (b) via non-conscious processes; or (c) no process stated. Acceptability was derived from responses to three items. Results Levels of acceptability for four of the five interventions did not differ significantly between the UK and US samples; reducing portion size was less accepted by the US sample. Within each country, Education was rated as most acceptable and Taxation the least, with the three nudge-type interventions rated between these. There was no evidence to support the study hypothesis: i.e. stating that interventions worked via non-conscious processes did not decrease their acceptability in either the UK or US samples. Perceived effectiveness was the strongest predictor of acceptability for all interventions across the two samples. Conclusion Nudge interventions to reduce consumption of sugar-sweetened beverages seem similarly acceptable in the UK and USA, being more acceptable than taxation, but less acceptable than education. Contrary to prediction, we found no evidence that highlighting the non-conscious processes by which nudge interventions may work decreases their acceptability. However, highlighting the effectiveness of all interventions has the potential to increase their acceptability. PMID:27276222

  14. Observed intra-cluster correlation coefficients in a cluster survey sample of patient encounters in general practice in Australia

    PubMed Central

    Knox, Stephanie A; Chondros, Patty

    2004-01-01

    Background Cluster sample study designs are cost-effective; however, cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs), the estimated intra-cluster correlation coefficients were found to be consistent across samples. Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
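
    In practice these coefficients enter sample size calculations through the design effect DEFF = 1 + (m − 1) × ICC, where m is the cluster size. A short sketch using the two ICC extremes reported above (the base simple-random-sample size of 400 is an arbitrary illustration):

      import math

      def cluster_sample_size(n_srs, m, icc):
          # Inflate a simple-random-sample size by the design effect.
          deff = 1 + (m - 1) * icc
          return math.ceil(n_srs * deff), deff

      # 100 encounters per GP, ICCs as reported for two BEACH outcomes.
      for outcome, icc in [("patient sex", 0.055),
                           ("language spoken at home", 0.451)]:
          n, deff = cluster_sample_size(n_srs=400, m=100, icc=icc)
          print(f"{outcome}: DEFF = {deff:.1f}, "
                f"need {n} encounters instead of 400")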

  15. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
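
    The reverse catalytic model underlying such simulations is the ODE dp/da = λ(1 − p) − ρp, where p is seroprevalence, λ the SCR and ρ the seroreversion rate. The sketch below solves it with a rate drop at a change point τ years before the survey; all parameter values are illustrative assumptions:

      import numpy as np

      def seroprevalence(age, lam1, lam2, rho, tau):
          # Forward-Euler solution of dp/dt = lam*(1-p) - rho*p over one
          # life span; the SCR drops from lam1 to lam2 at the calendar
          # change point tau years before the survey.
          dt, p = 0.01, 0.0
          for t in np.arange(0.0, age, dt):
              lam = lam1 if t < age - tau else lam2
              p += (lam * (1 - p) - rho * p) * dt
          return p

      # Illustrative: SCR halves 10 years before sampling.
      for a in (2, 5, 10, 20, 40):
          print(f"age {a:2d}: seroprevalence ≈ "
                f"{seroprevalence(a, 0.10, 0.05, 0.02, 10):.2f}")

    A power calculation then draws binomial serostatus data from curves like this and compares the fit of stable-SCR versus reduced-SCR models across candidate sample sizes.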

  16. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
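
    The Type II error mechanism can be illustrated with a synthetic dataset (the slope, noise level and population size below are invented, not the Alligator measurements): subsampling a population with true positive allometry and testing H0: slope = 1 (isometry) shows detection rates collapsing at small n:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      # Synthetic 'population': log width scales with log length with
      # slope 1.15 (positive allometry), plus measurement noise.
      n_pop = 500
      log_len = rng.uniform(1.0, 3.0, n_pop)
      log_wid = 0.2 + 1.15 * log_len + rng.normal(0, 0.25, n_pop)

      for n in (5, 10, 20, 50):
          rejections = 0
          for _ in range(2000):
              idx = rng.choice(n_pop, n, replace=False)
              res = stats.linregress(log_len[idx], log_wid[idx])
              t = (res.slope - 1) / res.stderr      # t-test of slope = 1
              if abs(t) > stats.t.ppf(0.975, n - 2):
                  rejections += 1
          print(f"n = {n:2d}: allometry detected in "
                f"{rejections / 2000:.0%} of subsamples")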

  17. Determining Cutoff Point of Ensemble Trees Based on Sample Size in Predicting Clinical Dose with DNA Microarray Data.

    PubMed

    Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha

    2016-01-01

    Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided using a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified common genes found in our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another result of the study was to identify the sample size of n = 25 as a cutoff point for RT bagging to outperform a single RT.

  18. Interactive Video Gaming compared to Health Education in Older Adults with MCI: A Feasibility Study

    PubMed Central

    Hughes, Tiffany F.; Flatt, Jason D.; Fu, Bo; Butters, Meryl A.; Chang, Chung-Chou H.; Ganguli, Mary

    2014-01-01

    Objective We evaluated the feasibility of a trial of Wii interactive video gaming, and its potential efficacy at improving cognitive functioning compared to health education, in a community sample of older adults with neuropsychologically defined mild cognitive impairment (MCI). Methods Twenty older adults were equally randomized to either group-based interactive video gaming or health education for 90 minutes each week for 24 weeks. Although the primary outcomes were related to study feasibility, we also explored the effect of the intervention on neuropsychological performance and other secondary outcomes. Results All 20 participants completed the intervention, and 18 attended at least 80% of the sessions. The majority (80%) of participants were “very much” satisfied with the intervention. Bowling was enjoyed by most participants, and was also rated the highest among the games for mental, social and physical stimulation. We observed medium effect sizes for cognitive and physical functioning in favor of the interactive video gaming condition, but these effects were not statistically significant in this small sample. Conclusion Interactive video gaming is feasible for older adults with MCI, and the medium effect sizes in favor of the Wii group warrant a larger efficacy trial. PMID:24452845

  19. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  20. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the ways in which the data for these calculations may be derived are considered.
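
    One standard formulation in this literature works on the incremental net monetary benefit INB = λ·ΔE − ΔC, where λ is the willingness-to-pay threshold, ΔE the effectiveness difference and ΔC the cost difference, and sizes the trial to detect INB > 0. A minimal sketch under a normal approximation (all numbers below are invented for illustration):

      from scipy.stats import norm

      def n_per_arm_inb(delta_e, delta_c, lam, sd_nb, alpha=0.05, power=0.80):
          # Per-arm sample size to detect a positive incremental net
          # monetary benefit; sd_nb is the SD of individual-level net
          # benefit, assumed equal in both arms.
          inb = lam * delta_e - delta_c
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 2 * (z * sd_nb / inb) ** 2

      # Illustrative: 0.1 QALY gain, $2,000 extra cost, $50,000/QALY.
      print(f"n per arm ≈ {n_per_arm_inb(0.1, 2000, 50_000, sd_nb=10_000):.0f}")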

  1. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, covering both quantitative and qualitative data; it presents the estimation formulas and their realization, both directly and through the POWER procedure of SAS software. In addition, the article presents worked examples, which will guide researchers in implementing the repetition principle during the research design phase.
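
    The same one-factor, two-level calculations are available outside SAS as well; the sketch below uses Python's statsmodels for the quantitative (two-sample t-test) and qualitative (two-proportion) cases. The effect size and proportions are placeholder values:

      from statsmodels.stats.power import tt_ind_solve_power, NormalIndPower
      from statsmodels.stats.proportion import proportion_effectsize

      # Quantitative data: n per group for a standardized difference
      # (Cohen's d) of 0.5, 80% power, two-sided alpha = 0.05.
      n_quant = tt_ind_solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
      print(f"t-test design: n per group ≈ {n_quant:.1f}")   # ≈ 64

      # Qualitative data: detecting 60% vs 40% response rates.
      es = proportion_effectsize(0.6, 0.4)     # Cohen's h
      n_qual = NormalIndPower().solve_power(effect_size=es,
                                            alpha=0.05, power=0.8)
      print(f"two-proportion design: n per group ≈ {n_qual:.1f}")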

  2. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. From a biometrical point of view, these projects should therefore aim for an optimal sample size. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  3. Food intake and growth of Sarsia tubulosa (SARS, 1835), with quantitative estimates of predation on copepod populations

    NASA Astrophysics Data System (ADS)

    Daan, Rogier

    In laboratory tests, food intake by the hydromedusa Sarsia tubulosa, which feeds on copepods, was quantified. Estimates of maximum predation are presented for 10 size classes of Sarsia. Growth rates, too, were determined in the laboratory, at 12°C under ad libitum food conditions. Mean gross food conversion for all size classes averaged 12%. From the results of a frequent sampling programme, carried out in the Texelstroom (a tidal inlet of the Dutch Wadden Sea) in 1983, growth rates of Sarsia in the field equalled maximum growth under experimental conditions, which suggests that Sarsia in situ can feed at an optimum level. Two estimates of predation pressure in the field matched very closely and led to the conclusion that the impact of Sarsia predation on copepod standing stocks in the Dutch coastal area, including the Wadden Sea, is generally negligible.

  4. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  5. Age as a Risk Factor for Burnout Syndrome in Nursing Professionals: A Meta-Analytic Study.

    PubMed

    Gómez-Urquiza, José L; Vargas, Cristina; De la Fuente, Emilia I; Fernández-Castillo, Rafael; Cañadas-De la Fuente, Guillermo A

    2017-04-01

    Although past research has highlighted the possibility of a direct relationship between the age of nursing professionals and burnout syndrome, results have been far from conclusive. The aim of this study was to conduct a wider analysis of the influence of age on the three dimensions of burnout syndrome (emotional exhaustion, depersonalization, and personal accomplishment) in nurses. We performed a meta-analysis of 51 publications extracted from health sciences and psychology databases that fulfilled the inclusion criteria. There were 47 reports of information on emotional exhaustion in 50 samples, 39 reports on depersonalization for 42 samples, and 31 reports on personal accomplishment in 34 samples. The mean effect sizes indicated that younger age was a significant factor in the emotional exhaustion and depersonalization of nurses, although it was somewhat less influential in the dimension of personal accomplishment. Because of heterogeneity in the effect sizes, moderating variables that might explain the association between age and burnout were also analyzed. Gender, marital status, and study characteristics moderated the relationship between age and burnout and may be crucial for the identification of high-risk groups. More research is needed on other variables for which there were only a small number of studies. Identification of burnout risk factors will facilitate establishment of burnout prevention programs for nurses. © 2016 Wiley Periodicals, Inc.

  6. Quantification of Organic Porosity and Water Accessibility in Marcellus Shale Using Neutron Scattering

    DOE PAGES

    Gu, Xin; Mildner, David F. R.; Cole, David R.; ...

    2016-04-28

    Pores within organic matter (OM) are a significant contributor to the total pore system in gas shales. These pores contribute most of the storage capacity in gas shales. Here we present a novel approach to characterize the OM pore structure (including the porosity, specific surface area, pore size distribution, and water accessibility) in Marcellus shale. By using ultrasmall and small-angle neutron scattering, and by exploiting the contrast matching of the shale matrix with suitable mixtures of deuterated and protonated water, both total and water-accessible porosity were measured on centimeter-sized samples from two boreholes from the nanometer to micrometer scale with good statistical coverage. Samples were also measured after combustion at 450 °C. Analysis of scattering data from these procedures allowed quantification of OM porosity and water accessibility. OM hosts 24–47% of the total porosity for both organic-rich and -poor samples. This porosity occupies as much as 29% of the OM volume. In contrast to the current paradigm in the literature that OM porosity is organophilic and therefore not likely to contain water, our results demonstrate that OM pores with widths >20 nm exhibit the characteristics of water accessibility. In conclusion, our approach reveals the complex structure and wetting behavior of the OM porosity at scales that are hard to interrogate using other techniques.

  8. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  9. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
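
    The headline numbers follow directly from the normal-approximation sample size formula with a Bonferroni-corrected significance level (an assumption about the setup, but it reproduces both figures); a quick check:

      from scipy.stats import norm

      def relative_n(m_tests, alpha=0.05, power=0.80):
          # Sample size under alpha/m relative to a single test, for a
          # fixed effect size: n scales with (z_{1-a/2m} + z_{1-b})^2.
          z_beta = norm.ppf(power)
          z1 = norm.isf(alpha / 2)                  # one test
          zm = norm.isf(alpha / (2 * m_tests))      # m tests
          return ((zm + z_beta) / (z1 + z_beta)) ** 2

      print(f"10 tests vs 1:      x{relative_n(10):.2f}")   # ~1.70 (+70%)
      print(f"1e7 vs 1e6 tests:   "
            f"x{relative_n(1e7) / relative_n(1e6):.2f}")    # ~1.13 (+13%)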

  10. Increasing Complexity of Clinical Research in Gastroenterology: Implications for Training Clinician-Scientists

    PubMed Central

    Scott, Frank I.; McConnell, Ryan A.; Lewis, Matthew E.; Lewis, James D.

    2014-01-01

    Background Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published research in gastroenterology from 1980 to 2010. Methods Three journals (Gastroenterology, Gut, and American Journal of Gastroenterology) were selected for evaluation given their continuous publication during the study period. Twenty original clinical articles were randomly selected from each journal from 1980, 1990, 2000, and 2010. Each article was assessed for topic studied, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, and reporting of statistical methods such as sample size calculations, p-values, confidence intervals, and advanced techniques such as bioinformatics or multivariate modeling. Research support with external funding was also recorded. Results A total of 240 articles were included in the study. From 1980 to 2010, there was a significant increase in analytic studies (p<0.001), clinical outcomes (p=0.003), median number of authors per article (p<0.001), multicenter collaboration (p<0.001), sample size (p<0.001), and external funding (p<0.001). There was significantly increased reporting of p-values (p=0.01), confidence intervals (p<0.001), and power calculations (p<0.001). There was also increased utilization of large multicenter databases (p=0.001), multivariate analyses (p<0.001), and bioinformatics techniques (p=0.001). Conclusions There has been a dramatic increase in complexity in clinical research related to gastroenterology and hepatology over the last three decades. This increase highlights the need for advanced training of clinical investigators to conduct future research. PMID:22475957

  11. Monitoring disease progression with plasma creatinine in amyotrophic lateral sclerosis clinical trials

    PubMed Central

    van Eijk, Ruben P A; Eijkemans, Marinus J C; Ferguson, Toby A; Nikolakopoulos, Stavros; Veldink, Jan H; van den Berg, Leonard H

    2018-01-01

    Objectives Plasma creatinine is a predictor of survival in amyotrophic lateral sclerosis (ALS). It remains, however, to be established whether it can monitor disease progression and serve as a surrogate endpoint in clinical trials. Methods We used clinical trial data from three cohorts of clinical trial participants in the LITRA, EMPOWER and PROACT studies. Longitudinal associations of plasma creatinine with functional decline, muscle strength and survival were assessed. Results were translated to trial design in terms of sample size and power. Results A total of 13 564 measurements were obtained for 1241 patients. The variability between patients in rate of decline was lower in plasma creatinine than in the ALS functional rating scale–Revised (ALSFRS-R; p<0.001). The average rate of decline was faster in the ALSFRS-R, with less between-patient variability at baseline (p<0.001). Plasma creatinine had strong longitudinal correlations with the ALSFRS-R (0.43 (0.39–0.46), p<0.001), muscle strength (0.55 (0.51–0.58), p<0.001) and overall mortality (HR 0.88 (0.86–0.91), p<0.001). Using plasma creatinine as an outcome could reduce the sample size in trials by 21.5% at 18 months. For trials up to 10 months, the ALSFRS-R required a lower sample size. Conclusions Plasma creatinine is an inexpensive and easily accessible biomarker that exhibits less variability between patients with ALS over time and is predictive for the patient’s functional status, muscle strength and mortality risk. Plasma creatinine may, therefore, increase the power to detect treatment effects and could be incorporated in future ALS clinical trials as a potential surrogate outcome. PMID:29084868

  12. Social protection systems in vulnerable families: their importance for the public health

    PubMed Central

    Arcos, Estela; Sanchez, Ximena; Toffoletto, Maria Cecilia; Baeza, Margarita; Gazmuri, Patricia; Muñoz, Luz Angélica; Vollrath, Antonia

    2014-01-01

    OBJECTIVE To analyze the effectiveness of the Chilean System of Childhood Welfare in transferring benefits to socially vulnerable families. METHODS A cross-sectional study with a sample of 132 families from the Metropolitan Region, Chile, stratified according to degree of social vulnerability, between September 2011 and January 2012. Semi-structured interviews were conducted with mothers of the studied families in public health facilities or their households. The variables studied were family structure, psychosocial risk in the family context and integrated benefits from the welfare system in families that fulfilled the necessary requirements for transfer of benefits. Descriptive statistics to measure location and dispersion were calculated. A binary logistic regression, which accounts for the sample size of the study, was carried out. RESULTS The groups were homogeneous regarding family size, the presence of the biological father in the household, the number of relatives living in the same dwelling, income generation capacity and the rate of dependency and psychosocial risk (p ≥ 0.05). The transfer of benefits was low in all three groups of the sample (≤ 23.0%). The benefit with the best coverage in the system was the Single Family Subsidy, whose transfer was associated with the size of the family, the presence of relatives in the dwelling, the absence of the father in the household, a high rate of dependency and a high income generation capacity (p ≤ 0.10). CONCLUSIONS The effectiveness of benefit transfer was poor, especially in families that were extremely socially vulnerable. Further explanatory studies of benefit transfers to the vulnerable population, of differing intensity and duration, are required in order to reduce health disparities and inequalities. PMID:25119935

  13. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb

    2018-06-01

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.

  14. Conjunctival Expansion Using a Subtenon's Silicone Implant in New Zealand White Rabbits

    PubMed Central

    Yoon, Ie-na; Lee, Dong-hoon

    2007-01-01

    Purpose In the field of ophthalmology, the conjunctival autograft is a useful therapeutic material in many cases, but the small size of the autograft is a disadvantage. Therefore, we evaluated the feasibility of taking an expanded sample of conjunctival tissue using a subtenon's silicone implant. Materials and Methods We included a total of nine rabbits; eight rabbits were operative cases, and one was a control. A portion of conjunctival tissue from the control rabbit, which did not undergo surgery, was dissected and examined to determine whether it was histologically different from the experimental group. The surgical procedure was performed on eight rabbits via a subtenon's insertion of a silicone sponge in the left superior-temporal portion; after surgery, we dropped antibiotics into the eyes. We sacrificed a pair of rabbits every three days (on days 3, 6, 9, and 12) after surgery, removed the expanded conjunctival tissues with the silicone sponge implants, and measured their sizes. Results The mean size of the expanded conjunctival tissues was 194.4 mm2. On the third day, we were able to harvest a 223.56 mm2 section of conjunctival tissue, which was the most expanded sample of tissue in the study. On the twelfth day, we removed a 160.38 mm2 section of conjunctival tissue, which was the least expanded sample of tissue. Statistically, there were no significant differences in the mean dimensions of the expanded conjunctival tissues for each time period. Microscopic examinations showed no histological differences between the expanded conjunctival tissues and the normal conjunctival tissues. Conclusion The results reveal that this procedure is a useful method to expand the conjunctiva for grafting and transplantation. PMID:18159586

  15. Variable size computer-aided detection prompts and mammography film reader decisions

    PubMed Central

    Gilbert, Fiona J; Astley, Susan M; Boggis, Caroline RM; McGee, Magnus A; Griffiths, Pamela M; Duffy, Stephen W; Agbaje, Olorunsola F; Gillan, Maureen GC; Wilson, Mary; Jain, Anil K; Barr, Nicola; Beetles, Ursula M; Griffiths, Miriam A; Johnson, Jill; Roberts, Rita M; Deans, Heather E; Duncan, Karen A; Iyengar, Geeta

    2008-01-01

    Introduction The purpose of the present study was to investigate the effect of computer-aided detection (CAD) prompts on reader behaviour in a large sample of breast screening mammograms by analysing the relationship of the presence and size of prompts to the recall decision. Methods Local research ethics committee approval was obtained; informed consent was not required. Mammograms were obtained from women attending routine mammography at two breast screening centres in 1996. Films, previously double read, were re-read by a different reader using CAD. The study material included 315 cancer cases comprising all screen-detected cancer cases, all subsequent interval cancers and 861 normal cases randomly selected from 10,267 cases. Ground truth data were used to assess the efficacy of CAD prompting. Associations between prompt attributes and tumour features or reader recall decisions were assessed by chi-squared tests. Results There was a highly significant relationship between prompting and a decision to recall for cancer cases and for a random sample of normal cases (P < 0.001). Sixty-four per cent of all cases contained at least one CAD prompt. In cancer cases, larger prompts were more likely to be recalled (P = 0.02) for masses but there was no such association for calcifications (P = 0.9). In a random sample of 861 normal cases, larger prompts were more likely to be recalled (P = 0.02) for both mass and calcification prompts. Significant associations were observed between prompting and breast density (P = 0.009) for cancer cases but not for normal cases (P = 0.05). Conclusions For both normal cases and cancer cases, prompted mammograms were more likely to be recalled and the prompt size was also associated with a recall decision. PMID:18724867

  16. Quality of reporting of pilot and feasibility cluster randomised trials: a systematic review

    PubMed Central

    Chan, Claire L; Leyrat, Clémence; Eldridge, Sandra M

    2017-01-01

    Objectives To systematically review the quality of reporting of pilot and feasibility cluster randomised trials (CRTs). In particular, to assess (1) the number of pilot CRTs conducted between 1 January 2011 and 31 December 2014, (2) whether objectives and methods are appropriate and (3) reporting quality. Methods We searched PubMed (2011–2014) for CRTs with ‘pilot’ or ‘feasibility’ in the title or abstract that were assessing some element of feasibility and showing evidence that the study was in preparation for a main effectiveness/efficacy trial. Quality assessment criteria were based on the Consolidated Standards of Reporting Trials (CONSORT) extensions for pilot trials and CRTs. Results Eighteen pilot CRTs were identified. Forty-four per cent did not have feasibility as their primary objective, and many (50%) performed formal hypothesis testing for effectiveness/efficacy despite being underpowered. Most (83%) included ‘pilot’ or ‘feasibility’ in the title, and discussed implications for progression from the pilot to the future definitive trial (89%), but fewer reported reasons for the randomised pilot trial (39%), sample size rationale (44%) or progression criteria (17%). Most defined the cluster (100%), and number of clusters randomised (94%), but few reported how the cluster design affected sample size (17%), whether consent was sought from clusters (11%), or who enrolled clusters (17%). Conclusions That only 18 pilot CRTs were identified necessitates increased awareness of the importance of conducting and publishing pilot CRTs and improved reporting. Pilot CRTs should primarily be assessing feasibility, avoiding formal hypothesis testing for effectiveness/efficacy and reporting reasons for the pilot, sample size rationale and progression criteria, as well as enrolment of clusters, and how the cluster design affects design aspects. We recommend adherence to the CONSORT extensions for pilot trials and CRTs. PMID:29122791

  17. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
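
    A toy simulation of the single-spike case makes the conical behaviour visible. The sizes and spike values below are arbitrary choices for illustration: with the spike growing in proportion to the dimension, the ratio d/(n × spike) stays constant and the angle stabilises; with a fixed spike, the angle drifts towards 90°:

      import numpy as np

      rng = np.random.default_rng(6)

      def eigvec_angle(n, d, spike):
          # Angle between leading sample and population eigenvectors for a
          # covariance with one eigenvalue = spike and the rest = 1.
          u = np.zeros(d); u[0] = 1.0            # population eigenvector
          x = rng.normal(size=(n, d))
          x[:, 0] *= np.sqrt(spike)              # inflate variance along u
          w, v = np.linalg.eigh(x.T @ x / n)
          cos = abs(v[:, -1] @ u)
          return np.degrees(np.arccos(min(cos, 1.0)))

      # HDLSS regime: fixed n = 20, growing dimension d.
      for d in (50, 200, 1000):
          print(f"d = {d:4d}: {eigvec_angle(20, d, spike=d):5.1f} deg "
                f"(spike ~ d), {eigvec_angle(20, d, spike=10):5.1f} deg "
                f"(fixed spike)")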

  18. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
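
    A stripped-down version of such a simulation (plot size, core area and the patch-based clumping model are illustrative choices, not the GIS framework used in the study) shows the drop in detection probability under clumping:

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate(density, core_area_cm2, n_cores, clumped, reps=500):
          # Drop square 'cores' on a 1 m^2 plot of point items; return the
          # mean absolute relative error of the density estimate and the
          # probability that a core captures at least one item.
          side = np.sqrt(core_area_cm2 / 10_000)     # core side length, m
          est, det = [], []
          for _ in range(reps):
              n = rng.poisson(density)
              if clumped:                            # items in 5 patches
                  centers = rng.random((5, 2))
                  pts = (centers[rng.integers(0, 5, n)]
                         + rng.normal(0, 0.02, (n, 2)))
                  pts %= 1.0                         # wrap to keep n fixed
              else:
                  pts = rng.random((n, 2))
              corners = rng.random((n_cores, 2)) * (1 - side)
              counts = np.array([np.sum((pts >= c).all(1)
                                        & (pts < c + side).all(1))
                                 for c in corners])
              est.append(counts.mean() / side ** 2)  # items per m^2
              det.append((counts > 0).mean())
          err = np.mean(np.abs(np.array(est) - density)) / density
          return err, np.mean(det)

      for clumped in (False, True):
          err, p = simulate(density=1000, core_area_cm2=50, n_cores=10,
                            clumped=clumped)
          print(f"clumped={clumped}: mean |relative error| ≈ {err:.0%}, "
                f"P(item in core) ≈ {p:.2f}")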

  19. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
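
    The key point can be illustrated with a toy computation. Under length-biased sampling the observed density is proportional to x f(x), so the naive sample mean overestimates the population mean; a Horvitz-Thompson style correction based on 1/x weights recovers it. A sketch (Python with numpy; the Weibull parameters are assumed, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(2)
        shape, scale = 2.0, 10.0                        # assumed Weibull parameters

        pop = scale * rng.weibull(shape, size=200_000)  # underlying population
        w = pop / pop.sum()                             # selection prob proportional to length
        sample = rng.choice(pop, size=5_000, p=w)       # length-biased sample

        print("population mean:   ", round(float(pop.mean()), 2))
        print("biased sample mean:", round(float(sample.mean()), 2))
        # Under length bias E*[1/X] = 1/mu, so n / sum(1/x) estimates mu:
        print("weighted estimate: ", round(float(sample.size / np.sum(1.0 / sample)), 2))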

  20. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research three well-known mechanistic equations attributed to Rittinger, Kick, and Bond available for predicting energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wet basis) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. Geometric mean sizes of particles were calculated using two methods: (1) Tyler sieves and particle size analysis, and (2) the Sauter mean diameter, calculated from the ratio of volume to surface estimated from measured length and width. The two mean diameters agreed well, pointing to the fact that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.
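
    The three comminution laws reduce to one-parameter fits: E = C_R (1/x2 - 1/x1) for Rittinger, E = C_K ln(x1/x2) for Kick, and E = C_B (1/sqrt(x2) - 1/sqrt(x1)) for Bond. Comparing their fit to energy-size data takes a few lines (Python with numpy; the data points below are invented placeholders on the same scale as the reported values, not the study's measurements):

        import numpy as np

        # Feed size x1 and product size x2 (mm), specific energy E (kWh/t) -- placeholders.
        x1 = np.array([25.4, 25.4, 25.4, 25.4])
        x2 = np.array([6.0, 2.5, 1.4, 0.8])
        E = np.array([1.4, 5.0, 12.0, 25.0])

        laws = {
            "Rittinger": 1 / x2 - 1 / x1,
            "Kick": np.log(x1 / x2),
            "Bond": 1 / np.sqrt(x2) - 1 / np.sqrt(x1),
        }
        for name, g in laws.items():
            C = (g @ E) / (g @ g)            # least-squares slope through the origin
            resid = E - C * g
            print(f"{name:9s} C = {C:6.2f}   SSE = {resid @ resid:6.2f}")

    Whichever law yields the smallest residual sum of squares on the real measurements is the best-fitting model; the study found this to be Rittinger's.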

  2. Exercise as Treatment for Anxiety: Systematic Review and Analysis

    PubMed Central

    Stonerock, Gregory L.; Hoffman, Benson M.; Smith, Patrick J.; Blumenthal, James A.

    2015-01-01

    Background Exercise has been shown to reduce symptoms of anxiety, but few studies have examined exercise in individuals pre-selected for high anxiety. Purpose To review and critically evaluate studies of exercise training in adults with either high levels of anxiety or an anxiety disorder. Methods We conducted a systematic review of randomized clinical trials (RCTs) in which anxious adults were randomized to an exercise or non-exercise control condition. Data were extracted concerning anxiety outcomes and study design. Existing meta-analyses were also reviewed. Results Evidence from 12 RCTs suggested benefits of exercise, for select groups, similar to established treatments and greater than placebo. However, most studies had significant methodological limitations, including small sample sizes, concurrent therapies, and inadequate assessment of adherence and fitness levels. Conclusions Exercise may be a useful treatment for anxiety, but the lack of data from rigorous, methodologically sound RCTs precludes any definitive conclusions about its effectiveness. PMID:25697132

  3. Effects of pre-analytical variables on flow cytometric diagnosis of canine lymphoma: A retrospective study (2009-2015).

    PubMed

    Comazzi, S; Cozzi, M; Bernardi, S; Zanella, D R; Aresu, L; Stefanello, D; Marconato, L; Martini, V

    2018-02-01

    Flow cytometry (FC) is increasingly being used for immunophenotyping and staging of canine lymphoma. The aim of this retrospective study was to assess pre-analytical variables that might influence the diagnostic utility of FC of lymph node (LN) fine needle aspirate (FNA) specimens from dogs with lymphoproliferative diseases. The study included 987 cases with LN FNA specimens sent for immunophenotyping that were submitted to a diagnostic laboratory in Italy from 2009 to 2015. Cases were grouped into 'diagnostic' and 'non-diagnostic'. Pre-analytical factors analysed by univariate and multivariate analyses were animal-related factors (breed, age, sex, size), operator-related factors (year, season, shipping method, submitting veterinarian) and sample-related factors (type of sample material, cellular concentration, cytological smears, artefacts). The submitting veterinarian, sample material, sample cellularity and artefacts affected the likelihood of having a diagnostic sample. The availability of specimens from different sites and of cytological smears increased the odds of obtaining a diagnostic result. Major artefacts affecting diagnostic utility included poor cellularity and the presence of dead cells. Flow cytometry on LN FNA samples yielded conclusive results in more than 90% of cases with adequate sample quality and sampling conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Feasibility of Recruiting a Diverse Sample of Men Who Have Sex with Men: Observation from Nanjing, China

    PubMed Central

    Tang, Weiming; Yang, Haitao; Mahapatra, Tanmay; Huan, Xiping; Yan, Hongjing; Li, Jianjun; Fu, Gengfeng; Zhao, Jinkou; Detels, Roger

    2013-01-01

    Background Respondent-driven sampling (RDS) is well recognized as a method for sampling hard-to-reach populations such as commercial sex workers, drug users and men who have sex with men (MSM). However, the feasibility of this sampling strategy for recruiting a diverse spectrum of these hidden populations is not yet well understood in developing countries. Methods In a cross-sectional study in Nanjing city, Jiangsu province, China, 430 MSM (including 9 seeds) were recruited over a 14-week study period using RDS. Information regarding socio-demographic characteristics and sexual risk behavior was collected, and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used to assess the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as the indicators of diversity. Results In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. Conclusion RDS was found to be an efficient and feasible sampling method for recruiting a diverse sample of MSM in a reasonable time. PMID:24244280

  5. Quasi-static acoustic tweezing thromboelastometry.

    PubMed

    Holt, R G; Luo, D; Gruver, N; Khismatullin, D B

    2017-07-01

    Essentials Blood coagulation measurement during contact with an artificial surface leads to unreliable data. Acoustic tweezing thromboelastometry is a novel non-contact method for coagulation monitoring. This method detects differences in the blood coagulation state within 10 min. Coagulation data were obtained using a much smaller sample volume (4 μL) than currently used. Background Thromboelastography is widely used as a tool to assess the coagulation status of critical care patients. It allows observation of changes in material properties of whole blood, beginning with early stages of clot formation and ending with clot lysis. However, the contact activation of the coagulation cascade at surfaces of thromboelastographic systems leads to inherent variability and unreliability in predicting bleeding or thrombosis risks. Objectives To develop acoustic tweezing thromboelastometry as a non-contact method for perioperative assessment of blood coagulation. Methods Acoustic tweezing is used to levitate microliter drops of biopolymer and human blood samples. By quasi-statically changing the acoustic pressure we control the sample drop location and deformation. Sample size, deformation and location are determined by digital imaging at each pressure. Results Simple Newtonian liquid solutions maintain a constant, reversible location vs. deformation curve. In contrast, the location/deformation curves for gelatin, alginate, whole blood and blood plasma uniquely change as the samples solidify. Increasing elasticity causes the sample to deform less, leading to steeper stress/strain curves. By extracting a linear regime slope, we show that whole blood or blood plasma exhibits a unique slope profile as it begins to clot. By exposing blood samples to pro- or antithrombotic agents, the slope profile changes, allowing detection of hyper- or hypocoagulable states. Conclusions We demonstrate that quasi-static acoustic tweezing can yield information about clotting onset, maturation and strength. The advantages of small sample size, non-contact and rapid measurement make this technique desirable for real-time monitoring of blood coagulation. © 2017 International Society on Thrombosis and Haemostasis.

  6. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  7. Comparison of microstructure of superplastically deformed synthetic materials and ultramylonite: Coalescence of secondary mineral grains via grain boundary sliding

    NASA Astrophysics Data System (ADS)

    Hiraga, T.; Miyazaki, T.; Tasaka, M.; Yoshida, H.

    2011-12-01

    Using very fine-grained aggregates of forsterite containing ~10 vol% of a secondary mineral phase such as periclase or enstatite, we have been able to demonstrate their superplasticity, that is, the achievement of tensile strains of more than a few hundred per cent (Hiraga et al. 2010). Superplastic deformation is commonly considered to proceed via grain boundary sliding (GBS), which results in grain switching in the samples. Hiraga et al. (2010) succeeded in detecting the operation of GBS by observing the coalescence of grains of the secondary phase in superplastically deformed samples. The secondary phase pins the motion of grain boundaries of the primary phase; however, the reduction in the number of grains of the secondary phase due to their coalescence allows grain growth of the primary phase. We analyzed the relationships between the grain sizes of the primary and secondary phases, between strain and grain size, and between strain and the number of coalesced grains in the superplastically deformed samples. The results support participation of all the grains of the primary phase in the grain switching process, indicating that grain boundary sliding accommodates almost the entire strain during deformation. Mechanical properties of these materials, such as their stress and grain size exponents of 1-2, do not conflict with this conclusion. We applied the relationships obtained from analyzing the superplastic materials to the microstructure of a natural sample that has been considered to have deformed via grain boundary sliding, that is, ultramylonite. The microstructure of the greenschist-grade ultramylonite reported by Fliervoet et al. (1997) was analyzed. Distributions of the mineral phases (i.e., quartz, plagioclase, K-feldspar and biotite) show distinct coalescence of the same mineral phases in the direction almost perpendicular to the foliation of the rock. The number of coalesced grains indicates that the strain the rock experienced is > 2. [reference] Hiraga et al. (2010) Nature 468, 1091-1094; Fliervoet et al. (1997) Journal of Structural Geology 19, 1495-1520

  8. Merging National Forest and National Forest Health Inventories to Obtain an Integrated Forest Resource Inventory – Experiences from Bavaria, Slovenia and Sweden

    PubMed Central

    Kovač, Marko; Bauer, Arthur; Ståhl, Göran

    2014-01-01

    Backgrounds, Material and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated through the statistical parameters of the variables growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables’ variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow the mean changes of variables to be detected with powers higher than 90%; the highest precision is attained for changes of growing stock volume and the lowest for changes of the share of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of inventories. Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
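
    For a non-stratified independent sample, the required size follows from n = (z(1-a/2) * s% / e%)^2, where s% is the coefficient of variation and e% the target relative error. A quick check (Python with scipy; the CV values and the 3% error target are illustrative, not the paper's figures):

        from math import ceil
        from scipy.stats import norm

        def required_n(cv_pct, err_pct, alpha=0.05):
            # Sample size so the relative error of the mean meets err_pct at level alpha.
            z = norm.ppf(1 - alpha / 2)
            return ceil((z * cv_pct / err_pct) ** 2)

        for cv in (60, 80, 120):      # e.g. CV of growing stock volume, per cent
            print(cv, required_n(cv, err_pct=3))

    The quadratic dependence on s% explains why variables with variation above 80% need disproportionately many plots for the same precision.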

  9. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  10. Penrose's law: Methodological challenges and call for data.

    PubMed

    Kalapos, Miklós Péter

    The investigation of the relationship between the sizes of the mental health population and the prison population, outlined in Penrose's Law, has received renewed interest in recent decades. The problems that arise in the course of the deinstitutionalization have repeatedly drawn attention to this issue. This article presents methodological challenges to the examination of Penrose's Law and retrospectively reviews historical data from empirical studies. A critical element of surveys is the sampling method; longitudinal studies seem appropriate here. The relationship between the numbers of psychiatric beds and the size of the prison population is inverse in most cases. However, a serious failure is that almost all of the data were collected in countries historically belonging to a Christian or Jewish cultural community. Only very limited conclusions can be drawn from these sparse and non-comprehensive data: a reduction in the number of psychiatric beds seems to be accompanied by increases in the numbers of involuntary admissions and forensic treatments and an accumulation of mentally ill persons in prisons. A kind of transinstitutionalization is currently ongoing. A pragmatic balance between academic epidemiological numbers and cultural narratives should be found in order to confirm or refute the validity of Penrose's Law. Unless comprehensive research is undertaken, it is impossible to draw any real conclusion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Perceived racism and mental health among Black American adults: a meta-analytic review.

    PubMed

    Pieterse, Alex L; Todd, Nathan R; Neville, Helen A; Carter, Robert T

    2012-01-01

    The literature indicates that perceived racism tends to be associated with adverse psychological and physiological outcomes; however, findings in this area are not yet conclusive. In this meta-analysis, we systematically reviewed 66 studies (total sample size of 18,140 across studies), published between January 1996 and April 2011, on the associations between racism and mental health among Black Americans. Using a random-effects model, we found a positive association between perceived racism and psychological distress (r = .20). We found a moderation effect for psychological outcomes, with anxiety, depression, and other psychiatric symptoms having a significantly stronger association than quality of life indicators. We did not detect moderation effects for type of racism scale, measurement precision, sample type, or type of publication. Implications for research and practice are discussed. (c) 2012 APA, all rights reserved.
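
    A random-effects pooling of correlations of this kind typically Fisher-z transforms each study estimate, adds the DerSimonian-Laird between-study variance to the within-study variances, and back-transforms the weighted mean. A compact sketch (Python with numpy; the three studies below are invented, not drawn from the 66 reviewed):

        import numpy as np

        r = np.array([0.15, 0.22, 0.25])   # invented study correlations
        n = np.array([200, 350, 150])      # invented study sample sizes

        z = np.arctanh(r)                  # Fisher z transform
        v = 1.0 / (n - 3)                  # within-study variance of z

        w = 1.0 / v                        # fixed-effect weights
        z_fixed = np.sum(w * z) / np.sum(w)
        Q = np.sum(w * (z - z_fixed) ** 2)            # Cochran heterogeneity statistic
        C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(r) - 1)) / C)       # DerSimonian-Laird tau^2

        w_re = 1.0 / (v + tau2)            # random-effects weights
        z_re = np.sum(w_re * z) / np.sum(w_re)
        print("pooled r:", round(float(np.tanh(z_re)), 3))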

  12. Corporate Health and Wellness and the Financial Bottom Line

    PubMed Central

    Conradie, Christina Susanna; van der Merwe Smit, Eon; Malan, Daniel Pieter

    2016-01-01

    Objective: The research objective was to test the hypothesis that corporate health and wellness contributed positively to South African companies’ financial results. Methods: The past share market performance of eligible healthy companies, based on Discovery's Healthy Company Index, was tracked under three investment scenarios and compared with the market performance on the basis of the JSE FTSE All Share Index. Results: The evidence supports the hypothesis that a culture of health and wellness provides a financial advantage, in so far as the portfolio of healthy companies consistently outperformed the market over the selected simulations. Conclusions: Given the limitations of the investigation, namely small sample size, the brevity of the period of investigation, and the reliance on accessibility sampling, the research provides the first and preliminary evidence supportive of the direct financial benefits of companies’ wellness programs. PMID:26849271

  13. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
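
    The diminishing-returns argument is easy to make concrete: for a two-arm comparison, power grows steeply at small n and flattens well before large n. A sketch using the standard normal approximation (Python with scipy; effect size d = 0.5 and two-sided alpha = 0.05 are assumed):

        from scipy.stats import norm

        def power_two_sample(n_per_group, d=0.5, alpha=0.05):
            # Approximate power of a two-sided two-sample comparison for effect size d.
            se = (2.0 / n_per_group) ** 0.5
            z_crit = norm.ppf(1 - alpha / 2)
            return (1 - norm.cdf(z_crit - d / se)) + norm.cdf(-z_crit - d / se)

        for n in (8, 16, 32, 64, 128, 256):
            print(n, round(power_two_sample(n), 3))

    Each doubling of n buys progressively less power (roughly 0.17, 0.29, 0.52, 0.81, 0.98, 1.00 in this example), which is the diminishing marginal return the authors point to.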

  14. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
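
    The species-richness effect of count size can be previewed with a simple rarefaction-style loop (Python with numpy; the 40-species community and its rank-abundance shape are invented, not the study's data):

        import numpy as np

        rng = np.random.default_rng(3)
        p = 1.0 / np.arange(1, 41)     # 40 species with long-tailed abundances
        p /= p.sum()

        for count in (50, 100, 150, 300, 600):     # individuals identified per sample
            richness = [np.unique(rng.choice(40, size=count, p=p)).size
                        for _ in range(200)]
            print(count, round(float(np.mean(richness)), 1))

    Expected richness keeps climbing with count size, which is why samples of different sizes are hard to compare directly and why minimum-count conventions exist.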

  15. VNIR reflectance spectroscopy of natural carbonate rocks: implication for remote sensing identification of fault damage zones

    NASA Astrophysics Data System (ADS)

    Traforti, Anna; Mari, Giovanna; Carli, Cristian; Demurtas, Matteo; Massironi, Matteo; Di Toro, Giulio

    2017-04-01

    Reflectance spectroscopy in the visible and near-infrared (VNIR) is a common technique used to study the mineral composition of Solar System bodies from remote sensing and in-situ robotic exploration. In the VNIR spectral range, both crystal field and vibrational overtone absorptions can be present, with spectral characteristics (i.e. albedo, slopes, absorption bands with different positions and depths) that vary depending on the composition and texture (e.g. grain size, roughness) of the sensed materials. Characterizing the spectral variability related to rock texture, especially grain size (i.e., both the size of rock components and the size of particulates), commonly allows a wide range of information to be obtained about the different geological processes modifying planetary surfaces. This work is aimed at characterizing how the grain size reduction associated with fault zone development produces reflectance variations in rock and mineral spectral signatures. To achieve this goal we present VNIR reflectance analysis of a set of fifteen rock samples collected at increasing distances from the fault core of the Vado di Corno fault zone (Campo Imperatore Fault System - Italian Central Apennines). The selected samples had similar contents of calcite and dolomite but different grain sizes (X-ray powder diffraction, optical and scanning electron microscope analyses). Consequently, differences in the spectral signature of the fault rocks should not be ascribed to mineralogical composition. For each sample, bidirectional reflectance spectra were acquired with a Field-Pro Spectrometer mounted on a goniometer, on crushed rock slabs reduced to grain sizes <800, <200, <63 and <10 μm and on intact fault zone rock slabs. The spectra were acquired on dry samples, at room temperature and normal atmospheric pressure. The source used was a tungsten halogen lamp with an illuminated spot area of ca. 0.5 cm² and incidence and emission angles of 30° and 0°, respectively. The spectral analysis of the crushed and intact rock slabs in the VNIR spectral range revealed that in both cases, with increasing grain size, (i) the reflectance decreases, while (ii) the VNIR spectrum slopes (calculated between wavelengths of 0.425-0.605 μm and 2.205-2.33 μm, respectively) and (iii) the depth of the main carbonate absorption band (the vibrational absorption band at a wavelength of ~2.3 μm) increase. In conclusion, grain size variations resulting from fault zone evolution (e.g., cumulated slip or the development of thick damage zones) produce reflectance variations in rock and mineral spectral signatures. Remote sensing analysis in the VNIR spectral range can be applied to identify the spatial distribution and extent of fault core and damage zone domains for industrial and seismic hazard applications. Moreover, the spectral characterization of carbonate-built rocks can be of great interest for the surface investigation of the inner planets (e.g. Earth and Mars) and outer bodies (e.g. the Galilean icy satellites). On these surfaces, carbonate minerals at different grain sizes are common and usually related to water and carbon distribution, with direct implications for potential life outside Earth (e.g. Mars).

  16. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  17. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    Only fragments of this report's abstract survived extraction. The recoverable text notes that reliable parameter estimates from capture-recapture models require sufficient sample sizes, adequate capture probabilities and low capture biases, in the context of noninvasive genetic sampling combined with capture-recapture methods (NGS-CR) for estimating abundance (N) and effective population size (Ne).

  18. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
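
    The textbook calculation plugs the pilot standard deviation s into n per group = 2 (z(1-a/2) + z(power))^2 s^2 / delta^2; because s is itself an uncertain estimate, treating it as known tends to leave the study under-powered, which is the issue examined here. A sketch of the naive calculation (Python with scipy; the pilot SD and the difference to detect are assumed values):

        from math import ceil
        from scipy.stats import norm

        def n_per_group(s, delta, alpha=0.05, power=0.80):
            # Normal-approximation sample size for a two-sample t test,
            # taking the pilot variance s**2 at face value.
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * s / delta) ** 2)

        print(n_per_group(s=10.0, delta=5.0))   # pilot SD 10, detect a difference of 5 -> 63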

  19. Efficacy and safety of Suanzaoren decoction for primary insomnia: a systematic review of randomized controlled trials

    PubMed Central

    2013-01-01

    Background Insomnia is a widespread health problem, but currently available conventional therapies have limitations. Suanzaoren decoction (SZRD) is a well-known classic Chinese herbal prescription that has been used to treat insomnia for more than a thousand years. The objective of this study was to evaluate the efficacy and safety of SZRD for insomnia. Methods A systematic literature search was performed in 6 databases up to July 2012 to identify randomized controlled trials (RCTs) of SZRD for patients with insomnia. The methodological quality of the RCTs was assessed independently using the Cochrane Handbook for Systematic Reviews of Interventions. Results Twelve RCTs with a total of 1376 adult participants were identified. No included trial scored more than 3/8 for methodological quality. The majority of the RCTs concluded that SZRD was significantly more effective than benzodiazepines for treating insomnia. Despite these positive outcomes, there were many methodological shortcomings in the studies reviewed, including insufficient information about randomization generation, absence of allocation concealment, lack of blinding and placebo control, absence of intention-to-treat analysis, lack of follow-up, selective publishing and reporting, and small sample sizes. Sources of clinical heterogeneity, such as diagnosis, intervention, control, and outcome measures, were also reviewed. Only 3 trials reported adverse events, whereas the other 9 trials provided no safety information. Conclusions Despite the apparently positive reported findings, there is insufficient evidence to support the efficacy of SZRD for insomnia, owing to the poor methodological quality and small number of the included trials. SZRD seems generally safe, but there is insufficient evidence to draw conclusions on safety because few studies reported adverse events. Further well-designed RCTs with large sample sizes are needed. PMID:23336848

  20. Changing Names with Changed Address: Integrated Taxonomy and Species Delimitation in the Holarctic Colymbetes paykulli Group (Coleoptera: Dytiscidae)

    PubMed Central

    Drotz, Marcus K.; Brodin, Tomas; Nilsson, Anders N.

    2015-01-01

    Species delimitation of geographically isolated forms is a long-standing problem in less studied insect groups. Often taxonomic decisions are based directly on morphological variation and lack a discussion of sample size and of the efficiency of migration barriers or the dispersal/migration capacity of the studied species. These problems are here exemplified in a water beetle complex from the Bering Sea region that separates North America from Eurasia. Only a few sampled specimens occur from this particular area, and they are mostly found in museum and private collections. Here we utilize the theory of integrated taxonomy to discuss the speciation of the Holarctic Colymbetes paykulli water beetle complex, which historically has included up to five species, of which only two are recognized today. Three delimitation methods are used: landmark-based morphometry of body shape, variation in the reticulation patterns of the pronotum exoskeleton, and sequence variation of the partial mitochondrial gene Cyt b. Our conclusion is that the Palearctic and Nearctic populations of C. paykulli are given the status of separate species, based on the fact that all methods showed significant separation between populations. As a consequence, the name of the Palearctic species is C. paykulli Erichson and the Nearctic species should be known as C. longulus LeConte. There is no clear support for delineation between the Palearctic and Nearctic populations of C. dahuricus based on mtDNA. However, a significant difference in size and reticulation patterns between the two regions is shown. The combined conclusion is that the C. dahuricus complex needs a more thorough investigation to fully disentangle its taxonomic status; therefore it is here still regarded as a Holarctic species. This study highlights the importance of studying several diagnosable characters that have the potential to discriminate evolutionary lineages during speciation. PMID:26619278

  1. Screening for coronary artery disease in patients with type 2 diabetes: a meta-analysis and trial sequential analysis

    PubMed Central

    Leitão, Cristiane B; Gross, Jorge L

    2017-01-01

    Objective To evaluate the efficacy of coronary artery disease screening in asymptomatic patients with type 2 diabetes and assess the statistical reliability of the findings. Methods Electronic databases (MEDLINE, EMBASE, Cochrane Library and clinicaltrials.org) were reviewed up to July 2016. Randomised controlled trials evaluating coronary artery disease screening in asymptomatic patients with type 2 diabetes and reporting cardiovascular events and/or mortality were included. Data were summarised with the Mantel-Haenszel relative risk. Trial sequential analysis (TSA) was used to evaluate the optimal sample size to detect a 40% reduction in outcomes. Main outcomes were all-cause mortality and cardiac events (non-fatal myocardial infarction and cardiovascular death); secondary outcomes were non-fatal myocardial infarction, myocardial revascularisations and heart failure. Results One hundred and thirty-five references were identified; 5 studies fulfilled the inclusion criteria, totalling 3315 patients, 117 all-cause deaths and 100 cardiac events. Screening for coronary artery disease was not associated with a decrease in risk for all-cause death (RR 0.95 (95% CI 0.66 to 1.35)) or cardiac events (RR 0.72 (95% CI 0.49 to 1.06)). TSA shows that futility boundaries were reached for all-cause mortality, so a relative risk reduction of 40% between treatments could be discarded. However, there is not enough information for firm conclusions on cardiac events. For the secondary outcomes no benefit or harm was identified; optimal sample sizes were not reached. Conclusion Currently available data do not support screening for coronary artery disease in patients with type 2 diabetes for preventing fatal events. Further studies are needed to assess the effects on cardiac events. PROSPERO CRD42015026627. PMID:28490559

  2. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  3. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared (FTIR) spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis revealed that the prepared sample is single-phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
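
    A crystallite size of this order is conventionally extracted from XRD line broadening via the Scherrer equation, D = K * lambda / (beta * cos(theta)). A one-line check (Python; the peak position and width below are assumed values chosen to land near the reported size, not the paper's measurements):

        from math import cos, radians

        def scherrer(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
            # Crystallite size (nm) from peak FWHM; Cu K-alpha wavelength assumed.
            beta = radians(fwhm_deg)             # FWHM in radians
            theta = radians(two_theta_deg / 2.0)
            return K * wavelength_nm / (beta * cos(theta))

        print(round(scherrer(fwhm_deg=0.45, two_theta_deg=35.5), 1))   # about 18.5 nm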

  4. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimation by the bootstrap procedure for comparing two parallel-design arms on continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate the features of these data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
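
    The bootstrap power estimate at a candidate n resamples each pilot arm with replacement, applies the planned test to every resampled pair, and takes the rejection fraction; n is then stepped up until the target power is reached. A bare-bones sketch (Python with numpy/scipy; the pilot arms are simulated here for illustration):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(4)
        pilot_a = rng.normal(0.0, 1.0, size=30)    # stand-in pilot data, arm A
        pilot_b = rng.normal(0.5, 1.0, size=30)    # stand-in pilot data, arm B

        def bootstrap_power(n_per_arm, reps=2000, alpha=0.05):
            hits = 0
            for _ in range(reps):
                a = rng.choice(pilot_a, size=n_per_arm, replace=True)
                b = rng.choice(pilot_b, size=n_per_arm, replace=True)
                # For non-normal data, swap in scipy.stats.mannwhitneyu here,
                # matching the test planned for the final analysis.
                if ttest_ind(a, b).pvalue < alpha:
                    hits += 1
            return hits / reps

        for n in (40, 60, 80, 100):
            print(n, bootstrap_power(n))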

  5. Comparative analysis of sustainable consumption and production in Visegrad region - conclusions for textile and clothing sector

    NASA Astrophysics Data System (ADS)

    Koszewska, M.; Militki, J.; Mizsey, P.; Benda-Prokeinova, R.

    2017-10-01

    Gradual environmental degradation, the shrinking of non-renewable resources, and lower quality of life arise directly or indirectly from snowballing consumption. These unfavorable processes increasingly concern the textile and clothing sector and are increasingly being felt in the Visegrad Region (V4). The objective of the article was to assess current consumption patterns in V4 countries, identify the factors that influence those patterns and, finally, to draw conclusions for more sustainable consumption and production models, as well as to make a comparative analysis of the results across V4 countries. A consumer survey was conducted to examine V4 citizens’ attitudes and behaviors in the context of sustainable consumption. To ensure sample size and comparability across countries, 2000 randomly selected V4 citizens, aged 18 and over, were interviewed. To analyze the supply side of the market and the legal framework, desk research was used. The results allowed some guidelines to be given for a joint V4 strategy for solving the ecological and social problems of V4 countries, as well as conclusions for the textile and clothing sector.

  6. Evaluation of the Effectiveness of Chemical Dependency Counseling Course Based on Patrick and Partners

    PubMed Central

    Keshavarz, Yousef; Ghaedi, Sina; Rahimi-Kashani, Mansure

    2012-01-01

    Background The twelve-step program is one of the programs administered for overcoming drug abuse. In this study, the effectiveness of the chemical dependency counseling course was investigated using a hybrid model. Methods In a survey with a sample size of 243, participants were selected using a stratified random sampling method. A questionnaire was used for collecting data, and a one-sample t-test was employed for data analysis. Findings The chemical dependency counseling courses were effective from the point of view of graduates, chiefs of rehabilitation centers, rescuers and their families, and ultimately managers of the rebirth society, but not from the point of view of professors and lecturers. The last group evaluated the effectiveness of the chemical dependency counseling courses only at the performance level. Conclusion It seems that the chemical dependency counseling courses had appropriate effectiveness and led to changes in attitudes, increased awareness, knowledge and combined experience, and ultimately increased the efficiency of counseling. PMID:24494132

  7. Health sciences librarians' attitudes toward the Academy of Health Information Professionals

    PubMed Central

    Baker, Lynda M.; Kars, Marge; Petty, Janet

    2004-01-01

    Objectives: The purpose of the study was to ascertain health sciences librarians' attitudes toward the Academy of Health Information Professionals (AHIP). Sample: Systematic sampling was used to select 210 names from the list of members of the Midwest Chapter of the Medical Library Association. Methods: A questionnaire containing open- and closed-ended questions was used to collect the data. Results: A total of 135 usable questionnaires were returned. Of the respondents, 34.8% are members of the academy and most are at the senior or distinguished member levels. The academy gives them a sense of professionalism and helps them to keep current with new trends. The majority of participants (65.2%) are not members of the academy. Among the various reasons proffered are that neither institutions nor employers require it and that there is no obvious benefit to belonging to the academy. Conclusions: More research needs to be done with a larger sample size to determine the attitudes of health sciences librarians, nationwide, toward the academy. PMID:15243638

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbertson, Robert D.; Patterson, Brian M.; Smith, Zachary

    An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 °C, 65 °C, and 80 °C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, when FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro computed tomography (micro CT) imaging was also employed to study the morphology of the foams. Over the course of the aging study the foams decreased in size by a magnitude of 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry. However, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

  9. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  10. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  11. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for analysis, with particular interest in sample-size dependency. Several important phenomena were observed: (a) the rate of transition from brittleness to ductility is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  12. Size-assortative mating and sexual size dimorphism are predictable from simple mechanics of mate-grasping behavior

    PubMed Central

    2010-01-01

    Background A major challenge in evolutionary biology is to understand the typically complex interactions between diverse counter-balancing factors of Darwinian selection for size-assortative mating and sexual size dimorphism. Rarely can a simple mechanism provide a major explanation of these phenomena. The mechanics of behaviors can predict animal morphology, as in adaptations to locomotion in animals from various taxa, but its potential to predict size-assortative mating and its evolutionary consequences has been less explored. Mate-grasping by males, using specialized adaptive morphologies of their forelegs, midlegs or even antennae wrapped around the female body at specific locations, is a general mating strategy of many animals, but the contribution of the mechanics of this widespread behavior to the evolution of mating behavior and sexual size dimorphism has been largely ignored. Results Here, we explore the consequences of the simple, and previously ignored, fact that in a grasping posture the position of the male's grasping appendages relative to the female's body is often a function of the body size difference between the sexes. Using an approach taken from robot mechanics, we model coercive grasping of females by water strider Gerris gracilicornis males during mating initiation struggles. We determine that the male optimal size (relative to the female size), which gives the males the highest grasping force, properly predicts the experimentally measured highest mating success. Through field sampling and simulation modeling of a natural population, we determine that this simple mechanical model, which ignores most of the other hypothetical counter-balancing selection pressures on body size, is sufficient to account for the size-assortative mating pattern as well as the species-specific sexual dimorphism in body size of G. gracilicornis. Conclusion The results indicate how a simple and previously overlooked physical mechanism common to many taxa is sufficient to account for, or importantly contribute to, size-assortative mating and its consequences for the evolution of sexual size dimorphism. PMID:21092131

  13. Neurocognitive performance in family-based and case-control studies of schizophrenia

    PubMed Central

    Gur, Ruben C.; Braff, David L.; Calkins, Monica E.; Dobie, Dorcas J.; Freedman, Robert; Green, Michael F.; Greenwood, Tiffany A.; Lazzeroni, Laura C.; Light, Gregory A.; Nuechterlein, Keith H.; Olincy, Ann; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Sprock, Joyce; Stone, William S.; Sugar, Catherine A.; Swerdlow, Neal R.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Gur, Raquel E.

    2014-01-01

    Background Neurocognitive deficits in schizophrenia (SZ) are established and the Consortium on the Genetics of Schizophrenia (COGS) investigated such measures as endophenotypes in family-based (COGS-1) and case-control (COGS-2) studies. By requiring family participation, family-based sampling may result in samples that vary demographically and perform better on neurocognitive measures. Methods The Penn computerized neurocognitive battery (CNB) evaluates accuracy and speed of performance for several domains and was administered across sites in COGS-1 and COGS-2. Most tests were included in both studies. COGS-1 included 328 patients with SZ and 497 healthy comparison subjects (HCS) and COGS-2 included 1195 patients and 1009 HCS. Results Demographically, COGS-1 participants were younger, more educated, with more educated parents and higher estimated IQ compared to COGS-2 participants. After controlling for demographics, the two samples produced very similar performance profiles compared to their respective controls. As expected, performance was better and with smaller effect sizes compared to controls in COGS-1 relative to COGS-2. Better performance was most pronounced for spatial processing while emotion identification had large effect sizes for both accuracy and speed in both samples. Performance was positively correlated with functioning and negatively with negative and positive symptoms in both samples, but correlations were attenuated in COGS-2, especially with positive symptoms. Conclusions Patients ascertained through family-based design have more favorable demographics and better performance on some neurocognitive domains. Thus, studies that use case-control ascertainment may tap into populations with more severe forms of illness that are exposed to less favorable factors compared to those ascertained with family-based designs. PMID:25432636

  14. Is High Resolution Melting Analysis (HRMA) Accurate for Detection of Human Disease-Associated Mutations? A Meta Analysis

    PubMed Central

    Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao

    2011-01-01

    Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we have conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95%CI: 97.7–99.3; I2 = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95%CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the techniques was influenced by sample size and instrument type but by not sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806

  15. Relation of average and highest solvent vapor concentrations in workplaces in small to medium enterprises and large enterprises.

    PubMed

    Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki

    2006-04-01

The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and type of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at ≥5 grid crossing points in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (the estimated highest concentration, or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed by use of the additivity formula. From the workplace concentrations at the ≥5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 times (in large enterprises with >300 employees) to 1.7 times (in small to medium (SM) enterprises with ≤300 employees) greater than RWC. When SWPs were classified into SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observations, discussed in reference to previous publications, suggest that RWC, EHC and the EHC/RWC ratio vary substantially among different types of solvent work as well as enterprise sizes, and are typically highest in printing SWPs in SM enterprises.

  16. A cross sectional study on factors associated with harmful traditional practices among children less than 5 years in Axum town, north Ethiopia, 2013

    PubMed Central

    2014-01-01

Background Every social grouping in the world has its own cultural practices and beliefs, which guide its members on how they should live or behave. Harmful traditional practices that affect children include female genital mutilation, milk-teeth extraction, food taboos, uvula cutting, keeping babies out of the sun, and feeding fresh butter to newborn babies. The objective of this study was to assess factors associated with harmful traditional practices among children less than 5 years of age in Axum town, North Ethiopia. Methods A community-based cross-sectional study was conducted among 752 participants selected using multi-stage sampling; simple random sampling was used to select ketenas from all kebelles of Axum town. After proportional allocation of the sample size, systematic random sampling was used to recruit the study participants. Data were collected using an interviewer-administered Tigrigna-language questionnaire and were entered and analyzed using SPSS version 16. Descriptive statistics were calculated and logistic regression was used to analyze the data. Results Of the total sample, 50.7% of the children were female, the mean age of the children was 26.28 months, and the majority of mothers had no formal education. About 87.8% of mothers had performed at least one traditional practice on their children; uvula cutting was practiced on 86.9% of children, followed by milk-teeth extraction (12.5%) and eyebrow incision (2.4%). Fear of swelling, pus and rupture of the uvula was the main reason for performing uvula cutting. Conclusion The factors associated with harmful traditional practices were the educational status, occupation and religion of the mothers, and whether harmful traditional practices had been performed on the mothers themselves. PMID:24952584

  17. qFibrosis: A fully-quantitative innovative method incorporating histological features to facilitate accurate fibrosis scoring in animal model and chronic hepatitis B patients

    PubMed Central

    Tai, Dean C.S.; Wang, Shi; Cheng, Chee Leong; Peng, Qiwen; Yan, Jie; Chen, Yongpeng; Sun, Jian; Liang, Xieer; Zhu, Youfu; Rajapakse, Jagath C.; Welsch, Roy E.; So, Peter T.C.; Wee, Aileen; Hou, Jinlin; Yu, Hanry

    2014-01-01

Background & Aims There is an increasing need for accurate assessment of liver fibrosis/cirrhosis. We aimed to develop qFibrosis, a fully-automated assessment method combining quantification of histopathological architectural features, to address unmet needs in core biopsy evaluation of fibrosis in chronic hepatitis B (CHB) patients. Methods qFibrosis was established as a combined index based on 87 parameters of architectural features. Images acquired from 25 Thioacetamide-treated rat samples and 162 CHB core biopsies were used to train and test qFibrosis and to demonstrate its reproducibility. qFibrosis scoring was analyzed employing Metavir and Ishak fibrosis staging as standard references, and collagen proportionate area (CPA) measurement for comparison. Results qFibrosis faithfully and reliably recapitulates Metavir fibrosis scores, as it can identify differences between all stages in both animal samples (p <0.001) and human biopsies (p <0.05). It is robust to sample size, allowing for discrimination of different stages in samples of different sizes (area under the curve (AUC): 0.93–0.99 for animal samples: 1–16 mm2; AUC: 0.84–0.97 for biopsies: 10–44 mm in length). qFibrosis can significantly predict staging underestimation in suboptimal biopsies (<15 mm) and under- and over-scoring by different pathologists (p <0.001). qFibrosis can also differentiate between Ishak stages 5 and 6 (AUC: 0.73, p = 0.008), suggesting the possibility of monitoring intra-stage cirrhosis changes. Notably, qFibrosis demonstrates superior performance to CPA on all counts. Conclusions qFibrosis can improve fibrosis scoring accuracy and throughput, thus allowing for reproducible and reliable analysis of efficacies of anti-fibrotic therapies in clinical research and practice. PMID:24583249

  18. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  19. China's Only Children and Psychopathology: A Quantitative Synthesis

    PubMed Central

    Falbo, Toni; Hooper, Sophia Y.

    2015-01-01

    The goal of this study is to synthesize quantitatively the results of studies of psychopathology among Chinese only children. Since 1979, China's one-child policy has generated large numbers of only children, especially in large urban centers, where the one-child family has become a social norm. Motivated by concern for mental health, 22 studies, based on the SCL-90, have been published that compare the scores of only children to their peers with siblings. The raw effect sizes generated by each study underwent adjustments in order to enhance the reliability of the findings, including the identification and replacement of outliers, and weighting by inverse-sample size. In addition, analyses were conducted to evaluate the degree of publication bias exhibited by this collection of studies and the results from the SCL-90 studies were compared to studies using alternative measures of anxiety and depression. Overall, the synthesis found small, but significant advantages for only children compared to their peers with siblings, regardless of subscale. However, moderators of this only-child effect were also found: only children as college students reported significantly fewer symptoms, regardless of subscale; while only children as military recruits reported more symptoms, although the findings about military recruits received less support from the analyses. Furthermore, the size of the only-child advantage was found to be greater for only children born after the policy. Conclusions based on this synthesis are limited by the fact that this body of studies is based on convenience samples of relatively successful youth. PMID:25894306

  20. Microcephaly genes evolved adaptively throughout the evolution of eutherian mammals

    PubMed Central

    2014-01-01

    Background Genes associated with the neurodevelopmental disorder microcephaly display a strong signature of adaptive evolution in primates. Comparative data suggest a link between selection on some of these loci and the evolution of primate brain size. Whether or not either positive selection or this phenotypic association are unique to primates is unclear, but recent studies in cetaceans suggest at least two microcephaly genes evolved adaptively in other large brained mammalian clades. Results Here we analyse the evolution of seven microcephaly loci, including three recently identified loci, across 33 eutherian mammals. We find extensive evidence for positive selection having acted on the majority of these loci not just in primates but also across non-primate mammals. Furthermore, the patterns of selection in major mammalian clades are not significantly different. Using phylogenetically corrected comparative analyses, we find that the evolution of two microcephaly loci, ASPM and CDK5RAP2, are correlated with neonatal brain size in Glires and Euungulata, the two most densely sampled non-primate clades. Conclusions Together with previous results, this suggests that ASPM and CDK5RAP2 may have had a consistent role in the evolution of brain size in mammals. Nevertheless, several limitations of currently available data and gene-phenotype tests are discussed, including sparse sampling across large evolutionary distances, averaging gene-wide rates of evolution, potential phenotypic variation and evolutionary reversals. We discuss the implications of our results for studies of the genetic basis of brain evolution, and explicit tests of gene-phenotype hypotheses. PMID:24898820

  1. The legibility of prescription medication labelling in Canada

    PubMed Central

    Ahrens, Kristina; Krishnamoorthy, Abinaya; Gold, Deborah; Rojas-Fernandez, Carlos H.

    2014-01-01

    Introduction: The legibility of medication labelling is a concern for all Canadians, because poor or illegible labelling may lead to miscommunication of medication information and poor patient outcomes. There are currently few guidelines and no regulations regarding print standards on medication labels. This study analyzed sample prescription labels from Ontario, Canada, and compared them with print legibility guidelines (both generic and specific to medication labels). Methods: Cluster sampling was used to randomly select a total of 45 pharmacies in the tri-cities of Kitchener, Waterloo and Cambridge. Pharmacies were asked to supply a regular label with a hypothetical prescription. The print characteristics of patient-critical information were compared against the recommendations for prescription labels by pharmaceutical and health organizations and for print accessibility by nongovernmental organizations. Results: More than 90% of labels followed the guidelines for font style, contrast, print colour and nonglossy paper. However, only 44% of the medication instructions met the minimum guideline of 12-point print size, and none of the drug or patient names met this standard. Only 5% of the labels were judged to make the best use of space, and 51% used left alignment. None of the instructions were in sentence case, as is recommended. Discussion: We found discrepancies between guidelines and current labels in print size, justification, spacing and methods of emphasis. Conclusion: Improvements in pharmacy labelling are possible without moving to new technologies or changing the size of labels and would be expected to enhance patient outcomes. PMID:24847371

  2. Grey literature in meta-analyses.

    PubMed

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  3. Chemomechanical preparation by hand instrumentation and by Mtwo engine-driven rotary files, an ex vivo study

    PubMed Central

    Krajczár, Károly; Tigyi, Zoltán; Papp, Viktória; Sára, Jeges; Tóth, Vilmos

    2012-01-01

Objective: To compare the disinfecting efficacy of sodium hypochlorite irrigation during root canal preparation with stainless steel hand files (taper 0.02) and nickel-titanium Mtwo files (taper 0.04-0.06). Study Design: 40 extracted human teeth were sterilized and then inoculated with Enterococcus faecalis (ATCC 29212). After a 6-day incubation period, the root canals were prepared by hand with K-files (n=20) or with engine-driven Mtwo files (VDW, Munich, Germany) (n=20). Irrigation was carried out with 2.5% NaOCl in both cases. Samples were taken from the root canals before preparation and after preparation with instruments #25 and #35, and bacterial counts were determined as colony-forming units (CFU). Results: A significant reduction in bacterial count was found after filing in both groups. The number of bacteria kept decreasing as the apical preparation diameter was extended. There was no significant difference in bacterial counts between hand and engine-driven instrumentation at the same apical preparation size. Statistical analysis was carried out with the Mann-Whitney test, paired t-test and independent-sample t-test. Conclusions: A significant reduction in CFU was achieved after root canal preparation completed with 2.5% NaOCl irrigation, with both stainless steel hand and nickel-titanium rotary files. The root canal remained slightly infected after chemomechanical preparation in both groups. Key words: Chemomechanical preparation, root canal disinfection, nickel-titanium, conicity, greater taper, apical size. PMID:24558545

  4. A novel method to detect unlabeled inorganic nanoparticles and submicron particles in tissue by sedimentation field-flow fractionation

    PubMed Central

    Deering, Cassandra E; Tadjiki, Soheyl; Assemi, Shoeleh; Miller, Jan D; Yost, Garold S; Veranth, John M

    2008-01-01

    A novel methodology to detect unlabeled inorganic nanoparticles was experimentally demonstrated using a mixture of nano-sized (70 nm) and submicron (250 nm) silicon dioxide particles added to mammalian tissue. The size and concentration of environmentally relevant inorganic particles in a tissue sample can be determined by a procedure consisting of matrix digestion, particle recovery by centrifugation, size separation by sedimentation field-flow fractionation (SdFFF), and detection by light scattering. Background Laboratory nanoparticles that have been labeled by fluorescence, radioactivity, or rare elements have provided important information regarding nanoparticle uptake and translocation, but most nanomaterials that are commercially produced for industrial and consumer applications do not contain a specific label. Methods Both nitric acid digestion and enzyme digestion were tested with liver and lung tissue as well as with cultured cells. Tissue processing with a mixture of protease enzymes is preferred because it is applicable to a wide range of particle compositions. Samples were visualized via fluorescence microscopy and transmission electron microscopy to validate the SdFFF results. We describe in detail the tissue preparation procedures and discuss method sensitivity compared to reported levels of nanoparticles in vivo. Conclusion Tissue digestion and SdFFF complement existing techniques by precisely identifying unlabeled metal oxide nanoparticles and unambiguously distinguishing nanoparticles (diameter<100 nm) from both soluble compounds and from larger particles of the same nominal elemental composition. This is an exciting capability that can facilitate epidemiological and toxicological research on natural and manufactured nanomaterials. PMID:19055780

  5. Identification of Martian Regolith Sulfur Components in Shergottites Using Sulfur K Xanes and Fe/S Ratios

    NASA Technical Reports Server (NTRS)

    Sutton, S. R.; Ross, D. K.; Rao, M. N.; Nyquist, L. E.

    2014-01-01

    Based on isotopic anomalies in Kr and Sm, Sr-isotopes, S-isotopes, XANES results on S-speciation, Fe/S ratios in sulfide immiscible melts [5], and major element correlations with S determined in impact glasses in EET79001 Lith A & Lith B and Tissint, we have provided very strong evidence for the occurrence of a Martian regolith component in some impact melt glasses in shergottites. Using REE measurements by LA-ICP-MS in shergottite impact glasses, Barrat and co-workers have recently reported conflicting conclusions about the occurrence of Martian regolith components: (a) Positive evidence was reported for a Tissint impact melt, but (b) Negative evidence for impact melt in EET79001 and another impact melt in Tissint. Here, we address some specific issues related to sulfur speciation and their relevance to identifying Martian regolith components in impact glasses in EET79001 and Tissint using sulfur K XANES and Fe/S ratios in sulfide immiscible melts. XANES and FE-SEM measurements in approx. 5 micron size individual sulfur blebs in EET79001 and Tissint glasses are carried out by us using sub-micron size beams, whereas Barrat and coworkers used approx. 90 micron size laser spots for LA- ICP-MS to determine REE abundances in bulk samples of the impact melt glasses. We contend that Martian regolith components in some shergottite impact glasses are present locally, and that studying impact melts in various shergottites can give evidence both for and against regolith components because of sample heterogeneity.

  6. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
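
    The program's own band-recovery formulas are not reproduced in this record, but the shape of a CV-driven sample size computation can be sketched with a deliberately simplified binomial estimator of annual survival. The estimator, the survival values, and the 10% CV target below are illustrative assumptions, not the USGS program's routines.

        from math import ceil

        # Sketch: smallest release size n so that a binomial survival estimate
        # S_hat reaches a target coefficient of variation,
        # CV(S_hat) = sqrt((1 - S) / (n * S)).
        def bands_needed(survival, target_cv):
            return ceil((1.0 - survival) / (survival * target_cv ** 2))

        for s in (0.4, 0.6, 0.8):
            print(f"S = {s}: n >= {bands_needed(s, target_cv=0.10)}")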

  7. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

Probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters or the size of the identified group, and it differs for groups of 2, 3, 4, … members. Because probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small value of the probability of coincidental similarity between two orbits. A sketch of the simulation idea follows.
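
    In the sketch below, a plain Euclidean distance over roughly scaled orbital elements stands in for the D-criteria normally used for orbit similarity, and the element ranges and the 0.05 threshold are illustrative assumptions only; it shows how the chance of at least one coincidental pair grows with sample size.

        import numpy as np

        rng = np.random.default_rng(0)

        def p_random_pair(n_orbits, threshold, reps=500):
            """Probability that a synthetic sample contains at least one
            coincidentally 'similar' pair under a toy distance measure."""
            hits = 0
            for _ in range(reps):
                # columns: q (AU), e, and three angles rescaled to [0, 1]
                elems = np.column_stack([
                    rng.uniform(0.1, 1.0, n_orbits),
                    rng.uniform(0.0, 1.0, n_orbits),
                    rng.uniform(0.0, np.pi / 2, n_orbits) / np.pi,
                    rng.uniform(0.0, 2 * np.pi, n_orbits) / (2 * np.pi),
                    rng.uniform(0.0, 2 * np.pi, n_orbits) / (2 * np.pi),
                ])
                d = np.linalg.norm(elems[:, None, :] - elems[None, :, :], axis=-1)
                np.fill_diagonal(d, np.inf)
                hits += d.min() < threshold
            return hits / reps

        for n in (50, 100, 200):
            print(n, p_random_pair(n, threshold=0.05))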

  8. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
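
    A stripped-down version of the simulation idea: with no true effect (RRR = 0%), track how often a cumulative meta-analysis nevertheless shows RRR > 20%. The trial size, control-group risk, and the simple pooled-counts estimator below are illustrative assumptions, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(1)
        control_risk, n_trials, n_per_arm, reps = 0.10, 20, 100, 2000
        over = np.zeros(n_trials)

        for _ in range(reps):
            ce = rng.binomial(n_per_arm, control_risk, n_trials).cumsum()
            te = rng.binomial(n_per_arm, control_risk, n_trials).cumsum()  # true RRR = 0
            ce = np.maximum(ce, 1)            # guard against division by zero
            rr = te / ce                      # pooled risk ratio (equal arm sizes)
            over += (1 - rr) > 0.20           # observed RRR exceeds 20%

        for k in (1, 5, 10, 20):
            print(f"{k:2d} trials ({k * 2 * n_per_arm:5d} patients): "
                  f"P(RRR > 20%) ~ {over[k - 1] / reps:.3f}")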

  9. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, covering two lot-size cases. The first case is a small lot size, with nonconformities modeled by a hypergeometric distribution function; the second is a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
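
    The core optimisation can be sketched as a search for the smallest single sampling plan (n, c) meeting producer's and consumer's risks, with the hypergeometric model for small lots and the Poisson model for large ones, as in the abstract. The LTPD value and the risk levels are illustrative assumptions, and the two-rank structure itself is not reproduced here.

        from scipy.stats import hypergeom, poisson

        def find_plan(lot_size, aql, ltpd, alpha=0.05, beta=0.10, small_lot=True):
            """Smallest (sample size n, acceptance number c) meeting both risks."""
            for n in range(1, lot_size + 1):
                for c in range(n):
                    if small_lot:  # hypergeometric model for a small lot
                        p_good = hypergeom.cdf(c, lot_size, round(aql * lot_size), n)
                        p_bad = hypergeom.cdf(c, lot_size, round(ltpd * lot_size), n)
                    else:          # Poisson model for a large lot
                        p_good = poisson.cdf(c, n * aql)
                        p_bad = poisson.cdf(c, n * ltpd)
                    if p_good >= 1 - alpha and p_bad <= beta:
                        return n, c
            return None

        print(find_plan(lot_size=500, aql=0.01, ltpd=0.08))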

  10. Synthesis of nanocomposites comprising iron and barium hexaferrites

    NASA Astrophysics Data System (ADS)

    Pal, M.; Bid, S.; Pradhan, S. K.; Nath, B. K.; Das, D.; Chakravorty, D.

    2004-02-01

Composites of nanometre-sized α-iron and barium hexaferrite phases have been synthesized by the ceramic processing route. Pure barium hexaferrite (BaO·6Fe2O3) was first prepared by calcination of the precursor oxides at a maximum temperature of 1200°C for 4 h. By subjecting the resulting powder, with a particle size of the order of 1 μm, to a reduction treatment in the temperature range 500-650°C for a period varying from 10 to 15 min, it was possible to obtain a composite consisting of nanosized barium hexaferrite and α-Fe. At a reduction temperature of 650°C for a period greater than 15 min, all the ferrite phase was converted to α-Fe and Ba, the particle sizes being 59.4 and 43.6 nm, respectively. These conclusions are based on X-ray diffraction and Mossbauer studies of different samples. During reduction, H+ ions are introduced into the hexaferrite crystallite. It is believed that a tensile stress breaks the crystals up into smaller dimensions, and the reduction brings about the growth of nanosized α-Fe and barium around the hexaferrite particles. Magnetic measurements show coercivity values for the reduced samples in the range 120-440 Oe and saturation magnetization varying from 158 to 53.7 emu/g. These values have been ascribed to the formation and growth of α-Fe particles as the reduction treatment is extended. By heating the nanocomposites at a temperature of 1000°C for 1 h in ordinary atmosphere, it was found that they were reconverted to the barium hexaferrite phase with a particle size of ~182.3 nm. The reaction described in this study is thus reversible.

  11. Increased Pouch Sizes and Resulting Changes in the Amounts of Nicotine and Tobacco-Specific N-Nitrosamines in Single Pouches of Camel Snus and Marlboro Snus

    PubMed Central

    Jensen, Joni; Biener, Lois; Bliss, Robin L.; Hecht, Stephen S.; Hatsukami, Dorothy K.

    2012-01-01

    Introduction: Initial analyses of the novel smokeless tobacco products Camel Snus and Marlboro Snus demonstrated that these products contain relatively low amounts of nicotine and the carcinogenic tobacco-specific nitrosamines N’-nitrosonornicotine (NNN) and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), as compared with traditional smokeless products. It is unknown whether the modifications in packaging, flavors, and pouch sizes that occurred for both Camel Snus and Marlboro Snus since their first introduction to the market were accompanied by any changes in nicotine or nitrosamine levels. Methods: We examined the available data on nicotine and NNN and NNK levels in 60 samples of Camel Snus and 87 samples of Marlboro Snus that were analyzed in our laboratory between 2006 and 2010. Results: Due to the increase in pouch size, the amounts of total nicotine, unprotonated nicotine, and the sum of NNN and NNK present in the large Camel Snus pouches released in 2010 are 1.9-fold, 2.4-fold, and 3.3-fold higher, respectively, than in the original smaller pouches that entered the market in 2006. Total and unprotonated nicotine content in the current version of Marlboro Snus pouches are 2.1-fold and 1.9-fold higher, respectively, and the sum of NNN and NNK is 1.5-fold lower than in the original version. Conclusions: We observed an increase in nicotine content in single portions of Camel Snus and Marlboro Snus, and an increase in tobacco-specific N-nitrosamine content in single portions of Camel Snus, due to the increases in pouch size that occurred between 2006 and 2010. This finding stresses the importance of tobacco product regulation and ingredient disclosures. PMID:22259150

  12. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
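
    The sample size machinery that such a model feeds into is the standard two-proportion calculation, sketched below; the paper's contribution is showing that the assumed control-group PID risk and relative risk must be mutually consistent with an explicit natural-history model. All numbers here are illustrative assumptions, not the cited RCT's values.

        from math import ceil
        from scipy.stats import norm

        def n_per_group(p_control, rr, alpha=0.05, power=0.80):
            """Normal-approximation sample size for comparing two proportions."""
            p1, p2 = p_control, p_control * rr
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                        / (p1 - p2) ** 2)

        for rr in (0.5, 0.6, 0.7):
            print(f"RR = {rr}: n per group = {n_per_group(0.03, rr)}")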

  13. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

In the processing of transparent ceramics, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.

  14. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.
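
    For readers unfamiliar with the specification, a two-way fixed effects elasticity model regresses log emissions on log population with country and year dummies; the coefficient on log population is then read as an elasticity. The sketch below uses synthetic data with a built-in elasticity of 0.8 purely to illustrate the form, not the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        rows = [(c, y) for c in range(30) for y in range(1960, 2006, 5)]
        df = pd.DataFrame(rows, columns=["country", "year"])
        df["log_pop"] = rng.normal(16, 1, len(df))
        df["log_co2"] = 0.8 * df["log_pop"] + rng.normal(0, 0.3, len(df))

        # country and year dummies give the two-way fixed effects
        fit = smf.ols("log_co2 ~ log_pop + C(country) + C(year)", data=df).fit()
        print(f"estimated elasticity: {fit.params['log_pop']:.3f}")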

  15. Body Size Estimation from Early to Middle Childhood: Stability of Underestimation, BMI, and Gender Effects.

    PubMed

    Steinsbekk, Silje; Klöckner, Christian A; Fildes, Alison; Kristoffersen, Pernille; Rognsås, Stine L; Wichstrøm, Lars

    2017-01-01

Individuals who are overweight are more likely to underestimate their body size than those who are normal weight, and overweight underestimators are less likely to engage in weight loss efforts. Underestimation of body size might represent a barrier to prevention and treatment of overweight; thus insight into how underestimation of body size develops and tracks through the childhood years is needed. The aim of the present study was therefore to examine stability in children's underestimation of body size, exploring predictors of underestimation over time. The prospective path from underestimation to BMI was also tested. In a Norwegian cohort of 6-year-olds, followed up at ages 8 and 10 (analysis sample: n = 793), body size estimation was captured by the Children's Body Image Scale, and height and weight were measured and BMI calculated. Overall, children were more likely to underestimate than overestimate their body size. Individual stability in underestimation was modest, but significant. Higher BMI predicted future underestimation, even when previous underestimation was adjusted for, but there was no evidence for the opposite direction of influence. Boys were more likely than girls to underestimate their body size at ages 8 and 10 (age 8: 38.0% vs. 24.1%; age 10: 57.9% vs. 30.8%) and showed a steeper increase in underestimation with age compared to girls. In conclusion, the majority of 6-, 8-, and 10-year-olds correctly estimate their body size (prevalence ranging from 40 to 70% depending on age and gender), although a substantial portion perceived themselves to be thinner than they actually were. Higher BMI forecasted future underestimation, but underestimation did not increase the risk for excessive weight gain in middle childhood.

  16. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

Sample size calculation is vital for a confirmatory clinical trial, since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and describes the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for the PoC, and the sample size used for the PoC.
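
    The Bayesian ingredients the article describes can be illustrated with a small Monte Carlo: place a prior on the true effect, then estimate the probability that a PoC study of size n clears a decision goalpost. The prior, goalpost, and variability below are illustrative assumptions, not the article's worked example.

        import numpy as np

        rng = np.random.default_rng(7)
        prior_mean, prior_sd, sigma, goalpost = 0.3, 0.2, 1.0, 0.2

        for n in (20, 40, 80):
            true_effect = rng.normal(prior_mean, prior_sd, 100_000)
            # observed two-arm mean difference, n subjects per arm
            est = rng.normal(true_effect, sigma * np.sqrt(2.0 / n))
            print(f"n = {n}: P(estimate > goalpost) = {np.mean(est > goalpost):.3f}")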

  17. The influence of secondary processing on the structural relaxation dynamics of fluticasone propionate.

    PubMed

    Depasquale, Roberto; Lee, Sau L; Saluja, Bhawana; Shur, Jagdeep; Price, Robert

    2015-06-01

This study investigated the structural relaxation of micronized fluticasone propionate (FP) under different lagering conditions and its influence on aerodynamic particle size distribution (APSD) of binary and tertiary carrier-based dry powder inhaler (DPI) formulations. Micronized FP was lagered under low humidity (LH: 25°C, 33% RH [relative humidity]) and high humidity (HH: 25°C, 75% RH) for 30, 60, and 90 days, respectively, and high temperature (HT: 60°C, 44% RH) for 14 days. Physicochemical, surface interfacial properties via cohesive-adhesive balance (CAB) measurements and amorphous disorder levels of the FP samples were characterized. Particle size, surface area, and rugosity suggested minimal morphological changes of the lagered FP samples, with the exception of the 90-day HH (HH90) sample. HH90 FP samples appeared to undergo surface reconstruction with a reduction in surface rugosity. LH and HH lagering reduced the levels of amorphous content over 90-day exposure, which influenced the CAB measurements with lactose monohydrate and salmeterol xinafoate (SX). CAB analysis suggested that LH and HH lagering led to different interfacial interactions with lactose monohydrate but an increasing adhesive affinity with SX. HT lagering led to no detectable levels of the amorphous disorder, resulting in an increase in the adhesive interaction with lactose monohydrate. APSD analysis suggested that the fine particle mass of FP and SX was affected by the lagering of the FP. In conclusion, environmental conditions during the lagering of FP may have a profound effect on physicochemical and interfacial properties as well as product performance of binary and tertiary carrier-based DPI formulations.

  18. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
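
    As a point of contrast with the exact results, the classical F-based interval for Cronbach's alpha (the Feldt interval, which assumes compound symmetry) is easy to compute; the paper's general-covariance distributions are precisely what this simple approximation ignores. The simulated data and its covariance structure are illustrative assumptions.

        import numpy as np
        from scipy.stats import f

        def cronbach_alpha(data):
            """data: subjects x items array."""
            k = data.shape[1]
            item_vars = data.var(axis=0, ddof=1).sum()
            total_var = data.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(3)
        n, k = 100, 5
        # common factor plus noise gives positively correlated items
        scores = rng.normal(size=(n, 1)) + 0.8 * rng.normal(size=(n, k))
        a = cronbach_alpha(scores)
        df1, df2 = n - 1, (n - 1) * (k - 1)
        lo = 1 - (1 - a) * f.ppf(0.975, df1, df2)
        hi = 1 - (1 - a) * f.ppf(0.025, df1, df2)
        print(f"alpha = {a:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")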

  19. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    PubMed Central

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. Methods We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF’s State of the World’s Children in 1968–1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968–1989 and 2008) with minimal reductions in sensitivity and negative predictive value. Conclusions LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance. PMID:23378151
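
    The classical LQAS design that B-LQAS builds on can be sketched as a search over sample size n and decision rule d that controls both misclassification risks with the binomial distribution; B-LQAS then additionally weights these risks by a prior over coverage. The 80%/50% coverage thresholds and 10% risks below are illustrative assumptions.

        from scipy.stats import binom

        def lqas_plan(p_high=0.80, p_low=0.50, alpha=0.10, beta=0.10, n_max=200):
            """Smallest (n, d): classify 'acceptable' when successes >= d."""
            for n in range(1, n_max + 1):
                for d in range(n + 1):
                    risk_a = binom.cdf(d - 1, n, p_high)      # high coverage judged low
                    risk_b = 1 - binom.cdf(d - 1, n, p_low)   # low coverage judged high
                    if risk_a <= alpha and risk_b <= beta:
                        return n, d
            return None

        print(lqas_plan())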

  20. A screening of persistent organohalogenated contaminants in hair of East Greenland polar bears.

    PubMed

    Jaspers, Veerle L B; Dietz, Rune; Sonne, Christian; Letcher, Robert J; Eens, Marcel; Neels, Hugo; Born, Erik W; Covaci, Adrian

    2010-10-15

In this pilot study, we report on levels of persistent organohalogenated contaminants (OHCs) in hair of polar bears (Ursus maritimus) from East Greenland sampled between 1999 and 2001. To our knowledge, this is the first study on the validation of polar bear hair as a non-invasive matrix representative of concentrations and profiles in internal organs and blood plasma. Because of low sample weights (13-140 mg), only major bioaccumulative OHCs were detected above the limit of quantification: five polychlorinated biphenyl (PCB) congeners (CB 99, 138, 153, 170 and 180), one polybrominated diphenyl ether (PBDE) congener (BDE 47), oxychlordane, trans-nonachlor and β-hexachlorocyclohexane. The PCB profile in hair was similar to that of internal tissues (i.e. adipose, liver, brain and blood), with CB 153 and 180 as the major congeners in all matrices. A gender difference was found for concentrations in hair relative to concentrations in internal tissues. Females (n=6) were found to display negative correlations, while males (n=5) showed positive correlations, although the p-values were not significant. These negative correlations in females may reflect seasonal OHC mobilisation from peripheral adipose tissue due to, for example, lactation and fasting. The lack of significance in most correlations may be due to small sample sizes and seasonal variability of concentrations in soft tissues. Further research with larger sample weights and sizes is therefore necessary to draw more definitive conclusions on the usefulness of hair for biomonitoring OHCs in polar bears and other fur mammals. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Ultrasound detection of simulated intra-ocular foreign bodies by minimally trained personnel.

    PubMed

    Sargsyan, Ashot E; Dulchavsky, Alexandria G; Adams, James; Melton, Shannon; Hamilton, Douglas R; Dulchavsky, Scott A

    2008-01-01

To test the ability of non-expert ultrasound operators of divergent backgrounds to detect the presence, size, location, and composition of foreign bodies in an ocular model. High school students (N = 10) and NASA astronauts (N = 4) completed a brief ultrasound training session which focused on basic ultrasound principles and the detection of foreign bodies. The operators used portable ultrasound devices to detect foreign objects of varying location, size (0.5-2 mm), and material (glass, plastic, metal) in a gelatinous ocular model. Operator findings were compared with the known foreign-object parameters and with the findings of expert operators (N = 2) to determine accuracy across and between groups. Ultrasound had high sensitivity (astronauts 85%, students 87%, and experts 100%) and specificity (astronauts 81%, students 83%, and experts 95%) for the detection of foreign bodies. All user groups were able to accurately detect the presence of foreign bodies in this model (astronauts 84%, students 81%, and experts 97%). Astronaut and student sensitivity results for material (64% vs. 48%), size (60% vs. 46%), and position (77% vs. 64%) were not statistically different. Experts' results for material (85%), size (90%), and position (98%) were higher; however, the small sample size precluded statistical conclusions. Ultrasound can be used by operators with varying training to detect the presence, location, and composition of intraocular foreign bodies with high sensitivity, specificity, and accuracy.

  2. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
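
    The simulation design translates almost directly into code. The sketch below reproduces its shape for two of the four tests (Shapiro-Wilk and D'Agostino-Pearson); the lognormal shape parameter is an illustrative assumption, and neutral labels are used in place of the paper's sensitivity/specificity convention.

        import numpy as np
        from scipy.stats import shapiro, normaltest

        rng = np.random.default_rng(5)

        def rates(n, reps=100, alpha=0.05):
            gaussian = [rng.normal(size=n) for _ in range(reps)]
            lognorm = [rng.lognormal(sigma=0.5, size=n) for _ in range(reps)]
            for name, test in (("Shapiro-Wilk", shapiro),
                               ("D'Agostino-Pearson", normaltest)):
                kept = np.mean([test(x)[1] >= alpha for x in gaussian])
                flagged = np.mean([test(x)[1] < alpha for x in lognorm])
                print(f"n={n:2d} {name}: Gaussian kept {kept:.2f}, "
                      f"lognormal flagged {flagged:.2f}")

        rates(30)
        rates(60)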

  3. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  4. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
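
    The resampling logic is easy to mimic: fix a "population" of 360 patients with a small true lesion-deficit association, bootstrap at several of the study's sample sizes, and watch the effect size estimates and p values fluctuate. The true R² of 0.05 and the use of a simple correlation are illustrative assumptions, not the study's voxel-based pipeline.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(11)
        N, r_true = 360, np.sqrt(0.05)               # true R^2 = 0.05 (assumed)
        lesion = rng.normal(size=N)
        deficit = r_true * lesion + np.sqrt(1 - r_true**2) * rng.normal(size=N)

        for n in (30, 60, 90, 180, 360):
            r2, p = [], []
            for _ in range(2000):
                idx = rng.choice(N, size=n, replace=True)
                r, pv = pearsonr(lesion[idx], deficit[idx])
                r2.append(r * r)
                p.append(pv)
            r2 = np.array(r2)
            print(f"n={n:3d}: median R^2 {np.median(r2):.3f}, "
                  f"90% range ({np.quantile(r2, .05):.3f}, {np.quantile(r2, .95):.3f}), "
                  f"P(p < .05) = {np.mean(np.array(p) < .05):.2f}")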

  5. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (theta(female), theta(male)); and "constrained," requiring theta(female) = theta(male). We then examined the ΔELOD (defined as the difference between the maximized unconstrained and constrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p° as that value of p that maximizes ΔELOD. We determined that, surprisingly, p° does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) into the maximum likelihood estimates of theta(female) and theta(male), even though the ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
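
    For phase-known, fully informative meioses the ΔELOD idea reduces to a few lines: score each sex at its own theta (unconstrained) versus at the pooled theta (constrained), and divide a target lod into the per-meiosis gap. The theta values below and the lod threshold of 0.83 (the chi-square(1) = 3.84 convention) are illustrative assumptions, not the paper's tables.

        from math import ceil, log10

        def e_loglik(theta_true, theta_model):
            """Expected per-meiosis log10-likelihood; baseline terms cancel in
            the unconstrained-minus-constrained difference."""
            return (theta_true * log10(theta_model)
                    + (1 - theta_true) * log10(1 - theta_model))

        def delta_elod(theta_f, theta_m, p=0.5):
            """p = proportion of paternally informative meioses."""
            pooled = p * theta_m + (1 - p) * theta_f
            unconstrained = (p * e_loglik(theta_m, theta_m)
                             + (1 - p) * e_loglik(theta_f, theta_f))
            constrained = (p * e_loglik(theta_m, pooled)
                           + (1 - p) * e_loglik(theta_f, pooled))
            return unconstrained - constrained

        d = delta_elod(theta_f=0.30, theta_m=0.05)
        print(f"per-meiosis Delta-ELOD {d:.4f}; meioses for lod 0.83: {ceil(0.83 / d)}")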

  6. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

Equivalence assessment between a reference and a test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest sample size required across endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation, under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without and with correlation adjustment, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
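
    For a single endpoint, the normal-approximation power of TOST, and the product rule for uncorrelated endpoints, look as follows; the exact, correlation-adjusted power is the article's contribution and is not reproduced here. Margin, true difference, SD, and n are illustrative assumptions.

        from scipy.stats import norm

        def tost_power(n_per_arm, delta, margin, sd, alpha=0.05):
            """Approximate power of TOST for a true mean difference delta
            within +/- margin, parallel two-arm design."""
            se = sd * (2.0 / n_per_arm) ** 0.5
            za = norm.ppf(1 - alpha)
            return max(0.0, norm.cdf((margin - delta) / se - za)
                            + norm.cdf((margin + delta) / se - za) - 1)

        single = tost_power(n_per_arm=60, delta=0.05, margin=0.25, sd=0.4)
        print(f"one endpoint: {single:.3f}; "
              f"two uncorrelated endpoints: {single ** 2:.3f}")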

  7. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
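
    The elements of such a calculation can be sketched in a few lines. The formulae below are illustrative, not the tutorial's own: they apply a cluster-crossover design effect of the assumed form 1 + (m - 1)·ICC_wp - m·ICC_bp (m patients per ICU per period, with within-period and between-period intracluster correlations) to a standard two-proportion sample size, for an unstratified two-period design; all numerical inputs are hypothetical.

      import math
      from scipy.stats import norm

      def crxo_icus(p1, p2, m, icc_wp, icc_bp, alpha=0.05, power=0.80):
          """Rough number of ICUs for a two-intervention, two-period,
          cross-sectional CRXO trial comparing proportions p1 vs p2."""
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          pbar = (p1 + p2) / 2
          n_per_arm = 2 * (za + zb) ** 2 * pbar * (1 - pbar) / (p1 - p2) ** 2
          design_effect = 1 + (m - 1) * icc_wp - m * icc_bp
          total = 2 * n_per_arm * design_effect
          return math.ceil(total / (2 * m))  # each ICU contributes two periods of m patients

      print(crxo_icus(p1=0.10, p2=0.08, m=150, icc_wp=0.03, icc_bp=0.02))  # ~53 ICUs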

  8. Innovative Recruitment Using Online Networks: Lessons Learned From an Online Study of Alcohol and Other Drug Use Utilizing a Web-Based, Respondent-Driven Sampling (webRDS) Strategy

    PubMed Central

    Bauermeister, José A.; Zimmerman, Marc A.; Johns, Michelle M.; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-01-01

    Objective: We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18–24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). Method: We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. Results: We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. Conclusions: WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods. PMID:22846248

  9. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
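
    As a concrete instance of an a priori calculation, the sketch below solves for the per-group n of a two-sample t-test; the effect size, alpha, and power are conventional illustrative values, not figures from the article.

      from statsmodels.stats.power import TTestIndPower

      # A priori sample size: two-sample t-test, medium effect (Cohen's d = 0.5),
      # two-sided alpha = 0.05, target power = 0.80.
      n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
      print(round(n_per_group))  # about 64 per group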

  10. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
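
    A rough planning translation of those headline numbers: if RE is the relative efficiency of unequal versus equal cluster sizes, the equal-size cluster requirement is inflated by 1/RE, and the first-order MQL variance is converted to second-order PQL with the reported conversion factor. This sketch is built only around the abstract's summary figures, not the paper's formulas.

      import math

      def clusters_with_varying_sizes(k_equal, relative_efficiency, pql_factor=1.25):
          """Inflate the equal-cluster-size requirement k_equal by the efficiency
          loss from varying cluster sizes and by the MQL-to-PQL variance
          conversion factor (at most 1.25 according to the study)."""
          return math.ceil(k_equal * pql_factor / relative_efficiency)

      # RE around 0.88 corresponds to the reported "14 per cent more clusters" rule
      print(clusters_with_varying_sizes(k_equal=30, relative_efficiency=0.88))  # 43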

  11. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), Environmental Protection Agency (Continued), Air Programs (Continued), Regulation of Fuels and Fuel Additives, Attest Engagements, § 80.127 Sample size guidelines. In performing the...

  12. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.

  13. Milling of rice grains: effects of starch/flour structures on gelatinization and pasting properties.

    PubMed

    Hasjim, Jovin; Li, Enpeng; Dhital, Sushil

    2013-01-30

    Starch gelatinization and flour pasting properties were determined and correlated with four different levels of starch structure in rice flour, i.e. flour particle size, degree of damaged starch granules, whole molecular size, and molecular branching structure. Onset starch-gelatinization temperatures were not significantly different among all flour samples, but peak and conclusion starch-gelatinization temperatures were significantly different and were strongly correlated with the flour particle size, indicating that rice flour with larger particle size presents a greater barrier to heat transfer. There were slight differences in the enthalpy of starch gelatinization, which are likely associated with the disruption of crystalline structure in starch granules by the milling processes. Flours with volume-median diameter ≥56 μm did not show a defined peak viscosity in the RVA viscogram, possibly due to the presence of native protein and/or cell-wall structure stabilizing the swollen starch granules against the rupture caused by shear during heating. Furthermore, the RVA final viscosity of flour was strongly correlated with the degree of damage to starch granules, suggesting the contribution of granular structure, possibly in swollen form. The results of this study allow improvements in the manufacture of rice flour and in the criteria for selecting rice flour with desirable gelatinization and pasting properties. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Relationship of serum CEA levels to tumour size and CEA content in nude mice bearing colonic-tumour xenografts.

    PubMed Central

    Lewis, J. C.; Keep, P. A.

    1981-01-01

    The relationship of serum carcinoembryonic antigen (CEA) levels to tumour size and antigen content was studied in nude mice bearing well differentiated, mucinous human colonic-tumour xenografts. Blood samples were taken from normal nude mice and others bearing xenografts, whose size had been calculated from in vivo measurements; saline and KCl extracts were made of a proportion of these tumours. Sera and tissue extracts were assayed for CEA activity by double-antibody radioimmunoassay. Extracts were also made from the livers and spleens of tumour-bearing and normal nude mice. All normal sera and 78% of sera from tumour-bearing animals had CEA values less than 11.4 ng/ml. No clear correlation was found between serum CEA levels greater than 11.4 ng/ml and tumour size or weight, or between serum CEA and tumour CEA concentrations or total CEA burden. The concentration of CEA in those tumours tested varied from 1 to 22 microgram/g. Our results confirm and extend the conclusions reached by others (Stragand et al., 1980) studying the significance of serum CEA levels with xenograft model systems. The complexity of factors contributing to circulating CEA is discussed in the light of our findings. PMID:7284235

  15. A facile approach to manufacturing non-ionic surfactant nanodipsersions using proniosome technology and high-pressure homogenization.

    PubMed

    Najlah, Mohammad; Hidayat, Kanar; Omer, Huner K; Mwesigwa, Enosh; Ahmed, Waqar; AlObaidy, Kais G; Phoenix, David A; Elhissi, Abdelbary

    2015-03-01

    In this study, a niosome nanodispersion was manufactured using high-pressure homogenization following the hydration of proniosomes. Using beclometasone dipropionate (BDP) as a model drug, the characteristics of the homogenized niosomes were compared with vesicles prepared via the conventional approach of probe-sonication. Particle size, zeta potential, and drug entrapment efficiency were similar for both size reduction mechanisms. However, high-pressure homogenization was much more efficient than sonication in terms of homogenization output rate and avoidance of sample contamination, offering greater potential for large-scale manufacturing of niosome nanodispersions. For example, high-pressure homogenization produced small niosomes (209 nm) in a short, single size-reduction step (6 min), compared with the time-consuming sonication process (237 nm in >18 min); the corresponding BDP entrapment efficiencies were 29.65% ± 4.04 and 36.4% ± 2.8, respectively. In addition, the output rate of high-pressure homogenization was 10 ml/min, compared with 0.83 ml/min for the sonication protocol. In conclusion, a facile, applicable, and highly efficient approach for preparing niosome nanodispersions has been established using proniosome technology and high-pressure homogenization.

  16. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  17. Pilot study on infant swimming classes and early motor development.

    PubMed

    Dias, Jorge A B de S; Manoel, Edison de J; Dias, Roberta B de M; Okazaki, Victor H A

    2013-12-01

    Alberta Infant Motor Scale (AIMS) scores were examined before and after four months of swimming classes in 12 babies (ages 7 to 9 mo.) assigned to Experimental (n = 6) and Control (n = 6) groups matched on age and developmental status. Infants from both groups improved their developmental status from pre- to post-test; the Experimental group improved on mean percentile rank. The sample size and the discriminative power of the AIMS do not allow conclusive judgments on these group differences, hence on the effect of infant swimming classes. Nevertheless, a number of recommendations are made for future studies on the effect of swimming classes on infant motor development.

  18. Room Temperature Magnetic Behavior In Nanocrystalline Ni-Doped Zro2 By Microwave-Assisted Polyol Synthesis

    NASA Astrophysics Data System (ADS)

    Parimita Rath, Pragyan; Parhi, Pankaj Kumar; Ranjan Panda, Sirish; Priyadarshini, Barsharani; Ranjan Sahoo, Tapas

    2017-08-01

    This article deals with a microwave-assisted polyol method demonstrating a low-temperature (<250°C) route to prepare the high-temperature cubic zirconia phase. The powder XRD pattern shows broad diffraction peaks suggesting nanometric particle size. The magnetic behavior of 1-5 at% Ni-doped samples shows a threshold for substitutionally induced room temperature ferromagnetism up to 3 at% of Ni. TGA data reveal that Ni-doped ZrO2 polyol precursors decompose exothermically below 300°C. IR data confirm the reduction of Zr(OH)4 precipitates to ZrO2, in agreement with the conclusions drawn from the TGA analysis.

  19. Sexual Functioning and Behavior of Men with Body Dysmorphic Disorder Concerning Penis Size Compared with Men Anxious about Penis Size and with Controls: A Cohort Study

    PubMed Central

    Veale, David; Miles, Sarah; Read, Julie; Troglia, Andrea; Wylie, Kevan; Muir, Gordon

    2015-01-01

    Introduction Little is known about the sexual functioning and behavior of men anxious about the size of their penis, or about the means they might use to try to alter its size. Aim To compare sexual functioning and behavior in men with body dysmorphic disorder (BDD) concerning penis size, men with small penis anxiety (SPA, without BDD), and a control group of men without such concerns. Methods An opportunistic sample of 90 men from the community was recruited and divided into three groups: BDD (n = 26), SPA (n = 31), and controls (n = 33). Main Outcome Measures The Index of Erectile Function (IEF); sexual identity and history; and interventions to alter the size of the penis. Results Men with BDD, compared with controls, had reduced erectile function, orgasmic function, intercourse satisfaction, and overall satisfaction on the IEF. Men with SPA, compared with controls, had reduced intercourse satisfaction. There were no differences in sexual desire or in the frequency of intercourse or masturbation across the three groups. Men with BDD and SPA were more likely than controls to attempt to alter the shape or size of their penis (for example, by jelqing, vacuum pumps, or stretching devices), with poor reported success. Conclusion Men with BDD are more likely to have erectile dysfunction and less satisfaction with intercourse than controls but maintain their libido. Further research is required to develop and evaluate a psychological intervention for such men with adequate outcome measures. PMID:26468378

  20. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and the sample size are two key factors that non-trivially influence prediction accuracy. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) was extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability increased exponentially with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in predictions using retest fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
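
    To make the sub-sampling idea concrete, here is a small synthetic sketch in the spirit of the study (it is not the HCP analysis): random features stand in for resting-state functional connectivity, accuracy is the correlation between observed scores and cross-validated predictions, and two of the six algorithms are compared across three sample sizes.

      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.linear_model import Lasso, Ridge
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X_all = rng.normal(size=(700, 1000))  # stand-in for rsFC features
      y_all = X_all[:, :10].sum(axis=1) + rng.normal(scale=3.0, size=700)

      for n in (50, 200, 700):              # sub-sampled "sample sizes"
          idx = rng.choice(700, size=n, replace=False)
          X, y = X_all[idx], y_all[idx]
          for model in (Ridge(alpha=1.0), Lasso(alpha=0.1)):
              pred = cross_val_predict(model, X, y, cv=5)
              r, _ = pearsonr(y, pred)      # prediction accuracy as a correlation
              print(n, type(model).__name__, round(r, 2))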

  1. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
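
    The calculation reduces to a one-sample precision requirement, and a normal-approximation version is easy to sketch; the paper's t-based iteration gives slightly larger answers (24 rather than roughly 22 for a ±15% error). Only the COV of 25% is taken from the abstract; everything else is a generic approximation.

      import math
      from scipy.stats import norm

      def n_for_ed50(cov=0.25, allowable_error=0.15, alpha=0.05, power=0.80):
          """Subjects needed to estimate the mean ED50 to within
          +/- allowable_error (as a fraction of the mean), normal approximation."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil((z * cov / allowable_error) ** 2)

      print(n_for_ed50())                      # ~22; the paper's t-based answer is 24
      print(n_for_ed50(allowable_error=0.12))  # tighter error -> larger n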

  2. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

    Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.

  3. Using flow cytometry to estimate pollen DNA content: improved methodology and applications

    PubMed Central

    Kron, Paul; Husband, Brian C.

    2012-01-01

    Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81% of species and the higher best practice standards for cell cycle analysis in 51%. In 41% of species we met the most stringent criterion of screening 10,000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1.5% or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2.5%. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID:22875815

  4. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

    This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…

  5. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  6. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (ΔP = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  7. How Much Can Remotely-Sensed Natural Resource Inventories Benefit from Finer Spatial Resolutions?

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Xu, Q.; McRoberts, R. E.; Ståhl, G.; Greenberg, J. A.

    2017-12-01

    For remote sensing facilitated natural resource inventories, the effects of spatial resolution in the form of pixel size and the effects of subpixel information on estimates of population parameters were evaluated by comparing results obtained using Landsat 8 and RapidEye auxiliary imagery. The study area was in Burkina Faso, and the variable of interest was the stem volume (m³/ha) convertible to the woodland aboveground biomass. A sample consisting of 160 field plots was selected and measured from the population following a two-stage sampling design. Models were fit using weighted least squares; the population mean, μ, and the variance of the estimator of the population mean, Var(μ̂), were estimated in two inferential frameworks, model-based and model-assisted, and compared; for each framework, Var(μ̂) was estimated both analytically and empirically. Empirical variances were estimated with bootstrapping that takes clustering effects into account when resampling. The primary results were twofold. First, for the effects of spatial resolution and subpixel information, four conclusions are relevant: (1) finer spatial resolution imagery indeed contributes to greater precision for estimators of population parameters, but this increase is slight, at most about 20%, considering that RapidEye data are 36 times finer in resolution than Landsat 8 data; (2) subpixel information on texture is marginally beneficial when it comes to making inference for populations of large areas; (3) cost-effectiveness is more favorable for the free-of-charge Landsat 8 imagery than for RapidEye imagery; and (4) for a given plot size, candidate remote sensing auxiliary datasets are more cost-effective when their spatial resolutions are similar to the plot size than with much finer alternatives. Second, for the comparison between estimators, three conclusions are relevant: (1) model-based variance estimates are consistent with each other and about half as large as stabilized model-assisted estimates, suggesting superior effectiveness of model-based inference over model-assisted inference; (2) bootstrapping is an effective alternative to analytical variance estimators; and (3) prediction accuracy expressed by RMSE is useful for screening candidate models to be used for population inferences.
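
    The empirical variance step can be sketched as a cluster bootstrap: first-stage units are resampled whole, so the clustering of the two-stage design is respected. The function below is a generic illustration with made-up cluster means, not the authors' implementation.

      import numpy as np

      def cluster_bootstrap_var(cluster_means, n_boot=2000, seed=1):
          """Empirical variance of the estimator of the population mean,
          resampling whole clusters (first-stage units) with replacement."""
          rng = np.random.default_rng(seed)
          k = len(cluster_means)
          boots = [np.mean(rng.choice(cluster_means, size=k, replace=True))
                   for _ in range(n_boot)]
          return float(np.var(boots, ddof=1))

      print(cluster_bootstrap_var([102.0, 87.5, 120.3, 95.1, 110.8, 99.4]))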

  8. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
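
    A toy version of the plain sample-size model's invariance prediction: each item's squared sensitivity is proportional to the samples it receives, so with a fixed total budget the sum of squared sensitivities does not change with display size. The budget value below is arbitrary.

      # Equal allocation of a fixed sample budget across m display items:
      # d' scales with the square root of an item's samples, so m * d'^2
      # recovers the full budget whatever the display size.
      total_samples = 120.0
      for m in (2, 3, 4, 6):
          d_prime = (total_samples / m) ** 0.5
          print(m, round(m * d_prime ** 2, 1))  # invariant: always 120.0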

  9. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
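
    The abstract does not name the non-parametric test; a two-sample Kolmogorov-Smirnov test on the cumulative aspect-ratio distributions is one conventional choice, sketched here on simulated measurements.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(7)
      # Hypothetical aspect-ratio measurements from two nanorod samples
      ar_a = rng.normal(loc=3.6, scale=0.4, size=400)
      ar_b = rng.normal(loc=3.9, scale=0.6, size=400)
      stat, p = ks_2samp(ar_a, ar_b)
      print(f"KS statistic {stat:.3f}, p-value {p:.2e}")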

  10. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
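
    The quoted reductions can be reproduced under one common parameterization. Assuming that, relative to a two-sample t-test on a single follow-up, the ANCOVA variance multiplier under compound symmetry with correlation rho and k follow-up measures is (1 + (k - 1)·rho)/k - rho², the most conservative (largest) multiplier occurs at rho = (k - 1)/(2k). This is a reconstruction of the arithmetic behind the 44/56/61% figures, not code from the paper.

      def ancova_multiplier(k, rho):
          """Variance multiplier for repeated-measures ANCOVA with one baseline
          and k follow-up measures under compound symmetry."""
          return (1 + (k - 1) * rho) / k - rho ** 2

      for k in (2, 3, 4):
          rho_worst = (k - 1) / (2 * k)        # maximizes the multiplier
          reduction = 1 - ancova_multiplier(k, rho_worst)
          print(k, round(100 * reduction, 1))  # 43.8, 55.6, 60.9 -> "44%, 56%, 61%"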

  11. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

    In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must make a decision between prioritizing overall representativeness and prioritizing the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample for all strata; and allocation proportional to the natural logarithm, cube root, and square root of the stratum population. This study considered that, for a preset sample size, the dispersion index of the stratum sampling fractions is correlated with the error of the population estimator, while the dispersion index of the stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in turn, equal sample for each stratum; proportional to the logarithm; proportional to the cube root; proportional to the square root; and proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
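
    Each allocation strategy is just a different size transform normalized into weights; a compact sketch follows, with invented stratum populations and total sample.

      import numpy as np

      def allocate(total_n, stratum_pops, rule):
          """Split a preset total sample across strata: 'equal', or proportional
          to the log, cube root, square root, or size of the stratum population."""
          pops = np.asarray(stratum_pops, dtype=float)
          weights = {"equal": np.ones_like(pops),
                     "log": np.log(pops),
                     "cbrt": pops ** (1 / 3),
                     "sqrt": np.sqrt(pops),
                     "proportional": pops}[rule]
          return np.round(total_n * weights / weights.sum()).astype(int)

      pops = [200_000, 50_000, 8_000, 1_500]
      for rule in ("equal", "log", "cbrt", "sqrt", "proportional"):
          print(rule, allocate(1200, pops, rule))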

  12. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

    PVDF films have been fabricated using a roll hot press. Sample preparation was carried out at nine different temperatures to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. Diffraction patterns of the samples were obtained by X-ray diffraction, and crystallite sizes were then calculated from the diffraction patterns using the Scherrer equation. The calculated crystallite sizes increased from 7.2 nm to 20.54 nm as the temperature increased from 130 °C to 170 °C. These results show that increasing temperature also increases the crystallite size of the sample. This happens because higher temperatures raise the degree of crystallization of the PVDF film samples, so the crystallite size also increases. This condition indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, as has been studied by Nakagawa.
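
    For reference, the Scherrer step maps peak broadening to crystallite size as D = K·λ/(β·cos θ). The snippet assumes Cu Kα radiation and a shape factor K = 0.9, with a made-up peak; the paper's own peak parameters are not given in the abstract.

      import math

      def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
          """Crystallite size (nm) from the FWHM beta of an XRD peak located
          at the given 2-theta angle (both in degrees)."""
          beta = math.radians(beta_deg)
          theta = math.radians(two_theta_deg / 2)
          return k * wavelength_nm / (beta * math.cos(theta))

      print(round(scherrer_size(beta_deg=0.45, two_theta_deg=20.3), 1))  # ~17.9 nm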

  13. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met the requirement recommended below. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.

  14. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
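
    The "mean sample-weighted effect size" is a sample-size-weighted average of the per-sample correlations; a minimal version, with invented inputs, is below.

      import numpy as np

      def sample_weighted_r(rs, ns):
          """Sample-size-weighted mean correlation across independent samples."""
          rs, ns = np.asarray(rs, dtype=float), np.asarray(ns, dtype=float)
          return float((ns * rs).sum() / ns.sum())

      print(round(sample_weighted_r([0.05, 0.09, 0.07], [10_000, 20_000, 14_707]), 3))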

  15. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review

    PubMed Central

    Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.

    2014-01-01

    Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. PMID:24824338

  18. Photoluminescence of Gallium Phosphide-Based Nanostructures with Germanium Quantum Dots, Grown by Liquid-Phase Epitaxy

    NASA Astrophysics Data System (ADS)

    Maronchuk, I. I.; Sanikovich, D. D.; Velchenko, A. A.

    2017-11-01

    We have used liquid-phase epitaxy with pulsed substrate cooling using two structural designs to grow samples of nanoheteroepitaxial structures with Ge quantum dots in a GaP matrix on Si substrates. We have measured the photoluminescence spectra of the samples at temperatures of 77 K and 300 K with excitation by laser emission at λ = 4880 Å and 5145 Å. We draw conclusions concerning the factors influencing the spectrum and intensity of emission for nanostructures with quantum dots. It was found that in order to reduce nonradiative recombination in multilayer p-n structures, we need to create quantum dot arrays inside p and n regions rather than in the central portion of the depletion layer of the p-n junction. We show that the theoretical energies for Ge quantum dots of the calculated sizes are comparable with the energies of their photoluminescence maxima.

  19. Liquid chromatographic analysis of a formulated ester from a gas-turbine engine test

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.; Morales, W.

    1983-01-01

    Size exclusion chromatography (SEC) utilizing μ-Bondagel and μ-Styragel columns with a tetrahydrofuran mobile phase was used to determine the chemical degradation of lubricant samples from a gas-turbine engine test. A MIL-L-27502 candidate ester-based lubricant was run in a J57-29 engine at a bulk oil temperature of 216 °C. In general, the analyses indicated a progressive loss of primary ester, additive depletion, and formation of higher molecular weight material. An oil sample taken at the conclusion of the test showed a reversal of this trend because of large additions of new oil. The high-molecular-weight product from the degraded ester absorbed strongly in the ultraviolet region at 254 nanometers, indicating the presence of chromophoric groups. An analysis of a similar ester lubricant from a separate high-temperature bearing test yielded qualitatively similar results.

  20. Use of COTS Batteries on ISS and Shuttle

    NASA Technical Reports Server (NTRS)

    Jeevarajan, Judith A.

    2004-01-01

    This presentation focuses on COTS battery testing for energy content, toxicity, hazards, failure modes, and controls for different battery chemistries. It also discusses current program requirements, challenges with COTS batteries in manned vehicles, COTS methodology, and JSC test details, and gives a list of incidents from consumer protection safety commissions. The battery test process involved testing new batteries for engineering certification, qualification of batteries, flight acceptance, cell and battery levels, environment, performance, and abuse. The conclusions and recommendations were that high risk is undertaken with the use of COTS batteries; that hazard control verification is required to allow the use of these batteries on manned space flights; that failures during use cannot be understood if different failure scenarios are not tested on the ground; and that testing is performed on small sample numbers due to restrictions on cost and time. It is recommended that larger sample sizes be tested to gain more confidence in the operation of the hazard controls.

  1. Responsibility and burden from the perspective of seniors’ family caregivers: a qualitative study in Shanghai, China

    PubMed Central

    Zeng, Li; Zhu, Xiaoping; Meng, Xianmei; Mao, Yafen; Wu, Qian; Shi, Yan; Zhou, Lanshu

    2014-01-01

    Objectives: This study aimed to explore the experience of seniors' family caregivers regarding responsibility, burden, and support needs during caregiving in Shanghai, China. Materials and methods: An exploratory, descriptive, qualitative design was used, and semi-structured interviews were conducted. A convenience sample of 11 participants from two community service centers in Shanghai was recruited; data saturation guided the sample size. The Colaizzi method of empirical phenomenology was used for interviewing and for analyzing the data obtained from the 11 caregivers. Results: Three major themes were found: "It is hard work"; "It is my responsibility"; and "Social support is not enough." Conclusion: The findings of the study are practical and helpful for health care providers in developing appropriate caregiver support services, balancing the responsibility and burden of caregivers, and considering the factors influencing the utility of support services. PMID:25126186

  2. Mineral Element Contents in Commercially Valuable Fish Species in Spain

    PubMed Central

    Peña-Rivas, Luis; Ortega, Eduardo; López-Martínez, Concepción; Olea-Serrano, Fátima; Lorenzo, Maria Luisa

    2014-01-01

    The aim of this study was to measure selected metal concentrations in Trachurus trachurus, Trachurus picturatus, and Trachurus mediterraneus, which are widely consumed in Spain. Principal component analysis suggested that the variable Cr was mainly responsible for the identification of T. trachurus, the variables As and Sn for T. mediterraneus, and the remaining variables for T. picturatus. This well-defined discrimination between fish species provided by their mineral element profiles allows us to distinguish them on the basis of their metal content. Based on the samples collected, and recognizing the inferential limitation of the sample size of this study, the metal concentrations found are below the proposed limit values for human consumption. However, it should be taken into consideration that there are other dietary sources of these metals. In conclusion, metal contents in the fish species analyzed are acceptable for human consumption from a nutritional and toxicity point of view. PMID:24895678

  3. Effect of chronic low level manganese exposure on postural balance: A pilot study of residents in southwest Ohio

    PubMed Central

    Standridge, J. S.; Bhattacharya, Amit; Succop, Paul; Cox, Cyndy; Haynes, Erin

    2009-01-01

    OBJECTIVE The objective of this study was to determine the effect of non-occupational exposure to manganese on postural balance. METHODS Residents living near a ferromanganese refinery provided hair and blood samples after postural balance testing. The relationship between hair manganese and postural balance was analyzed with logistic regression. Following covariate adjustment, postural balance was compared with control data by analysis of covariance. RESULTS Mean hair manganese was 4.4 µg/g. A significantly positive association was found between hair manganese and sway area (EO, p=0.05; EC, p=0.04) and sway length (EO, p=0.05; EC, p=0.04). Residents' sway measures were significantly greater than those of controls in 5 out of 8 postural balance outcomes. CONCLUSION Preliminary findings suggest subclinical impairment in postural balance among residents chronically exposed to ambient Mn. A prospective study with a larger sample size is warranted. PMID:19092498

  4. Delayed reward discounting and addictive behavior: a meta-analysis

    PubMed Central

    Amlung, Michael T.; Few, Lauren R.; Ray, Lara A.; Sweet, Lawrence H.; Munafò, Marcus R.

    2011-01-01

    Rationale Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. Objectives The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Methods Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). Results From the total comparisons identified, a small magnitude effect was evident (d=.15; p<.00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d=.58; p<.00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d=.61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. Conclusions These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed. PMID:21373791

  5. Forensic analysis of laser printed ink by X-ray fluorescence and laser-excited plume fluorescence.

    PubMed

    Chu, Po-Chun; Cai, Bruno Yue; Tsoi, Yeuk Ki; Yuen, Ronald; Leung, Kelvin S Y; Cheung, Nai-Ho

    2013-05-07

    We demonstrated a minimally destructive two-tier approach for multielement forensic analysis of laser-printed ink. The printed document was first screened using a portable X-ray fluorescence (XRF) probe. If the results were not conclusive, a laser microprobe was then deployed. The laser probe was based on a two-pulse scheme: the first laser pulse ablated a thin layer of the printed ink; the second laser pulse at 193 nm induced multiple analytes in the desorbed ink to fluoresce. We analyzed four brands of black toners. The toners were printed on paper in the form of patches or letters or overprinted on another ink. The XRF probe could sort the four brands if the printed letters were larger than font size 20. It could not tell the printing sequence in the case of overprints. The laser probe was more discriminatory; it could sort the toner brands and reveal the overprint sequence regardless of font size, while the sampled area was not visibly different from neighboring areas even under the microscope. In terms of general analytical performance, the laser probe featured lateral resolution of tens of micrometers, depth resolution of tens to hundreds of nanometers, and atto-mole mass detection limits. It could handle samples of arbitrary size and shape, operated in air, and required no sample pretreatment. It will prove useful whenever high-resolution, high-sensitivity 3D elemental mapping is required.

  6. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging

    PubMed Central

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. At the same time, ionization efficiency increased with decreasing solvent flow rate. Our results indicate that a reduced sampling area is compatible with adequate ionization efficiency when using a nanopipette. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated. PMID:28101441

  7. Sampling stratospheric aerosols with impactors

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.

    1989-01-01

    Derivation of statistically significant size distributions from impactor samples of rarefied stratospheric aerosols imposes difficult sampling constraints on collector design. It is shown that it is necessary to design impactors of different sizes for each range of aerosol size collected so as to obtain acceptable levels of uncertainty with a reasonable amount of data reduction.

  8. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence to detect out-of-specification units decreases, which must be compensated for by an increase in sample size to enhance confidence in the estimate. Based on the level of knowledge acquired during PPQ and the further knowledge required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
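
    The reliability levels and sample sizes quoted above are consistent with the standard zero-failure ("success run") relation n ≥ ln(1 − C)/ln(R) at a confidence level C of 95%. The abstract does not spell out the formula, so the short Python sketch below is an illustration under that assumption rather than a reproduction of the paper's derivation.

    ```python
    import math

    def success_run_n(reliability: float, confidence: float = 0.95) -> int:
        """Zero-failure (success-run) sample size: the smallest n with
        1 - reliability**n >= confidence, i.e. n >= ln(1-C) / ln(R)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    for label, r in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
        print(f"{label}-risk factor (R = {r:.2f}): n = {success_run_n(r)}")
    # high-risk factor (R = 0.99): n = 299
    # medium-risk factor (R = 0.95): n = 59
    # low-risk factor (R = 0.90): n = 29
    ```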

  9. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research, the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire-9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.
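
    As an illustration of why standardized t-statistics drift with sample size while mean squares do not, the sketch below applies the Wilson–Hilferty cube-root standardization commonly used for Rasch fit statistics to one fixed mean square. The variance term q² ≈ 2/n is a rough large-sample stand-in assumed here purely for illustration, not a value taken from the paper.

    ```python
    import math

    def approx_t(mean_square: float, n: int) -> float:
        """Wilson-Hilferty standardization of a mean-square fit statistic,
        with the crude large-sample variance assumption q^2 ~ 2/n."""
        q = math.sqrt(2.0 / n)
        return (mean_square ** (1.0 / 3.0) - 1.0) * (3.0 / q) + q / 3.0

    # The same modest misfit (MS = 1.3) standardizes to an ever larger t
    # as the sample grows, mirroring the sensitivity reported above.
    for n in [25, 100, 400, 3200]:
        print(f"n = {n:4d}: MS = 1.30, t = {approx_t(1.3, n):5.2f}")
    ```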

  10. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Any setting, not limited to healthcare; any participants taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of the sizes that had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
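
    Two quantities from this review lend themselves to a quick sketch: the CV of cluster size itself, and one common way of accounting for unequal cluster sizes in a sample size calculation, the design-effect approximation deff ≈ 1 + ((1 + CV²)·m̄ − 1)·ICC. The cluster sizes and ICC below are hypothetical, not data from the review.

    ```python
    import statistics

    def cluster_cv(sizes):
        """Coefficient of variation of cluster size: SD / mean."""
        return statistics.stdev(sizes) / statistics.mean(sizes)

    def design_effect(mean_size, cv, icc):
        """Common variance-inflation approximation for unequal cluster
        sizes: deff = 1 + ((1 + cv^2) * mean_size - 1) * icc."""
        return 1.0 + ((1.0 + cv ** 2) * mean_size - 1.0) * icc

    sizes = [50, 75, 100, 125, 200]   # hypothetical cluster sizes
    cv = cluster_cv(sizes)
    m = statistics.mean(sizes)
    print(f"CV = {cv:.2f}")
    print(f"deff assuming equal clusters: {design_effect(m, 0.0, 0.05):.2f}")
    print(f"deff with observed CV:        {design_effect(m, cv, 0.05):.2f}")
    ```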

  11. Hierarchical modeling of cluster size in wildlife surveys

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).

  12. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach, the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem, which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
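
    The sketch below conveys the shape of this approach with a deliberately simplified utility: expected net benefit equals a licensing gain times the frequentist power of the regulatory test, minus the trial's cost, maximized over the sample size. The gain, cost and effect-size numbers are hypothetical, and the paper's actual Bayesian formulation (a prior on the treatment effect, following Grundy et al. and Lindley) is richer than this.

    ```python
    import math
    from scipy.stats import norm

    GAIN = 5_000_000.0        # benefit if the licence is granted (hypothetical)
    COST_PER_PATIENT = 2_000.0
    DELTA, SIGMA = 0.25, 1.0  # assumed true effect and outcome SD
    ALPHA = 0.05

    def power(n_per_group: int) -> float:
        """Power of a two-arm z-test at the assumed true effect."""
        se = SIGMA * math.sqrt(2.0 / n_per_group)
        return norm.cdf(DELTA / se - norm.ppf(1.0 - ALPHA / 2.0))

    def enb(n_per_group: int) -> float:
        """Expected net benefit = gain * P(success) - total trial cost."""
        return GAIN * power(n_per_group) - COST_PER_PATIENT * 2 * n_per_group

    best = max(range(10, 2001, 10), key=enb)
    print(f"optimal n per group ~ {best}, ENB ~ {enb(best):,.0f}")
    ```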

  13. Coronary CT angiography using 64 detector rows: methods and design of the multi-centre trial CORE-64.

    PubMed

    Miller, Julie M; Dewey, Marc; Vavere, Andrea L; Rochitte, Carlos E; Niinuma, Hiroyuki; Arbab-Zadeh, Armin; Paul, Narinder; Hoe, John; de Roos, Albert; Yoshioka, Kunihiro; Lemos, Pedro A; Bush, David E; Lardo, Albert C; Texter, John; Brinker, Jeffery; Cox, Christopher; Clouse, Melvin E; Lima, João A C

    2009-04-01

    Multislice computed tomography (MSCT) for the noninvasive detection of coronary artery stenoses is a promising candidate for widespread clinical application because of its non-invasive nature and high sensitivity and negative predictive value as found in several previous studies using 16 to 64 simultaneous detector rows. A multi-centre study of CT coronary angiography using 16 simultaneous detector rows has shown that 16-slice CT is limited by a high number of nondiagnostic cases and a high false-positive rate. A recent meta-analysis indicated a significant interaction between the size of the study sample and the diagnostic odds ratios suggestive of small study bias, highlighting the importance of evaluating MSCT using 64 simultaneous detector rows in a multi-centre approach with a larger sample size. In this manuscript we detail the objectives and methods of the prospective "CORE-64" trial ("Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography using 64 Detectors"). This multi-centre trial was unique in that it assessed the diagnostic performance of 64-slice CT coronary angiography in nine centres worldwide in comparison to conventional coronary angiography. In conclusion, the multi-centre, multi-institutional and multi-continental trial CORE-64 has great potential to ultimately assess the per-patient diagnostic performance of coronary CT angiography using 64 simultaneous detector rows.

  14. A Direct Comparison of Two Densely Sampled HIV Epidemics: The UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Ragonnet-Cronin, Manon L.; Shilaih, Mohaned; Günthard, Huldrych F.; Hodcroft, Emma B.; Böni, Jürg; Fearnhill, Esther; Dunn, David; Yerly, Sabine; Klimkait, Thomas; Aubert, Vincent; Yang, Wan-Lin; Brown, Alison E.; Lycett, Samantha J.; Kouyos, Roger; Brown, Andrew J. Leigh

    2016-09-01

    Phylogenetic clustering approaches can elucidate HIV transmission dynamics. Comparisons across countries are essential for evaluating public health policies. Here, we used a standardised approach to compare the UK HIV Drug Resistance Database and the Swiss HIV Cohort Study while maintaining data-protection requirements. Clusters were identified in subtype A1, B and C pol phylogenies. We generated degree distributions for each risk group and compared distributions between countries using Kolmogorov-Smirnov (KS) tests, Degree Distribution Quantification and Comparison (DDQC) and bootstrapping. We used logistic regression to predict cluster membership based on country, sampling date, risk group, ethnicity and sex. We analysed >8,000 Swiss and >30,000 UK subtype B sequences. At 4.5% genetic distance, the UK was more clustered and MSM and heterosexual degree distributions differed significantly by the KS test. The KS test is sensitive to variation in network scale, and jackknifing the UK MSM dataset to the size of the Swiss dataset removed the difference. Only heterosexuals varied based on the DDQC, due to UK male heterosexuals who clustered exclusively with MSM. Their removal eliminated this difference. In conclusion, the UK and Swiss HIV epidemics have similar underlying dynamics and observed differences in clustering are mainly due to different population sizes.
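
    A minimal illustration of the two steps described: a two-sample Kolmogorov-Smirnov comparison of degree distributions, then subsampling the larger dataset to the smaller one's size to strip out the KS test's sensitivity to network scale. The Poisson-distributed degrees are placeholders for the phylogeny-derived cluster degrees used in the study.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Hypothetical degree samples standing in for the UK (~30,000) and
    # Swiss (~8,000) subtype B datasets.
    uk_degrees = rng.poisson(2.0, 30_000)
    swiss_degrees = rng.poisson(2.0, 8_000)

    print(ks_2samp(uk_degrees, swiss_degrees))   # full-size comparison

    # Subsample the larger dataset to the smaller one's size, as the
    # abstract describes, before re-testing.
    sub = rng.choice(uk_degrees, size=swiss_degrees.size, replace=False)
    print(ks_2samp(sub, swiss_degrees))
    ```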

  15. Prevalence of HIV and Syphilis Infection among Men Who Have Sex with Men in China: A Meta-Analysis

    PubMed Central

    Zhou, Yunhua; Li, Dongliang; Lu, Dabing; Ruan, Yuhua; Qi, Xiao

    2014-01-01

    Objectives. To determine the most current prevalence of HIV and syphilis among MSM in China. Methods. A meta-analysis was conducted on the studies searched through PubMed, CNKI, and Wanfang published between 1 January 2009 and 11 April 2013. Results. Eighty-four eligible studies, either in Chinese or in English, were included in this review. The pooled prevalence of HIV and syphilis infection in MSM in China was 6.5% and 11.2%, respectively. The subgroup analyses indicated that the prevalence of HIV infection was higher in the economically less developed cities than in the developed cities (7.5% versus 6.1%, P < 0.05). In contrast, the prevalence of syphilis infection was lower in less developed cities than in developed cities (8.6% versus 15.1%). Studies with a sample size smaller than 500 had a lower prevalence of HIV and syphilis infection than those with a sample size greater than 500 (5.9% versus 7.2% for HIV; 11.0% versus 11.5% for syphilis, respectively). Conclusions. HIV and syphilis infection are prevalent in MSM in China. The different prevalence of HIV and syphilis infection between developing and developed cities underscores the need to target prevention strategies based on economic conditions. PMID:24868533

  16. Parametric analyses of summative scores may lead to conflicting inferences when comparing groups: A simulation study.

    PubMed

    Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S

    2015-04-01

    To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.

  17. Testing three pathways to substance use and delinquency among low-income African American adolescents☆

    PubMed Central

    Marotta, Phillip L.; Voisin, Dexter R.

    2017-01-01

    Objective Mounting literature suggests that parental monitoring, risky peer norms, and future orientation correlate with illicit drug use and delinquency. However, few studies have investigated these constructs simultaneously in a single statistical model with low-income African American youth. This study examined parental monitoring, peer norms and future orientation as primary pathways to drug use and delinquent behaviors in a large sample of African American urban adolescents. Methods A path model tested direct paths from peer norms, parental monitoring, and future orientation to drug use and delinquency outcomes after adjusting for potential confounders such as age, socioeconomic status, and sexual orientation in a sample of 541 African American youth. Results Greater scores on measures of risky peer norms were associated with heightened risk of delinquency, with an effect size twice the magnitude of the protective effect of future orientation. Regarding substance use, greater perceived risky peer norms correlated with an increased likelihood of substance use, with a standardized effect size 3.33 times the magnitude of the protective effect of parental monitoring. Conclusions Findings from this study suggest that interventions targeting risky peer norms among African American adolescents may have a greater impact on reducing substance use and delinquency than interventions exclusively targeting parental monitoring or future orientation. PMID:28974824

  18. Surveillance for transmissible spongiform encephalopathy in scavengers of white-tailed deer carcasses in the chronic wasting disease area of wisconsin

    USGS Publications Warehouse

    Jennelle, C.S.; Samuel, M.D.; Nolden, C.A.; Keane, D.P.; Barr, D.J.; Johnson, Chad; Vanderloo, J.P.; Aiken, Judd M.; Hamir, A.N.; Hoover, E.A.

    2009-01-01

    Chronic wasting disease (CWD), a class of neurodegenerative transmissible spongiform encephalopathies (TSE) occurring in cervids, is found in a number of states and provinces across North America. Misfolded prions, the infectious agents of CWD, are deposited in the environment via carcass remains and excreta, and pose a threat of cross-species transmission. In this study tissues were tested from 812 representative mammalian scavengers, collected in the CWD-affected area of Wisconsin, for TSE infection using the IDEXX HerdChek enzyme-linked immunosorbent assay (ELISA). Only four of the collected mammals tested positive using the ELISA, but these were negative when tested by Western blot. While our sample sizes permitted high probabilities of detecting TSE assuming 1% population prevalence in several common scavengers (93%, 87%, and 87% for raccoons, opossums, and coyotes, respectively), insufficient sample sizes for other species precluded similar conclusions. One cannot rule out successful cross-species TSE transmission to scavengers, but the results suggest that such transmission is not frequent in the CWD-affected area of Wisconsin. The need for further surveillance of scavenger species, especially those known to be susceptible to TSE (e.g., cat, American mink, raccoon), is highlighted in both a field and laboratory setting.
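
    The quoted detection probabilities (93%, 87% and 87% at an assumed 1% prevalence) follow the usual relation P = 1 − (1 − p)^n. A short sketch, with hypothetical sample sizes, shows the calculation and its inversion for a target detection probability.

    ```python
    import math

    def detection_prob(n: int, prevalence: float = 0.01) -> float:
        """P(at least one positive among n animals) = 1 - (1 - p)^n."""
        return 1.0 - (1.0 - prevalence) ** n

    for n in [100, 200, 265]:          # hypothetical species sample sizes
        print(f"n = {n}: detection probability = {detection_prob(n):.2f}")

    # Smallest n achieving 93% detection at 1% prevalence:
    n_needed = math.ceil(math.log(1 - 0.93) / math.log(1 - 0.01))
    print(f"n for 93% detection: {n_needed}")   # 265
    ```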

  19. Effect of ticagrelor with clopidogrel on high on-treatment platelet reactivity in acute stroke or transient ischemic attack (PRINCE) trial: Rationale and design.

    PubMed

    Wang, Yilong; Lin, Yi; Meng, Xia; Chen, Weiqi; Chen, Guohua; Wang, Zhimin; Wu, Jialing; Wang, Dali; Li, Jianhua; Cao, Yibin; Xu, Yuming; Zhang, Guohua; Li, Xiaobo; Pan, Yuesong; Li, Hao; Liu, Liping; Zhao, Xingquan; Wang, Yongjun

    2017-04-01

    Rationale and aim Little is known about the safety and efficacy of the combination of ticagrelor and aspirin in acute ischemic stroke. This study aimed to evaluate whether the combination of ticagrelor and aspirin was superior to that of clopidogrel and aspirin in reducing 90-day high on-treatment platelet reactivity after acute minor stroke or transient ischemic attack, especially for carriers of the cytochrome P450 2C19 loss-of-function allele. Sample size and design This study was designed as a prospective, multicenter, randomized, open-label, active-controlled, blinded-endpoint phase IIb trial. The required sample size was 952 patients. It was registered with ClinicalTrials.gov (NCT02506140). Study outcomes The primary outcome was the proportion of patients with high on-treatment platelet reactivity at 90 days. High on-treatment platelet reactivity is defined as a P2Y12 reaction unit >208 measured using the VerifyNow P2Y12 assay. Conclusion The Platelet Reactivity in Acute Non-disabling Cerebrovascular Events study explored whether ticagrelor combined with aspirin could further reduce the proportion of patients with high on-treatment platelet reactivity at 90 days after acute minor stroke or transient ischemic attack compared with clopidogrel and aspirin.

  20. Stimulus edge effects in the measurement of macular pigment using heterochromatic flicker photometry.

    PubMed

    Smollon, William E; Wooten, Billy R; Hammond, Billy R

    2015-11-01

    Heterochromatic flicker photometry (HFP) is the most common technique of measuring macular pigment optical density (MPOD). Some data strongly suggest that HFP samples MPOD specifically at the edge of center-fixated circular stimuli. Other data have led to the conclusion that HFP samples over the entire area of the stimulus. To resolve this disparity, MPOD was measured using HFP and a series of solid discs of varying radii (0.25 to 2.0 deg) and with thin annuli corresponding to the edge of those discs. MPOD assessed with the two methods yielded excellent correspondence and linearity: Y = 0.01 + 0.98X, r = 0.96. A second set of experiments showed that if a disc stimulus is adjusted for no-flicker (the standard procedure) and simply reduced in size, no flicker is observed despite the higher level of MPOD in the smaller area. Taken together, these results confirm that MPOD is determined at the edge of the measuring stimulus when using stimulus sizes in the range that is in dispute (up to a radius of 0.75 deg). The basis for this edge effect can be explained by quantitative differences in the spatial-temporal properties of the visual field as a function of angular distance from the fixation point.

  1. Meta-analysis of genome-wide association studies of HDL cholesterol response to statins

    PubMed Central

    Postmus, Iris; Warren, Helen R; Trompet, Stella; Arsenault, Benoit J; Avery, Christy L; Bis, Joshua C; Chasman, Daniel I; de Keyser, Catherine E; Deshmukh, Harshal A; Evans, Daniel S; Feng, QiPing; Li, Xiaohui; Smit, Roelof AJ; Smith, Albert V; Sun, Fangui; Taylor, Kent D; Arnold, Alice M; Barnes, Michael R; Barratt, Bryan J; Betteridge, John; Boekholdt, S Matthijs; Boerwinkle, Eric; Buckley, Brendan M; Chen, Y-D Ida; de Craen, Anton JM; Cummings, Steven R; Denny, Joshua C; Dubé, Marie Pierre; Durrington, Paul N; Eiriksdottir, Gudny; Ford, Ian; Guo, Xiuqing; Harris, Tamara B; Heckbert, Susan R; Hofman, Albert; Hovingh, G Kees; Kastelein, John JP; Launer, Leonore J; Liu, Ching-Ti; Liu, Yongmei; Lumley, Thomas; McKeigue, Paul M; Munroe, Patricia B; Neil, Andrew; Nickerson, Deborah A; Nyberg, Fredrik; O’Brien, Eoin; O’Donnell, Christopher J; Post, Wendy; Poulter, Neil; Vasan, Ramachandran S; Rice, Kenneth; Rich, Stephen S; Rivadeneira, Fernando; Sattar, Naveed; Sever, Peter; Shaw-Hawkins, Sue; Shields, Denis C; Slagboom, P Eline; Smith, Nicholas L; Smith, Joshua D; Sotoodehnia, Nona; Stanton, Alice; Stott, David J; Stricker, Bruno H; Stürmer, Til; Uitterlinden, André G; Wei, Wei-Qi; Westendorp, Rudi GJ; Whitsel, Eric A; Wiggins, Kerri L; Wilke, Russell A; Ballantyne, Christie M; Colhoun, Helen M; Cupples, L Adrienne; Franco, Oscar H; Gudnason, Vilmundur; Hitman, Graham; Palmer, Colin NA; Psaty, Bruce M; Ridker, Paul M; Stafford, Jeanette M; Stein, Charles M; Tardif, Jean-Claude; Caulfield, Mark J; Jukema, J Wouter; Rotter, Jerome I; Krauss, Ronald M

    2017-01-01

    Background In addition to lowering low density lipoprotein-cholesterol (LDL-C), statin therapy also raises high density lipoprotein-cholesterol (HDL-C) levels. Inter-individual variation in HDL-C response to statins may be partially explained by genetic variation. Methods and Results We performed a meta-analysis of genome-wide association studies (GWAS) to identify variants with an effect on statin-induced HDL-C changes. The 123 most promising signals with P<1×10−4 from the 16,769 statin-treated participants in the first analysis stage were followed up in an independent group of 10,951 statin-treated individuals, providing a total sample size of 27,720 individuals. The only associations of genome-wide significance (P<5×10−8) were between minor alleles at the CETP locus and greater HDL-C response to statin treatment. Conclusion Based on results from this study that included a relatively large sample size, we suggest that CETP may be the only detectable locus with common genetic variants that influence HDL-C response to statins substantially in individuals of European descent. Although CETP is known to be associated with HDL-C, we provide evidence that this pharmacogenetic effect is independent of its association with baseline HDL-C levels. PMID:27587472

  2. Does sampling using random digit dialling really cost more than sampling from telephone directories: Debunking the myths

    PubMed Central

    Yang, Baohui; Eyeson-Annan, Margo

    2006-01-01

    Background Computer assisted telephone interviewing (CATI) is widely used for health surveys. The advantages of CATI over face-to-face interviewing are timeliness and cost reduction to achieve the same sample size and geographical coverage. Two major CATI sampling procedures are used: sampling directly from the electronic white pages (EWP) telephone directory and list assisted random digit dialling (LA-RDD) sampling. EWP sampling covers telephone numbers of households listed in the printed white pages. LA-RDD sampling has a better coverage of households than EWP sampling but is considered to be more expensive due to interviewers dialling more out-of-scope numbers. Methods This study compared an EWP sample and an LA-RDD sample from the New South Wales Population Health Survey in 2003 on demographic profiles, health estimates, coefficients of variation in weights, design effects on estimates, and cost effectiveness, on the basis of achieving the same level of precision of estimates. Results The LA-RDD sample better represented the population than the EWP sample, with a coefficient of variation of weights of 1.03 for LA-RDD compared with 1.21 for EWP, and average design effects of 2.00 for LA-RDD compared with 2.38 for EWP. Also, an LA-RDD sample can save up to 14.2% in cost compared to an EWP sample to achieve the same precision for health estimates. Conclusion An LA-RDD sample better represents the population, which potentially leads to reduced bias in health estimates, and rather than costing more than EWP actually costs less. PMID:16504117
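
    The reported coefficients of variation of the weights and the average design effects are connected, at least approximately, by Kish's weighting-only design effect deff ≈ 1 + CV(w)². The observed design effects (2.00 and 2.38) also absorb other design features, so the check below is indicative only.

    ```python
    def kish_deff(cv_weights: float) -> float:
        """Kish's approximate design effect from unequal weights:
        deff_w = 1 + CV(w)^2."""
        return 1.0 + cv_weights ** 2

    print(f"LA-RDD: deff ~ {kish_deff(1.03):.2f}")   # ~2.06 vs reported 2.00
    print(f"EWP:    deff ~ {kish_deff(1.21):.2f}")   # ~2.46 vs reported 2.38
    ```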

  3. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in the neonate. Current AEDs exhibit sub-optimal efficacy and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and control for differences in Td between groups in the analysis will be valid and minimise sample size. PMID:27824913

  4. The Relationship between National-Level Carbon Dioxide Emissions and Population Size: An Assessment of Regional and Temporal Variation, 1960–2005

    PubMed Central

    Jorgenson, Andrew K.; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings for this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region. PMID:23437323

  5. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sampling size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients with REs and CVs ≤10%. Among all sampling strategies, reducing the number of sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients with REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for the optimal SWC sampling design.
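
    A stripped-down version of the resampling scheme described above — random subsets with 3000 replicates per sample size, summarized by the relative error (RE) and coefficient of variation (CV) of the estimated mean — using hypothetical SWC values and only the global random strategy.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    swc = rng.normal(0.30, 0.05, 100)   # hypothetical SWC at 100 sites
    true_mean = swc.mean()

    for n in [12, 24, 48, 72]:
        means = np.array([rng.choice(swc, n, replace=False).mean()
                          for _ in range(3000)])
        re = np.mean(np.abs(means - true_mean)) / true_mean * 100
        cv = means.std() / means.mean() * 100
        print(f"n = {n:3d}: mean RE = {re:4.1f}%, CV = {cv:4.1f}%")
    ```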

  6. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. Here we propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  7. Sex Differences in DSM-IV Posttraumatic Stress Disorder Symptoms Expression Using Item Response Theory: a Population-based Study

    PubMed Central

    Rivollier, Fabrice; Peyre, Hugo; Hoertel, Nicolas; Blanco, Carlos; Limosin, Frédéric; Delorme, Richard

    2015-01-01

    Background Whether there are systematic sex differences in posttraumatic stress disorder (PTSD) symptom expression remains debated. Using methods based on item response theory (IRT), we aimed at examining differences in the likelihood of reporting DSM-IV symptoms of PTSD between women and men, while stratifying for major trauma type and equating for PTSD severity. Method We compared data from women and men in a large nationally representative adult sample, the National Epidemiologic Survey on Alcohol and Related Conditions. Analyses were conducted in the full population sample of individuals who met the DSM-IV criterion A (n = 23,860) and in subsamples according to trauma types. Results The clinical presentation of the 17 DSM-IV PTSD symptoms in the general population did not substantially differ in women and men in the full population and by trauma type after equating for levels of PTSD severity. The only exception was the symptom “foreshortened future”, which was more likely endorsed by men at equivalent levels of PTSD severity. Limitations The retrospective nature of the assessment of PTSD symptoms could have led to recall bias. Our sample size was too small to draw conclusions among individuals who experienced war-related traumas. Conclusions Our findings suggest that the clinical presentation of PTSD does not differ substantially between women and men. We also provide additional psychometric support to the exclusion of the symptom “foreshortened future” from the diagnostic criteria for PTSD in the DSM-5. PMID:26342916

  8. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data under the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data under the above three designs, demonstrated their realization using the formulas and the POWER procedure of SAS software, and elaborated on them with examples, which will help researchers implement the repetition principle.
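
    The article demonstrates these calculations with SAS's POWER procedure; an analogous computation in Python's statsmodels is sketched below for a standardized effect of 0.5 at α = 0.05 and 80% power. The effect size is illustrative, and only the single-group/paired case and a two-group contrast are shown.

    ```python
    from statsmodels.stats.power import TTestPower, TTestIndPower

    # One-sample / paired t-test: effect_size is the mean difference in
    # SD units, so this covers the paired and single-group designs.
    n_paired = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"paired/single-group design: n = {n_paired:.1f}")

    # Two independent groups need more subjects for the same effect:
    n_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"two-group design: n = {n_group:.1f} per group")
    ```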

  9. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
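
    For orientation, the classical Hsieh-style sample size for simple logistic regression with one standardized normal covariate is n = (z₁₋α/₂ + z_power)² / (p(1 − p)β*²), where p is the event prevalence and β* is the log odds ratio per SD of the covariate. The sketch below implements only this simplified form; the modification proposed in the paper (population prevalence plus Schouten's unequal-variance t-test formula) is not reproduced here.

    ```python
    import math
    from scipy.stats import norm

    def hsieh_n(p: float, beta_star: float,
                alpha: float = 0.05, power: float = 0.8) -> float:
        """Simplified Hsieh-style n for simple logistic regression with
        one standard-normal covariate; beta_star is the log OR per SD."""
        z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
        return z ** 2 / (p * (1.0 - p) * beta_star ** 2)

    # ~191 subjects for an odds ratio of 1.5 per SD at 50% prevalence:
    print(math.ceil(hsieh_n(p=0.5, beta_star=math.log(1.5))))
    ```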

  10. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    NASA Astrophysics Data System (ADS)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and subsequently how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. Chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation; this was reflected in differing toxicity of the PM samples. Some of the day-to-night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health-related toxicological properties of PM. The varying toxicological responses evoked by the PM samples show the importance of examining various particle sizes. In particular, the considerable toxicological activity detected in the PM0.2 size range suggests contributions from combustion sources, new particle formation and atmospheric processes.

  11. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
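
    A minimal sketch of the internal-pilot idea with made-up numbers: plan the sample size from a guessed variance, then recompute it once the interim data provide an updated estimate. The paper's actual setting (paired screening tests, unknown disease prevalence, and a small-sample critical-value adjustment) is more involved than this.

    ```python
    import math
    from scipy.stats import norm

    def n_per_group(sigma: float, delta: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
        """Per-group n for a two-group z-test (normal approximation)."""
        z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
        return math.ceil(2.0 * (z * sigma / delta) ** 2)

    planned = n_per_group(sigma=1.0, delta=0.4)   # design-stage guess
    interim_sigma = 1.3                           # re-estimated from pilot data
    updated = n_per_group(sigma=interim_sigma, delta=0.4)
    print(f"planned n/group = {planned}, updated n/group = {updated}")
    # In small studies (N < 50) the paper additionally adjusts the
    # critical value to keep the Type I error rate at its nominal level.
    ```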

  12. Using Bayesian Adaptive Trial Designs for Comparative Effectiveness Research: A Virtual Trial Execution.

    PubMed

    Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R

    2016-09-20

    Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42,000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.

  13. Silicone Oil Microdroplets and Protein Aggregates in Repackaged Bevacizumab and Ranibizumab: Effects of Long-term Storage and Product Mishandling

    PubMed Central

    Liu, Lu; Ammar, David A.; Ross, Lindsey A.; Mandava, Naresh; Kahook, Malik Y.

    2011-01-01

    Purpose. To quantify levels of subvisible particles and protein aggregates in repackaged bevacizumab obtained from compounding pharmacies, as well as in samples of bevacizumab and ranibizumab tested in controlled laboratory experiments. Methods. Repackaged bevacizumab was purchased from four external compounding pharmacies. For controlled laboratory studies, bevacizumab and placebo were drawn into plastic syringes and incubated at −20°C, 4°C, and room temperature (with and without exposure to light) for 12 weeks. In addition, mechanical shock occurring during shipping was mimicked with syringes containing bevacizumab. Particle counts and size distributions were quantified by particle characterization technology. Levels of monomer and soluble aggregates of bevacizumab were determined with size-exclusion high-performance liquid chromatography (SE-HPLC). Results. Repackaged bevacizumab from the compounding pharmacies had a wide range of particle counts (89,006 ± 56,406 to 602,062 ± 18,349/mL). Bevacizumab sampled directly from the original glass vial had particle counts of 63,839 ± 349/mL. There was up to a 10% monomer loss in the repackaged bevacizumab. Laboratory samples of repackaged bevacizumab and placebo had initial particle counts, respectively, of 283,675 ± 60,494/mL and 492,314 ± 389,361/mL. Freeze-thawing of both bevacizumab and placebo samples led to >1.2 million particles/mL. In all repackaged samples, most of the particles were due to silicone oil. SE-HPLC showed no significant differences for repackaged samples incubated in the laboratory under various conditions, compared with bevacizumab directly from vial. However, repeated freeze-thawing caused a more than 10% monomer loss. Conclusions. Bevacizumab repackaged in plastic syringes could contain protein aggregates and is contaminated by silicone oil microdroplets. Freeze-thawing or other mishandling can further increase levels of particle contaminants. PMID:21051703

  14. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that the enhanced ductility at small sample sizes is primarily due to diffuse damage accumulation, which for larger samples leads to brittle catastrophic failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  15. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  16. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…

  17. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  18. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  19. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  20. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  1. The impact of the 2009/2010 enhancement of cigarette health warning labels in Uruguay: longitudinal findings from the International Tobacco Control (ITC) Uruguay Survey

    PubMed Central

    Gravely, Shannon; Fong, Geoffrey T.; Driezen, Pete; McNally, Mary; Thrasher, James F.; Thompson, Mary E.; Boado, Marcelo; Bianco, Eduardo; Borland, Ron; Hammond, David

    2015-01-01

    Background FCTC Article 11 Guidelines recommend that health warning labels (HWLs) should occupy at least 50% of the package, but the tobacco industry claims that increasing the size would not lead to further benefits. This article reports the first population study to examine the impact of increasing HWL size above 50%. We tested the hypothesis that the 2009/2010 enhancement of the HWLs in Uruguay would be associated with higher levels of effectiveness. Methods Data were drawn from a cohort of adult smokers (≥18 years) participating in the International Tobacco Control (ITC) Uruguay Survey. The probability sample cohort was representative of adult smokers in 5 cities. The surveys included key indicators of HWL effectiveness. Data were collected in 2008/09 (pre-policy: Wave 2) and 2010/11 (post-policy: Wave 3). Results Overall, 1746 smokers participated in the study at Wave 2 (n=1,379) and Wave 3 (n=1,411). Following the 2009/2010 HWL changes in Uruguay (from 50% to 80% in size), all indicators of HWL effectiveness increased significantly [noticing HWLs: odds ratio (OR)=1.44, p=0.015; reading HWLs: OR=1.42, p=0.002; impact of HWLs on thinking about risks of smoking: OR=1.66, p<0.001; HWLs increasing thinking about quitting: OR=1.76, p<0.001; avoiding looking at the HWLs: OR=2.35, p<0.001; and reports that HWLs stopped smokers from having a cigarette “many times”: OR=3.42, p<0.001]. Conclusions The 2009/2010 changes to HWLs in Uruguay, including a substantial increase in size, led to increases in key HWL indicators, thus supporting the conclusion that enhancing HWLs beyond minimum guideline recommendations can lead to even higher levels of effectiveness. PMID:25512431

  2. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm, respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
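
    As a toy illustration of the "optimal sample size" question, the Monte Carlo sketch below trades the probability of picking the better payoff distribution against an assumed cost per free draw. The Gaussian payoffs, the cost constant, and the choose-the-higher-sample-mean rule are illustrative assumptions, not the authors' decision-theoretic treatment.

      # Toy search for an "optimal" DFE sample size. Payoff distributions,
      # exploration cost, and decision rule are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(0)

      def p_correct(n, reps=5_000):
          """Estimate P(picking the better option) after n free draws from each."""
          a = rng.normal(1.0, 2.0, size=(reps, n))   # better option (mean 1.0)
          b = rng.normal(0.5, 2.0, size=(reps, n))   # worse option (mean 0.5)
          return np.mean(a.mean(axis=1) > b.mean(axis=1))

      cost_per_draw = 0.002                          # assumed cost of exploring
      # the objective is noisy (Monte Carlo), which is acceptable for a sketch
      best_n = max(range(1, 81),
                   key=lambda n: p_correct(n) - cost_per_draw * 2 * n)
      print("approximately optimal draws per option:", best_n)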

  3. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  4. Morphological and Hemodynamic Discriminators for Rupture Status in Posterior Communicating Artery Aneurysms

    PubMed Central

    Karmonik, Christof; Fang, Yibin; Xu, Jinyu; Yu, Ying; Cao, Wei; Liu, Jianmin; Huang, Qinghai

    2016-01-01

    Background and Purpose The conflicting findings of previous morphological and hemodynamic studies on intracranial aneurysm rupture may be caused by the relatively small sample sizes and the variation in location of the patient-specific aneurysm models. We aimed to determine the discriminators for aneurysm rupture status by focusing only on posterior communicating artery (PCoA) aneurysms. Materials and Methods In 129 PCoA aneurysms (85 ruptured, 44 unruptured), clinical, morphological and hemodynamic characteristics were compared between the ruptured and unruptured cases. Multivariate logistic regression analysis was performed to determine the discriminators for rupture status of PCoA aneurysms. Results Univariate analyses showed that the size of aneurysm dome, aspect ratio (AR), size ratio (SR), dome-to-neck ratio (DN), inflow angle (IA), normalized wall shear stress (NWSS) and percentage of low wall shear stress area (LSA) were significantly associated with PCoA aneurysm rupture status; with multivariate analyses, significance was only retained for higher IA (OR = 1.539, p < 0.001) and LSA (OR = 1.393, p = 0.041). Conclusions Hemodynamics and morphology were related to rupture status of intracranial aneurysms. Higher IA and LSA were identified as discriminators for rupture status of PCoA aneurysms. PMID:26910518
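
    A minimal sketch of the multivariate step named above, using statsmodels on simulated data; the covariate scales, coefficients, and sample of 129 "aneurysms" are invented stand-ins for the patient-specific models.

      # Multivariate logistic regression for rupture status on synthetic data;
      # IA and LSA scales and the true coefficients are invented for illustration.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 129
      IA = rng.normal(120.0, 25.0, n)      # inflow angle, degrees (assumed scale)
      LSA = rng.normal(10.0, 6.0, n)       # % low wall-shear-stress area (assumed)
      logit = -8.0 + 0.05 * IA + 0.08 * LSA
      ruptured = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

      X = sm.add_constant(np.column_stack([IA, LSA]))
      fit = sm.Logit(ruptured, X).fit(disp=False)
      print("odds ratios (IA, LSA):", np.exp(fit.params[1:]))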

  5. A Double Blind, Placebo- controlled Trial of Rosiglitazone for Clozapine induced Glucose Metabolism Impairment in patients with Schizophrenia

    PubMed Central

    Henderson, David C.; Fan, Xiaoduo; Sharma, Bikash; Copeland, Paul M.; Borba, Christina P; Boxill, Ryan; Freudenreich, Oliver; Cather, Corey; Evins, A. Eden; Goff, Donald C.

    2014-01-01

    Objective The primary purpose of this eight-week double-blind, placebo-controlled trial of rosiglitazone 4 mg/day was to examine its effect on the insulin sensitivity index (SI) and glucose utilization (SG) in clozapine-treated schizophrenia subjects with insulin resistance. Methods Eighteen subjects were randomized and assessed with a Frequently Sampled Intravenous Glucose Tolerance Test (FSIVGTT) at baseline and week 8 to estimate SG and SI. Results Controlling for baseline and comparing the rosiglitazone group with the placebo group, there was a non-significant improvement in SG (0.016± 0.006 to 0.018± 0.008, effect size= 0.23, p= 0.05) and a trend toward improvement in SI in the rosiglitazone group (4.6± 2.8 to 7.8± 6.7, effect size= 0.18, p= 0.08). There was a significant reduction in small low-density-lipoprotein cholesterol (LDL-C) particle number (987± 443 to 694± 415, effect size= 0.30, p= 0.04). Conclusion Rosiglitazone may have a role in addressing the insulin resistance and lipid abnormalities associated with clozapine. PMID:19183127

  6. Sample size of the reference sample in a case-augmented study.

    PubMed

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

    We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontally and vertically oriented sampling windows of different sizes (320x160 µm, 160x80 µm and 80x40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone densities estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results illustrate the importance of specifying the size and orientation of the sampling window used to derive cone metric estimates, to facilitate comparison of different studies. PMID:24009995
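
    The Bland-Altman computation used above is simple enough to show inline; a minimal sketch, assuming invented paired density estimates rather than the study's measurements.

      # Bland-Altman agreement between cone-density estimates from two window
      # conditions; the paired values below are invented (cones/mm^2).
      import numpy as np

      dens_large = np.array([21500., 19800., 23000., 20400., 22100., 18900.])
      dens_small = np.array([20800., 20500., 22100., 21200., 21400., 19600.])

      diff = dens_small - dens_large
      bias = diff.mean()
      half_width = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
      print(f"bias={bias:.0f}, limits of agreement: "
            f"{bias - half_width:.0f} to {bias + half_width:.0f}")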

  8. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
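
    The asymptote described above is easy to reproduce: a minimal sketch that estimates a minimum-convex-polygon home range from simulated bivariate-normal locations, assuming a far simpler movement model than the paper's simulations.

      # Home-range (convex hull) area vs. number of locations on simulated data.
      # The bivariate-normal movement model is an assumption for illustration.
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(2)
      for n in (10, 25, 50, 100, 200, 400):
          areas = [ConvexHull(rng.normal(0.0, 1.0, size=(n, 2))).volume  # 2-D: area
                   for _ in range(200)]
          print(f"n={n:4d}  mean area={np.mean(areas):6.2f}  SD={np.std(areas):.2f}")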

  9. "TNOs are Cool": A survey of the trans-Neptunian region. XIII. Statistical analysis of multiple trans-Neptunian objects observed with Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.

    2017-11-01

    Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, whereas this characteristic is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse parameters of multiple trans-Neptunian systems, observed with Herschel and Spitzer space telescopes. Particularly, statistical analysis is done for radiometric size and geometric albedo, obtained from photometric observations, and for estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with available values of the absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the 2-sample Anderson-Darling non-parametric statistical method for testing whether two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use the Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of estimated parameters together with lack of data are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs strongly correlates with heliocentric orbital inclination and with magnitude difference between components of binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. Furthermore, the statistical test indicates, although not significant with the sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
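
    Both tests named in the Methods are exposed directly by SciPy; the sketch below runs them on invented diameter and inclination arrays, not on the Herschel/Spitzer measurements.

      # 2-sample Anderson-Darling test and Spearman rank correlation via SciPy;
      # the diameters and inclinations are placeholders, not survey data.
      import numpy as np
      from scipy.stats import anderson_ksamp, spearmanr

      rng = np.random.default_rng(3)
      d_binary = rng.lognormal(mean=5.0, sigma=0.6, size=40)    # km, invented
      d_single = rng.lognormal(mean=4.8, sigma=0.7, size=120)   # km, invented

      ad = anderson_ksamp([d_binary, d_single])
      print("A-D statistic:", ad.statistic,
            "significance level:", ad.significance_level)

      inclination = rng.uniform(0.0, 30.0, size=40)             # degrees, invented
      rho, p = spearmanr(d_binary, inclination)
      print(f"Spearman rho={rho:.2f}, p={p:.3f}")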

  10. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
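
    To make the design concrete, here is a hedged Monte Carlo sketch of a two-stage drop-the-losers trial with a fixed total sample size. The arm effects, stage sizes, and the naive final z-test (which ignores the selection bias that the formal methods adjust for) are all illustrative assumptions.

      # Two-stage drop-the-losers simulation: several experimental arms plus a
      # control at stage 1; only the best arm and control continue to stage 2.
      import numpy as np

      rng = np.random.default_rng(4)

      def drop_the_losers(n1=50, n2=100, effects=(0.0, 0.0, 0.3), reps=10_000):
          """P(best arm is selected AND its naive final z exceeds 1.96)."""
          wins = 0
          for _ in range(reps):
              stage1 = [rng.normal(e, 1.0, n1) for e in effects]
              ctrl1 = rng.normal(0.0, 1.0, n1)
              best = int(np.argmax([s.mean() for s in stage1]))
              arm = np.concatenate([stage1[best],
                                    rng.normal(effects[best], 1.0, n2)])
              ctrl = np.concatenate([ctrl1, rng.normal(0.0, 1.0, n2)])
              z = (arm.mean() - ctrl.mean()) / np.sqrt(2.0 / (n1 + n2))
              wins += (best == int(np.argmax(effects))) and (z > 1.96)
          return wins / reps

      # total sample size is fixed: (len(effects) + 1) * n1 + 2 * n2 subjects
      print(drop_the_losers())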

  11. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size should be adjusted to hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
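
    The headline figure above is a ratio of two Poisson rates; a minimal sketch of the standard log-normal approximation for its confidence interval, using round-number stand-ins for the event counts and patient-day denominators.

      # Rate ratio with a 95% CI via the usual log-scale normal approximation.
      # The counts and denominators below are illustrative, not the study's data.
      import math

      def rate_ratio_ci(events1, days1, events2, days2, z=1.96):
          rr = (events1 / days1) / (events2 / days2)
          se = math.sqrt(1.0 / events1 + 1.0 / events2)   # SE of log(RR)
          return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

      print(rate_ratio_ci(events1=275, days1=7000, events2=136, days2=5000))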

  12. A comparative analysis of whole genome sequencing of esophageal adenocarcinoma pre- and post-chemotherapy

    PubMed Central

    Noorani, Ayesha; Lynch, Andy G.; Achilleos, Achilleas; Eldridge, Matthew; Bower, Lawrence; Weaver, Jamie M.J.; Crawte, Jason; Ong, Chin-Ann; Shannon, Nicholas; MacRae, Shona; Grehan, Nicola; Nutzinger, Barbara; O'Donovan, Maria; Hardwick, Richard; Tavaré, Simon; Fitzgerald, Rebecca C.

    2017-01-01

    The scientific community has avoided using tissue samples from patients who have been exposed to systemic chemotherapy to infer the genomic landscape of a given cancer. Esophageal adenocarcinoma is a heterogeneous, chemoresistant tumor for which the availability and size of pretreatment endoscopic samples are limiting. This study compares whole-genome sequencing data obtained from chemo-naive and chemo-treated samples. The quality of whole-genomic sequencing data is comparable across all samples regardless of chemotherapy status. Inclusion of samples collected post-chemotherapy increased the proportion of late-stage tumors. When comparing matched pre- and post-chemotherapy samples from 10 cases, the mutational signatures, copy number, and SNV mutational profiles reflect the expected heterogeneity in this disease. Analysis of SNVs in relation to allele-specific copy-number changes pinpoints the common ancestor to a point prior to chemotherapy. For cases in which pre- and post-chemotherapy samples do show substantial differences, the timing of the divergence is near-synchronous with endoreduplication. Comparison across a large prospective cohort (62 treatment-naive, 58 chemotherapy-treated samples) reveals no significant differences in the overall mutation rate, mutation signatures, specific recurrent point mutations, or copy-number events with respect to chemotherapy status. In conclusion, whole-genome sequencing of samples obtained following neoadjuvant chemotherapy is representative of the genomic landscape of esophageal adenocarcinoma. Excluding these samples reduces the material available for cataloging and introduces a bias toward the earlier stages of cancer. PMID:28465312

  13. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    PubMed Central

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various game scores reported on the official WoW website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players, or players more involved in the game, may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted. PMID:25001007

  14. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size sufficient to closely estimate the statistics for particular parameters has long been an issue. Although sample size may have been calculated according to the objective of a study, it is difficult to confirm whether the resulting statistics are close to the parameters of the population concerned. Meanwhile, a p-value of less than 0.05 is widely used as inferential evidence. This study therefore audited results obtained from various subsamples and statistical analyses and compared them with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each, were analyzed. The statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  15. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Treesearch

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  16. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
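
    For contrast with the logit-normal approach proposed here, the widely used normal-approximation formula of Hsieh et al. (1998) for a standardized continuous covariate, with the 1/(1 - R²) variance-inflation adjustment for multiple covariates, can be coded in a few lines; the inputs below are illustrative.

      # Classical sample size for logistic regression (Hsieh et al., 1998):
      # n = (z_{1-a/2} + z_power)^2 / (P(1-P) * beta1^2), beta1 = log OR per SD,
      # inflated by 1 / (1 - R^2) when the covariate is correlated with others.
      import math
      from scipy.stats import norm

      def n_logistic(p_event, log_or_per_sd, alpha=0.05, power=0.80, r_sq=0.0):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          n = z ** 2 / (p_event * (1 - p_event) * log_or_per_sd ** 2)
          return math.ceil(n / (1 - r_sq))      # variance inflation factor

      print(n_logistic(p_event=0.3, log_or_per_sd=math.log(1.5)))        # simple
      print(n_logistic(0.3, math.log(1.5), r_sq=0.2))                    # multiple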

  17. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley.At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance.For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class. 
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted.Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent.For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. 
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class.Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.

  18. Puma (Puma concolor) epididymal sperm morphometry

    PubMed Central

    Cucho, Hernán; Alarcón, Virgilio; Ordóñez, César; Ampuero, Enrique; Meza, Aydee; Soler, Carles

    2016-01-01

    The Andean puma (Puma concolor) has not been widely studied, particularly in reference to its semen characteristics. The aim of the present study was to define the morphometry of puma sperm heads and classify their subpopulations by cluster analysis. Samples were recovered postmortem from two epididymides from one animal and prepared for morphological observation after staining with the Hemacolor kit. Morphometric data were obtained from 581 spermatozoa using a CASA-Morph system, rendering 13 morphometric parameters. The principal component (PC) analysis was performed followed by cluster analysis for the establishment of subpopulations. Two PC components were obtained, the first related to size and the second to shape. Three subpopulations were observed, corresponding to elongated and intermediate-size sperm heads and acrosomes, to large heads with large acrosomes, and to small heads with short acrosomes. In conclusion, puma spermatozoa showed no uniform sperm morphology but three clear subpopulations. These results should be used for future work in the establishment of an adequate germplasm bank of this species. PMID:27678466
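
    A minimal sketch of the subpopulation workflow (principal components followed by clustering), run on simulated morphometry rather than CASA-Morph output; the scikit-learn calls are assumptions about tooling, not the authors' software.

      # PCA then k-means on simulated 581 x 13 morphometric data, mirroring the
      # two-component, three-subpopulation analysis described above.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(7)
      X = rng.normal(size=(581, 13))            # 581 sperm x 13 invented parameters
      X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize before PCA
      scores = PCA(n_components=2).fit_transform(X)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
      print(np.bincount(labels))                # sizes of the three subpopulations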

  19. High Resolution Size Analysis of Fetal DNA in the Urine of Pregnant Women by Paired-End Massively Parallel Sequencing

    PubMed Central

    Tsui, Nancy B. Y.; Jiang, Peiyong; Chow, Katherine C. K.; Su, Xiaoxi; Leung, Tak Y.; Sun, Hao; Chan, K. C. Allen; Chiu, Rossa W. K.; Lo, Y. M. Dennis

    2012-01-01

    Background Fetal DNA in maternal urine, if present, would be a valuable source of fetal genetic material for noninvasive prenatal diagnosis. However, the existence of fetal DNA in maternal urine has remained controversial, owing to the lack of appropriate technology to robustly detect the potentially highly degraded fetal DNA in maternal urine. Methodology We have used massively parallel paired-end sequencing to investigate cell-free DNA molecules in maternal urine. Catheterized urine samples were collected from seven pregnant women during the third trimester of pregnancy. We detected fetal DNA by identifying sequenced reads that contained fetal-specific alleles of single nucleotide polymorphisms. The sizes of individual urinary DNA fragments were deduced from the alignment positions of the paired reads. We measured the fractional fetal DNA concentration as well as the size distributions of fetal and maternal DNA in maternal urine. Principal Findings Cell-free fetal DNA was detected in five of the seven maternal urine samples, with fractional fetal DNA concentrations ranging from 1.92% to 4.73%. Fetal DNA became undetectable in maternal urine after delivery. The total urinary cell-free DNA molecules were less intact than plasma DNA. Urinary fetal DNA fragments were very short, and the most dominant fetal sequences were between 29 bp and 45 bp in length. Conclusions With the use of massively parallel sequencing, we have confirmed the existence of transrenal fetal DNA in maternal urine, and have shown that urinary fetal DNA is heavily degraded. PMID:23118982
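
    Deducing fragment sizes from the alignment positions of paired reads is a one-liner per read pair with pysam; a sketch under the assumption of a coordinate-sorted BAM, where "urine.bam" is a placeholder path.

      # Fragment-size spectrum from paired-end alignments (requires pysam).
      # "urine.bam" is a hypothetical input file, not data from the study.
      import pysam
      from collections import Counter

      sizes = Counter()
      with pysam.AlignmentFile("urine.bam", "rb") as bam:
          for read in bam:
              # count each properly paired, non-duplicate fragment once (read 1)
              if read.is_proper_pair and read.is_read1 and not read.is_duplicate:
                  sizes[abs(read.template_length)] += 1

      short = sum(c for length, c in sizes.items() if 29 <= length <= 45)
      print("fraction of fragments 29-45 bp:", short / sum(sizes.values()))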

  1. Measuring sperm backflow following female orgasm: a new method

    PubMed Central

    King, Robert; Dempsey, Maria; Valentine, Katherine A.

    2016-01-01

    Background Human female orgasm remains a vexed question in the field, although there is credible evidence of cryptic female choice, bearing many hallmarks of orgasm, in other species. Our initial goal was to produce a proof of concept allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, oxytocin-mediated sperm retention mechanisms appear to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. Method A repeated-measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results The simulated sperm flowback was measured using a technique suitable for a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001; Cohen's d=3.97, effect size r=0.89, indicating a large effect. Conclusions This method could allow females to test, in a home setting with minimal training, an aspect of sexual response that has been linked to lowered fertility. It needs to be replicated with a larger sample size. PMID:27799082
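
    The reported comparison is a paired (repeated-measures) t-test; a minimal sketch with SciPy on made-up retention scores, including a Cohen's d for paired designs.

      # Paired t-test plus paired-samples Cohen's d on invented values;
      # these are not the study's measurements.
      import numpy as np
      from scipy.stats import ttest_rel

      orgasm    = np.array([4.2, 3.9, 4.1, 4.3, 3.8, 4.2])
      no_orgasm = np.array([3.4, 3.1, 3.5, 3.3, 2.9, 3.6])

      t, p = ttest_rel(orgasm, no_orgasm)
      diff = orgasm - no_orgasm
      d = diff.mean() / diff.std(ddof=1)       # Cohen's d for paired samples
      print(f"t({len(diff) - 1})={t:.2f}, p={p:.4f}, d={d:.2f}")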

  2. Association of the Catechol-O-Methyltransferase (COMT) Val158Met Polymorphism and Anxiety-Related Traits: A Meta-Analysis

    PubMed Central

    Lee, Lewina O.; Prescott, Carol A.

    2014-01-01

    Objectives The main goals of this study were: (i) to examine genotypic association of the COMT val158met polymorphism with anxiety-related traits via a meta-analysis; (ii) to examine sex and ethnicity as moderators of the association; and (iii) to evaluate whether the association differed by particular anxiety traits. Methods Association studies of the COMT val158met polymorphism and anxiety traits were identified from the PubMed or PsycInfo databases, conference abstracts and listserv postings. Exclusion criteria were: (a) pediatric samples, (b) exclusively clinical samples, and (c) samples selected for a non-anxiety phenotype. Standardized mean differences in anxiety between genotypes were aggregated to produce mean effect sizes across all available samples, and for subgroups stratified by sex and ethnicity (Caucasians vs. Asians). Construct-specific analysis was conducted to evaluate the association of COMT with neuroticism, harm avoidance, and behavioral inhibition. Results Twenty-seven eligible studies (N=15,979) with available data were identified. Overall findings indicate sex-specific and ethnic-specific effects: Val homozygotes had higher neuroticism than Met homozygotes in studies of Caucasian males (mean ES=0.13, 95% CI: 0.02-0.25, p = 0.03), and higher harm avoidance in studies of Asian males (mean ES=0.43, 95% CI: 0.14-0.72, p = 0.004). No significant associations were found in women, and effect sizes were diminished when studies were aggregated across ethnicity or anxiety traits. Conclusions This meta-analysis provides evidence for sex and ethnicity differences in the association of the COMT val158met polymorphism with anxiety traits. Our findings contribute to current knowledge on the relation between prefrontal dopaminergic transmission and anxiety. PMID:24300663
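
    The aggregation step, inverse-variance pooling with a random-effects (DerSimonian-Laird) adjustment, is compact enough to sketch; the per-study effect sizes and standard errors below are invented.

      # Fixed-effect and DerSimonian-Laird random-effects pooling of SMDs.
      # The three (effect size, SE) pairs are invented for illustration.
      import numpy as np

      es = np.array([0.13, 0.43, 0.05])
      se = np.array([0.06, 0.15, 0.10])
      w = 1.0 / se**2                             # fixed-effect weights

      fixed = np.sum(w * es) / np.sum(w)
      q = np.sum(w * (es - fixed) ** 2)           # Cochran's Q
      df = len(es) - 1
      tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1.0 / (se**2 + tau2)                 # random-effects weights
      random_eff = np.sum(w_re * es) / np.sum(w_re)
      print(f"fixed={fixed:.3f}, tau^2={tau2:.3f}, random={random_eff:.3f}")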

  3. Evaluation of hygiene practices and microbiological quality of cooked meat products during slicing and handling at retail.

    PubMed

    Pérez-Rodríguez, F; Castro, R; Posada-Izquierdo, G D; Valero, A; Carrasco, E; García-Gimeno, R M; Zurera, G

    2010-10-01

    Cooked meat ready-to-eat products are known to become contaminated during slicing, which in recent years has been associated with several outbreaks. This work aimed to identify possible relations between the hygiene practices observed during slicing of cooked meat products at retail points, in small and medium-sized establishments (SMEs) and large-sized establishments (LEs), and the microbiological quality of the sliced products. To that end, a checklist was drawn up and completed by scoring handling practices during slicing in different establishments in Cordoba (Southern Spain). In addition, sliced cooked meats were analyzed for different microbiological indicators and investigated for the presence of Listeria spp. and Listeria monocytogenes. Results indicated that SMEs showed more deficient handling practices than LEs. In spite of these differences, microbiological counts indicated similar microbiological quality in cooked meat samples from both types of establishments. Listeria monocytogenes and Listeria innocua were isolated from 7.35% (5/68) and 8.82% (6/68) of analyzed samples, respectively. Positive samples for Listeria spp. were found in establishments showing acceptable hygiene levels, though contamination could be associated with the lack of dedicated slicers at retail points. The presence of Listeria spp. could not be statistically linked to any microbiological parameter; however, seasonality significantly influenced (P<0.05) the presence of L. monocytogenes, with all positive samples found during the warm season (5/5). In conclusion, the results suggest that more effort should be made to adequately educate handlers in food hygiene practices, focusing especially on SMEs. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  4. Sampling the structure and chemical order in assemblies of ferromagnetic nanoparticles by nuclear magnetic resonance

    PubMed Central

    Liu, Yuefeng; Luo, Jingjie; Shin, Yooleemi; Moldovan, Simona; Ersen, Ovidiu; Hébraud, Anne; Schlatter, Guy; Pham-Huu, Cuong; Meny, Christian

    2016-01-01

    Assemblies of nanoparticles are studied in many research fields from physics to medicine. However, as it is often difficult to produce mono-dispersed particles, investigating the key parameters enhancing their efficiency is blurred by wide size distributions. Indeed, near-field methods analyse a part of the sample that might not be representative of the full size distribution and macroscopic methods give average information including all particle sizes. Here, we introduce temperature differential ferromagnetic nuclear resonance spectra that allow sampling the crystallographic structure, the chemical composition and the chemical order of non-interacting ferromagnetic nanoparticles for specific size ranges within their size distribution. The method is applied to cobalt nanoparticles for catalysis and allows extracting the size effect from the crystallographic structure effect on their catalytic activity. It also allows sampling of the chemical composition and chemical order within the size distribution of alloyed nanoparticles and can thus be useful in many research fields. PMID:27156575

  5. In vitro toxicity of particulate matter (PM) collected at different sites in the Netherlands is associated with PM composition, size fraction and oxidative potential - the RAPTES project

    PubMed Central

    2011-01-01

    Background Ambient particulate matter (PM) exposure is associated with respiratory and cardiovascular morbidity and mortality. To what extent such effects are different for PM obtained from different sources or locations is still unclear. This study investigated the in vitro toxicity of ambient PM collected at different sites in the Netherlands in relation to PM composition and oxidative potential. Method PM was sampled at eight sites: three traffic sites, an underground train station, as well as a harbor, farm, steelworks, and urban background location. Coarse (2.5-10 μm), fine (< 2.5 μm) and quasi ultrafine PM (qUF; < 0.18 μm) were sampled at each site. Murine macrophages (RAW 264.7 cells) were exposed to increasing concentrations of PM from these sites (6.25-12.5-25-50-100 μg/ml; corresponding to 3.68-58.8 μg/cm2). Following overnight incubation, MTT-reduction activity (a measure of metabolic activity) and the release of pro-inflammatory markers (Tumor Necrosis Factor-alpha, TNF-α; Interleukin-6, IL-6; Macrophage Inflammatory Protein-2, MIP-2) were measured. The oxidative potential and the endotoxin content of each PM sample were determined in a DTT- and LAL-assay respectively. Multiple linear regression was used to assess the relationship between the cellular responses and PM characteristics: concentration, site, size fraction, oxidative potential and endotoxin content. Results Most PM samples induced a concentration-dependent decrease in MTT-reduction activity and an increase in pro-inflammatory markers with the exception of the urban background and stop & go traffic samples. Fine and qUF samples of traffic locations, characterized by a high concentration of elemental and organic carbon, induced the highest pro-inflammatory activity. The pro-inflammatory response to coarse samples was associated with the endotoxin level, which was found to increase dramatically during a three-day sample concentration procedure in the laboratory. The underground samples, characterized by a high content of transition metals, showed the largest decrease in MTT-reduction activity. PM size fraction was not related to MTT-reduction activity, whereas there was a statistically significant difference in pro-inflammatory activity between Fine and qUF PM. Furthermore, there was a statistically significant negative association between PM oxidative potential and MTT-reduction activity. Conclusion The response of RAW264.7 cells to ambient PM was markedly different using samples collected at various sites in the Netherlands that differed in their local PM emission sources. Our results are in support of other investigations showing that the chemical composition as well as oxidative potential are determinants of PM induced toxicity in vitro. PMID:21888644
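
    A minimal sketch of the regression step named in the Results: ordinary least squares of one response on a few PM characteristics, with invented values standing in for the RAW 264.7 measurements.

      # OLS of a cytokine response on PM characteristics (invented data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 120
      conc = rng.choice([6.25, 12.5, 25.0, 50.0, 100.0], size=n)  # ug/ml doses
      oxidative = rng.normal(1.0, 0.3, n)            # DTT-assay potential (assumed)
      log_endo = np.log(rng.lognormal(0.0, 1.0, n))  # LAL endotoxin (assumed)
      tnf = (5.0 + 0.4 * conc + 30.0 * oxidative + 2.0 * log_endo
             + rng.normal(0.0, 10.0, n))

      X = sm.add_constant(np.column_stack([conc, oxidative, log_endo]))
      print(sm.OLS(tnf, X).fit().params)             # fitted coefficients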

  6. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    NASA Astrophysics Data System (ADS)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying sizes was investigated. Square samples with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples were analyzed in relation to their sizes and the impact energy. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
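
    Fick's second law for a plane sheet has a closed-form uptake series that can be evaluated directly; a minimal sketch, assuming an illustrative diffusion coefficient and thickness rather than the paper's fitted values.

      # Fickian moisture uptake M(t)/M_inf for a sheet of thickness h exposed on
      # both faces; D and h below are placeholders, not measured values.
      import math

      def fickian_uptake(t, D, h, terms=50):
          s = 0.0
          for n in range(terms):
              k = (2 * n + 1) ** 2 * math.pi ** 2
              s += (8.0 / k) * math.exp(-D * k * t / h ** 2)
          return 1.0 - s

      D, h = 1e-7, 0.4        # mm^2/s and mm, illustrative only
      for t in (0, 3600, 86400, 604800):
          print(f"t={t:>7d} s  M/M_inf={fickian_uptake(t, D, h):.3f}")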

  7. Volatiles in interplanetary dust particles: A comparison with CI and CM chondrites

    NASA Technical Reports Server (NTRS)

    Bustin, Roberta

    1992-01-01

    In an effort to classify and determine the origin of interplanetary dust particles (IDPs), 14 of these particles were studied using a laser microprobe/mass spectrometer. The mass spectra for these particles varied dramatically. Some particles released hydroxide or water which probably originated in hydroxide-bearing minerals or hydrates. Others produced spectra which included a number of hydrocarbons and resembled meteorite spectra. However, none of the individual IDPs gave spectra which could be matched identically with a particular meteorite type such as a CI or CM carbonaceous chondrite. We believe this was due to the fact that 10-20 micron size IDPs are too small to be representative of the parent body. To verify that the diversity was due primarily to the small particle sizes, small grains of approximately the same size range as the IDPs were obtained from two primitive meteorites, Murchison and Orgueil, and these small meteorite particles were treated exactly like the IDPs. Considerable diversity was observed among individual grains, but a composite spectrum of all the grains from one meteorite closely resembled the spectrum obtained from a much larger sample of that meteorite. A composite spectrum of the 14 IDPs also resembled the spectra of the CM and CI meteorites, pointing to a possible link between IDPs and carbonaceous chondrites. This also illustrates that despite the inherent diversity in particles as small as 10-20 micron, conclusions can be drawn about the possible origin and overall composition of such particles by looking not only at results from individual particles but also by including many particles in a study and basing conclusions on some kind of composite data.

  8. Is postural tremor size controlled by interstitial potassium concentration in muscle?

    PubMed Central

    Lakie, M; Hayes, N; Combes, N; Langford, N

    2004-01-01

    Objectives: To determine whether factors associated with postural tremor operate by altering muscle interstitial K+. Methods: An experimental approach was used to investigate the effects of procedures designed to increase or decrease interstitial K+. Postural physiological tremor was measured by conventional means. Brief periods of ischaemic muscle activity were used to increase muscle interstitial K+. Infusion of the β2 agonist terbutaline was used to decrease plasma (and interstitial) K+. Blood samples were taken for the determination of plasma K+. Results: Ischaemia rapidly reduced tremor size, but only when the muscle was active. The β2 agonist produced a slow and progressive rise in tremor size that was almost exactly mirrored by a slow and progressive decrease in plasma K+. Conclusions: Ischaemic reduction of postural tremor has been attributed to effects on muscle spindles or an unexplained effect on muscle. This study showed that ischaemia did not reduce tremor size unless there was accompanying muscular activity. An accumulation of K+ in the interstitium of the ischaemic active muscle may blunt the response of the muscle and reduce its fusion frequency, so that the force output becomes less pulsatile and tremor size decreases. When a β2 agonist is infused, the rise in tremor mirrors the resultant decrease in plasma K+. Decreased plasma K+ reduces interstitial K+ concentration and may produce greater muscular force fluctuation (more tremor). Many other factors that affect postural tremor size may exert their effect by altering plasma K+ concentration, thereby changing the concentration of K+ in the interstitial fluid. PMID:15201362

  9. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
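
    These formulae generalize the familiar parallel cluster-trial calculation, in which an individually randomized sample size is inflated by the design effect 1 + (m - 1)*ICC; the baseline case is sketched below with illustrative numbers (the stepped wedge formulae additionally involve the cluster and individual autocorrelations).

      # Baseline cluster-trial inflation: per-arm n for a two-sample z-test,
      # multiplied by the design effect. Inputs are illustrative.
      import math
      from scipy.stats import norm

      def n_individual(delta, sd, alpha=0.05, power=0.80):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return 2.0 * (z * sd / delta) ** 2      # per arm

      def n_cluster_trial(delta, sd, m, icc, **kw):
          deff = 1.0 + (m - 1) * icc              # design effect
          return math.ceil(n_individual(delta, sd, **kw) * deff)

      print(n_cluster_trial(delta=0.3, sd=1.0, m=20, icc=0.05))   # per arm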

  10. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    PubMed

    Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin

    2014-01-01

    A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a larger number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as a smaller number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrubland and grassland. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrubland and grassland, respectively. The increase in number of species vs. sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection in the seed bank when samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
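
    The species-area power law invoked above, S = c*A^z, is usually fitted by log-log least squares; a minimal sketch on invented (area, species) pairs, including the minimum-area extrapolation used in such studies.

      # Fit S = c * A**z by linear regression in log-log space (invented data).
      import numpy as np

      area = np.array([0.01, 0.04, 0.16, 0.64, 1.0, 4.0])    # m^2, invented
      species = np.array([2, 4, 7, 11, 13, 21])              # counts, invented

      z, log_c = np.polyfit(np.log(area), np.log(species), 1)
      c = np.exp(log_c)
      print(f"S ~ {c:.1f} * A^{z:.2f}")
      print("area needed for 30 species:", (30.0 / c) ** (1.0 / z), "m^2")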

  11. Marital quality and health: A meta-analytic review

    PubMed Central

    Robles, Theodore F.; Slatcher, Richard B.; Trombello, Joseph M.; McGinn, Meghan M.

    2013-01-01

    This meta-analysis reviewed 126 published empirical articles over the past 50 years describing associations between marital relationship quality and physical health in over 72,000 individuals. Health outcomes included clinical endpoints (objective assessments of function, disease severity, and mortality; subjective health assessments) and surrogate endpoints (biological markers that substitute for clinical endpoints, such as blood pressure). Biological mediators included cardiovascular reactivity and hypothalamic-pituitary-adrenal axis activity. Greater marital quality was related to better health, with mean effect sizes from r = .07 to .21, including lower risk of mortality, r = .11, and lower cardiovascular reactivity during marital conflict, r = −.13, but not daily cortisol slopes or cortisol reactivity during conflict. The small effect sizes were similar in magnitude to previously found associations between health behaviors (e.g., diet) and health outcomes. Effect sizes for a small subset of clinical outcomes were susceptible to publication bias. In some studies, effect sizes remained significant after accounting for confounds such as age and socioeconomic status. Studies with a higher proportion of women in the sample demonstrated larger effect sizes, but we found little evidence for gender differences in studies that explicitly tested gender moderation, with the exception of surrogate endpoint studies. Our conclusions are limited by small numbers of studies for specific health outcomes, unexplained heterogeneity, and designs that limit causal inferences. These findings highlight the need to explicitly test affective, health behavior, and biological mechanisms in future research, and focus on moderating factors that may alter the relationship between marital quality and health. PMID:23527470

  12. Single and simultaneous binary mergers in Wright-Fisher genealogies.

    PubMed

    Melfi, Andrew; Viswanath, Divakar

    2018-05-01

    The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ϵ), ϵ>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. A single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ϵ), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.
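
    The single-generation claim is easy to probe numerically: draw a sample of size n from a Wright-Fisher population of size N, assign parents uniformly, and classify the resulting mergers. The sketch below is a one-generation check, not the paper's full genealogical argument.

      # Classify mergers in one Wright-Fisher generation for sample size n.
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(5)

      def merger_profile(N, n, reps=20_000):
          profile = Counter()
          for _ in range(reps):
              parents = rng.integers(0, N, size=n)      # uniform WF parents
              sizes = Counter(parents).values()
              binary = sum(1 for s in sizes if s == 2)
              triple = any(s >= 3 for s in sizes)
              profile["triple+" if triple else
                      "simultaneous binary" if binary > 1 else
                      "single binary" if binary == 1 else "none"] += 1
          return profile

      N = 100_000
      print(merger_profile(N, n=round(N ** (1 / 3))))   # ~N^(1/3): none/single
      print(merger_profile(N, n=round(N ** 0.5)))       # ~N^(1/2): some simult.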

  13. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
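
    A comparison along these lines is straightforward to reproduce by simulation. The sketch below estimates Type I error rate and power for Pearson's chi-square versus Fisher's Exact Test on simulated 2×2 tables; the group sizes, proportions and trial counts are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    rng = np.random.default_rng(1)

    def rejection_rates(p1, p2, n_per_group=20, trials=2000, alpha=0.05):
        """Simulate 2x2 tables from two binomial groups and estimate how
        often Pearson's chi-square and Fisher's Exact Test reject at alpha."""
        chi_rej = fisher_rej = 0
        for _ in range(trials):
            a = rng.binomial(n_per_group, p1)
            c = rng.binomial(n_per_group, p2)
            table = [[a, n_per_group - a], [c, n_per_group - c]]
            try:  # chi-square is undefined when a margin is all zero
                _, p_chi, _, _ = chi2_contingency(table, correction=False)
            except ValueError:
                p_chi = 1.0
            chi_rej += p_chi < alpha
            fisher_rej += fisher_exact(table)[1] < alpha
        return chi_rej / trials, fisher_rej / trials

    print("Type I error:", rejection_rates(0.3, 0.3))  # null is true
    print("Power:       ", rejection_rates(0.2, 0.6))  # moderate effect
    ```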

  14. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    45 CFR Part 1356, Appendix C (Public Welfare Regulations Relating to Public Welfare; requirements applicable to Title IV-E): Calculating Sample Size for NYTD Follow-Up Populations.

  15. Synthesizing Information From Language Samples and Standardized Tests in School-Age Bilingual Assessment

    PubMed Central

    Pham, Giang

    2017-01-01

    Purpose Although language samples and standardized tests are regularly used in assessment, few studies provide clinical guidance on how to synthesize information from these testing tools. This study extends previous work on the relations between tests and language samples to a new population—school-age bilingual speakers with primary language impairment—and considers the clinical implications for bilingual assessment. Method Fifty-one bilingual children with primary language impairment completed narrative language samples and standardized language tests in English and Spanish. Children were separated into younger (ages 5;6 [years;months]–8;11) and older (ages 9;0–11;2) groups. Analysis included correlations with age and partial correlations between language sample measures and test scores in each language. Results Within the younger group, positive correlations with large effect sizes indicated convergence between test scores and microstructural language sample measures in both Spanish and English. There were minimal correlations in the older group for either language. Age related to English but not Spanish measures. Conclusions Tests and language samples complement each other in assessment. Wordless picture-book narratives may be more appropriate for ages 5–8 than for older children. We discuss clinical implications, including a case example of a bilingual child with primary language impairment, to illustrate how to synthesize information from these tools in assessment. PMID:28055056

  16. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
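
    The calculations the authors point to are typically the standard design-effect adjustment for clustering. A minimal sketch, assuming the usual DEFF = 1 + (m - 1)·ICC formula with illustrative values (the group size and ICC below are not the program's figures):

    ```python
    import math

    def clusters_needed(n_individual, cluster_size, icc):
        """Inflate an individually randomized sample size by the design effect
        DEFF = 1 + (m - 1) * ICC, then convert to whole clusters."""
        deff = 1 + (cluster_size - 1) * icc
        return math.ceil(n_individual * deff / cluster_size)

    # Hypothetical: 128 participants per arm if individually randomized,
    # groups of 10, ICC = 0.05 -> DEFF = 1.45 -> 19 clusters per arm.
    print(clusters_needed(128, 10, 0.05))
    ```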

  17. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
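
    The trade-off can be seen by computing expected sample size as a function of the true effect for a simple single-arm two-stage design with a futility stop, then comparing designs by their maximum. A sketch under assumed normal outcomes with unit variance (stage sizes and boundaries are illustrative, not the paper's optimized designs):

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_n(theta, n1, n2, c1):
        """Expected sample size of a single-arm two-stage design with normal
        outcomes (unit variance): stop for futility after stage 1 if the
        stage-1 z-statistic falls below c1, else enroll n2 more."""
        p_continue = 1 - norm.cdf(c1 - theta * np.sqrt(n1))
        return n1 + p_continue * n2

    thetas = np.linspace(-0.5, 1.0, 61)
    designs = {"design A": (20, 40, 0.0), "design B": (30, 25, 0.2)}
    for label, (n1, n2, c1) in designs.items():
        curve = [expected_n(t, n1, n2, c1) for t in thetas]
        print(f"{label}: E[N] at null = {expected_n(0, n1, n2, c1):.1f}, "
              f"max E[N] = {max(curve):.1f}")
    ```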

  18. A lack of consistent evidence for cortisol dysregulation in premenstrual syndrome/premenstrual dysphoric disorder.

    PubMed

    Kiesner, Jeff; Granger, Douglas A

    2016-03-01

    Although decades of research have examined the association between cortisol regulation and premenstrual syndrome/premenstrual dysphoric disorder (PMS/PMDD), no review exists to provide a general set of conclusions from the extant research. In the present review we summarize and interpret research that has tested for associations between PMS/PMDD and cortisol levels and reactivity (n=38 original research articles). Three types of studies are examined: correlational studies, environmental-challenge studies, and pharmacological-challenge studies. Overall, there was very little evidence that women with and without PMS/PMDD demonstrate systematic and predictable mean-level differences in cortisol, or differences in cortisol response/reactivity to challenges. Methodological differences in sample size, the types of symptoms used for diagnosis (physical and psychological vs. only affective), or the type of cortisol measure used (serum vs. salivary) did not account for differences between studies that did and did not find significant effects. Caution is recommended before accepting the conclusion of null effects, and recommendations are made that more rigorous research be conducted, considering symptom-specificity, within-person analyses, and multiple parameters of cortisol regulation, before final conclusions are drawn. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. [Characteristics and its forming mechanism on grain size distribution of suspended matter at Changjiang Estuary].

    PubMed

    Pang, Chong-guang; Yu, Wei; Yang, Yang

    2010-03-01

    In July 2008, under natural seawater conditions, laser in-situ scattering and transmissometry (LISST-100X Type C) was used to measure the grain size distribution spectrum and volume concentration of total suspended matter, including flocs, at different layers of 24 sampling stations at the Changjiang Estuary and its adjacent sea. The characteristics and forming mechanism of the grain size distribution of total suspended matter were analyzed based on the LISST-100X Type C observations, combined with the temperature, salinity and turbidity of the sea water observed simultaneously by an Alec AAQ1183. The observations showed that the average median grain size of total suspended matter was about 4.69 phi over the whole measured sea area, and that the grain size distribution was relatively poorly sorted, of wide kurtosis, and basically symmetrical. Vertically averaged volume concentration decreased with distance from the coastline, while median grain size tended to increase with distance; for example, at the 31.0 degrees N section, the depth-averaged median grain size increased from 11 µm up to 60 µm. With increasing distance from the coast, the concentration of fine suspended sediment decreased distinctly, while relatively large organic matter and large flocs appeared in quantity, so the grain size rose. The effective density ranged from 246 kg/m3 to 1334 kg/m3, with an average of 613 kg/m3. When the concentration of total suspended matter was relatively high, the median grain size of total suspended matter increased with water depth while effective density decreased with depth, because large flocs have faster settling velocities and lower effective densities than small flocs. For stations 37 and 44, the correlation coefficients between effective density and median grain size were larger than 0.9.

  20. Estimating population size with correlated sampling unit estimates

    Treesearch

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  1. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
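
    Under a simple normal-theory reading of this idea (an assumption here; the paper's own derivation may differ in detail), the sample mean satisfies X-bar ~ N(mu, sigma^2/n), so P(|X-bar - mu| <= k*sigma) = 2*Phi(k*sqrt(n)) - 1:

    ```python
    from math import sqrt
    from scipy.stats import norm

    def prob_within(k, n):
        """P(|sample mean - true mean| <= k * sigma) for a normal population
        with known sigma, since X-bar ~ N(mu, sigma**2 / n)."""
        return 2 * norm.cdf(k * sqrt(n)) - 1

    # Probability the mean of a very small sample lands within 0.5 SD:
    for n in (3, 5, 10):
        print(n, round(prob_within(0.5, n), 3))  # 0.614, 0.736, 0.886
    ```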

  2. Seroprevalence of human parvovirus B19 in healthy blood donors

    PubMed Central

    Kumar, Satish; Gupta, R.M.; Sen, Sourav; Sarkar, R.S.; Philip, J.; Kotwal, Atul; Sumathi, S.H.

    2013-01-01

    Background Human parvovirus B19 is an emerging transfusion-transmitted infection. Although parvovirus B19 infection is associated with severe complications in some recipients, donor screening is not yet mandatory. To reduce the risk of contamination, plasma-pool screening and exclusion of highly viraemic donations are recommended. In this study the prevalence of parvovirus B19 in healthy blood donors was detected by ELISA. Methods A total of 1633 samples were screened for IgM and IgG antibodies against parvovirus B19 by ELISA. The initial 540 samples were screened for both IgM and IgG class antibodies and the remaining 1093 samples were screened for only IgM class antibodies. Results Net prevalence of IgM antibodies to human parvovirus B19 in our study was 7.53% and prevalence of IgG antibodies was 27.96%. Dual positivity (IgG and IgM) was 2.40%. Conclusion The seroprevalence of human parvovirus B19 among the blood donor population in our study is high, and poses an adverse transfusion risk especially in high-risk groups of patients who have no detectable antibodies to B19. Studies with larger sample sizes are needed to validate these results. PMID:24600121

  3. Evaluation of HPV DNA positivity in colorectal cancer patients in Kerman, Southeast Iran

    PubMed

    Malekpour Afshar, Reza; Deldar, Zeinab; Mollaei, Hamid Reza; Arabzadeh, Seyed Alimohammad; Iranpour, Maryam

    2018-01-27

    Background: The HPV virus is known to be oncogenic and associations with many cancers have been proven. Although many studies have been conducted on a possible relationship with colorectal cancer (CRC), a definitive role for the virus has yet to be identified. Method: In this cross-sectional study, the frequency of HPV positivity in CRC samples in Kerman was assessed in 84 cases with a mean age of 47.7 ± 12.5 years over two years. Qualitative real-time PCR was performed using general primers for the L1 region of HPV DNA. Results: Of 84 CRC samples, 19 (22.6%) proved positive for HPV DNA. Genotyping of positive samples showed all of these to be of high-risk HPV type. Prevalence of HPV infection appears to depend on geographic region, lifestyle, diet and other factors. Conclusion: In our region the frequency of CRC is low, and this limited the sample size for evaluation of HPV DNA. The most prevalent types were HPV types 51 and 56. While HPV infection may play an important role in colorectal carcinogenesis, this needs to be assessed in future studies. Creative Commons Attribution License

  4. Vitamin C supplementation and the common cold--was Linus Pauling right or wrong?

    PubMed

    Hemilä, H

    1997-01-01

    In 1970 Linus Pauling claimed that vitamin C prevents and alleviates the episodes of the common cold. Pauling was correct in concluding from trials published up till then, that in general vitamin C does have biological effects on the common cold, but he was rather over-optimistic as regards the size of benefit. His quantitative conclusions were based on a single placebo-controlled trial on schoolchildren in a skiing camp in the Swiss Alps, in which a significant decrease in common cold incidence and duration in the group administered 1 g/day of vitamin C was found. As children in a skiing camp are not a representative sample of the general population, Pauling's extrapolation to the population at large was too bold, erring as to the magnitude of the effect. Nevertheless, Pauling's general conclusion that vitamin C has physiological effects on the common cold is of major importance as it conflicts with the prevailing consensus that the only physiological effect of vitamin C on human beings is to prevent scurvy.

  5. Lack of HPV in Benign and Malignant Epithelial Ovarian Tumors in Iran

    PubMed

    Farzaneh, Farah; Nadji, Seyed Alireza; Khosravi, Donya; Hosseini, Maryam Sadat; Hashemi Bahremani, Mohammad; Chehrazi, Mohammad; Bagheri, Ghazal; Sigaroodi, Afsaneh; Haghighatian, Zahra

    2017-05-01

    Background: Ovarian epithelial tumors are among the most common gynecological neoplasms; here we evaluated the presence of HPV in benign and malignant examples. Methods: In this cross-sectional study the records of 105 patients with epithelial ovarian tumors (benign and malignant) referred to Imam Hossein University Hospital from 2012 to 2015 were evaluated, along with assessment of the presence of HPV infection using PCR. Results: Among 105 patients, comprising 26 (24.8%) with malignant and 79 (75.2%) with benign lesions, the factors found to influence malignancy were age at diagnosis, age at first pregnancy, number of pregnancies and hormonal status. However, malignancy was not related to abortion, late menopause, or early menarche. In none of the ovarian tissues (benign or malignant) was HPV DNA found. Conclusion: In this study HPV DNA could not be found in any of the epithelial ovarian tumors (benign and malignant) removed from 105 women; more studies with larger sample sizes are needed for a definite conclusion. Creative Commons Attribution License

  6. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    PubMed

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

    Although the hippocampus has traditionally been thought to be exclusively involved in long-term memory, recent studies have raised competing explanations for why hippocampal activity emerges during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole-brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below the capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
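
    Cowan's K, referenced above, estimates the number of items held in VSTM from single-probe change-detection accuracy as K = set size × (hit rate - false-alarm rate); a one-function sketch:

    ```python
    def cowans_k(set_size, hit_rate, false_alarm_rate):
        """Cowan's K for single-probe change detection: the estimated number
        of items held in visual short-term memory."""
        return set_size * (hit_rate - false_alarm_rate)

    # E.g., set size 6 with 85% hits and 15% false alarms -> K = 4.2 items.
    print(cowans_k(6, 0.85, 0.15))
    ```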

  7. Molecular Subtypes of Indonesian Breast Carcinomas - Lack of Association with Patient Age and Tumor Size

    PubMed Central

    Rahmawati, Yeni; Setyawati, Yunita; Widodo, Irianiwati; Ghozali, Ahmad; Purnomosari, Dewajani

    2018-01-01

    Objective: Breast carcinoma (BC) is a heterogeneous disease that exhibits variation in biological behaviour, prognosis and response to therapy. Molecular classification is generally into Luminal A, Luminal B, HER2+ and triple-negative/basal-like, depending on receptor characteristics. Clinical factors that determine BC prognosis include age and tumor size. Since information on the molecular subtypes of Indonesian BCs is limited, the present study was conducted, with attention to subtypes in relation to age and tumor size. Methods: A retrospective cross-sectional study of 247 paraffin-embedded samples of invasive BC from Dr. Sardjito General Hospital Yogyakarta in the years 2012-2015 was performed. Immunohistochemical staining using anti-ER, PR, HER2, Ki-67 and CK 5/6 antibodies was applied to classify molecular subtypes. Associations with age and tumor size were analyzed using the chi-square test. Results: Luminal A was the most common subtype of Indonesian BC (41.3%), followed by triple-negative (25.5%), HER2 (19.4%) and Luminal B (13.8%). Among the triple-negative lesions, the basal-like subtype was more frequent than the non-basal-like (58.8% vs 41.2%). Luminal B accounted for the highest percentage of younger cases (< 40 years old) while HER2+ was most common in older patients (> 50 years old). Triple-negative/basal-like lesions were commonly large in size. Age (p = 0.080) and tumor size (p = 0.462) were not significantly associated with molecular subtype of BC. Conclusion: The most common molecular subtype of Indonesian BC is Luminal A, followed by triple-negative, HER2+ and Luminal B. The majority of triple-negative lesions are basal-like. There is no association between age or tumor size and the molecular subtypes of Indonesian BCs. PMID:29373908

  8. Relationships Between Body Size Satisfaction and Weight Control Practices Among US Adults

    PubMed Central

    Millstein, Rachel A.; Carlson, Susan A.; Fulton, Janet E.; Galuska, Deborah A.; Zhang, Jian; Blanck, Heidi M.; Ainsworth, Barbara E.

    2008-01-01

    Context Few studies of US adults have specifically examined body size satisfaction. Objectives To describe correlates of body size satisfaction and examine whether satisfaction was associated with trying to lose weight or specific weight control practices among US adults, using a national sample of women and men. Design, Setting & Participants The National Physical Activity and Weight Loss Survey (NPAWLS) was a population-based, cross-sectional telephone survey of US adults (n = 9740). Main Outcome Measures Participants reported their weight, height, body size satisfaction, and weight loss practices. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for each dependent variable. Results Among women and men, higher body mass index (BMI) was significantly associated with body size dissatisfaction. Dissatisfaction, compared with being very satisfied, was positively associated with trying to lose weight among women and men. This association was modified by BMI for women (OR normal weight = 19.69, overweight = 8.79, obese = 4.05; P < .01 for interaction) but not men (OR normal weight = 8.72, overweight = 10.50, obese = 7.86; P = .93 for interaction). Compared with women who were very satisfied, dissatisfied women used diet more (OR = 2.03), but not physical activity/exercise (OR = 0.55) or both strategies (OR = 0.63), to try to lose weight. Men who were somewhat satisfied, compared with those who were very satisfied, were more likely to use physical activity/exercise (OR = 1.64) and both diet and physical activity/exercise (OR = 1.54) to try to lose weight. Conclusion These findings highlight sex differences in body size satisfaction and actions taken to try to lose weight, and the importance of considering body size satisfaction when designing weight-management programs. PMID:18596944

  9. Splenic release of platelets contributes to increased circulating platelet size and inflammation after myocardial infarction.

    PubMed

    Gao, Xiao-Ming; Moore, Xiao-Lei; Liu, Yang; Wang, Xin-Yu; Han, Li-Ping; Su, Yidan; Tsai, Alan; Xu, Qi; Zhang, Ming; Lambert, Gavin W; Kiriazis, Helen; Gao, Wei; Dart, Anthony M; Du, Xiao-Jun

    2016-07-01

    Acute myocardial infarction (AMI) is characterized by a rapid increase in circulating platelet size, but the mechanism for this is unclear. Large platelets are hyperactive and associated with adverse clinical outcomes. We determined mean platelet volume (MPV) and platelet-monocyte conjugation (PMC) using blood samples from patients, and blood and the spleen from mice with AMI. We further measured changes in platelet size, PMC, cardiac and splenic contents of platelets and leucocyte infiltration into the mouse heart. In AMI patients, circulating MPV and PMC increased at 1-3 h post-MI and MPV returned to reference levels within 24 h after admission. In mice with MI, increases in platelet size and PMC became evident within 12 h and were sustained up to 72 h. Splenic platelets are bigger than circulating platelets in normal or infarct mice. At 24 h post-MI, splenic platelet storage was halved whereas cardiac platelets increased by 4-fold. Splenectomy attenuated all changes observed in the blood, reduced leucocyte and platelet accumulation in the infarct myocardium, limited infarct size and alleviated cardiac dilatation and dysfunction. AMI induced elevated circulating levels of adenosine diphosphate and catecholamines in both humans and mice, which may trigger splenic platelet release. Pharmacological inhibition of angiotensin-converting enzyme, β1-adrenergic receptor or platelet P2Y12 receptor reduced platelet abundance in the murine infarct myocardium, albeit having diverse effects on platelet size and PMC. In conclusion, AMI evokes release of splenic platelets, which contributes to the increase in platelet size and PMC and facilitates myocardial accumulation of platelets and leucocytes, thereby promoting post-infarct inflammation. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  10. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  11. Influence of pore size distributions on decomposition of maize leaf residue: evidence from X-ray computed micro-tomography

    NASA Astrophysics Data System (ADS)

    Negassa, Wakene; Guber, Andrey; Kravchenko, Alexandra; Rivers, Mark

    2014-05-01

    Soil's potential to sequester carbon (C) depends not only on the quality and quantity of organic inputs to soil but also on the residence time of the applied organic inputs within the soil. Soil pore structure is one of the main factors that influence the residence time of soil organic matter, by controlling gas exchange, soil moisture and microbial activities, and thereby soil C sequestration capacity. Previous attempts to investigate the fate of organic inputs added to soil did not allow examining their decomposition in situ, a drawback that can now be remedied by application of X-ray computed micro-tomography (µ-CT). The non-destructive and non-invasive nature of µ-CT gives an opportunity to investigate the effect of soil pore size distributions on decomposition of plant residues at a new quantitative level. The objective of this study is to examine the influence of pore size distributions on the decomposition of plant residue added to soil. Samples with contrasting pore size distributions were created using aggregate fractions of five different sizes (<0.05, 0.05-0.1, 0.1-0.5, 0.5-1.0 and 1.0-2.0 mm). Weighted average pore diameters ranged from 10 µm (<0.05 mm fraction) to 104 µm (1-2 mm fraction), while maximum pore diameters ranged from 29 µm (<0.05 mm fraction) to 568 µm (1-2 mm fraction) in the created soil samples. Dried pieces of maize leaves 2.5 mg in size (equivalent to 1.71 mg C g-1 soil) were added to half of the studied samples. Samples with and without maize leaves were incubated for 120 days. CO2 emission from the samples was measured at regular time intervals. In order to ensure that the observed differences were due to differences in pore structure and not to differences in inherent properties of the studied aggregate fractions, we repeated the whole experiment using soil from the same aggregate size fractions but ground to <0.05 mm size. Five to six replicated samples were used for intact and ground samples of all sizes, with and without leaves. Two replications of the intact aggregate fractions of all sizes with leaves were subjected to µ-CT scanning before and after incubation, whereas all the remaining replications of both intact and ground aggregate fractions of <0.05, 0.05-0.1, and 1.0-2.0 mm sizes with leaves were scanned with µ-CT after the incubation. The µ-CT images showed that approximately 80% of the leaves in the intact samples of large aggregate fractions (0.5-1.0 and 1.0-2.0 mm) were decomposed during the incubation, while only 50-60% of the leaves were decomposed in the intact samples of smaller-sized fractions. An even lower percentage of leaves (40-50%) was decomposed in the ground samples, with very similar leaf decomposition observed in all ground samples regardless of the aggregate fraction size. Consistent with the µ-CT results, the proportion of decomposed leaf estimated with the conventional mass-loss method was 48% and 60% for the <0.05 mm and 1.0-2.0 mm size fractions of intact aggregates, respectively, and 40-50% in ground samples. The results of the incubation experiment demonstrated that, while greater C mineralization was observed in samples of all size fractions amended with leaf, the effect of leaf presence was most pronounced in the smaller aggregate fractions (0.05-0.1 mm and <0.05 mm) of intact aggregates. The results of the present study unequivocally demonstrate that differences in pore size distributions have a major effect on the decomposition of plant residues added to soil. Moreover, in the presence of plant residues, differences in pore size distributions appear to also influence the rates of decomposition of the intrinsic soil organic material.

  12. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
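
    A toy demonstration of the fallacy, with made-up numbers: two groups differing by a trivial 0.02 SD will usually reach "statistical significance" at n = 100,000 per group even though the effect size stays negligible:

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)

    # Two groups whose true means differ by only 0.02 SD, n = 100,000 each.
    a = rng.normal(0.00, 1, 100_000)
    b = rng.normal(0.02, 1, 100_000)

    t_stat, p = ttest_ind(a, b)
    # Cohen's d from the pooled standard deviation: expected ~0.02.
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.2e}, Cohen's d = {d:.3f}")  # tiny d, 'significant' p
    ```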

  13. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
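
    The resampling logic behind this result is easy to reproduce in outline. The sketch below builds a synthetic cohort (prevalence, rule accuracy and replication counts are assumptions, not the paper's data) and shows how the spread of sensitivity estimates narrows as the subsample grows toward 400:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic cohort of 6,000: 30% have the outcome; the prediction rule
    # fires with sensitivity ~0.75 and a 30% false-positive rate.
    N = 6_000
    outcome = rng.random(N) < 0.30
    rule = np.where(outcome, rng.random(N) < 0.75, rng.random(N) < 0.30)

    def sensitivity(idx):
        o, r = outcome[idx], rule[idx]
        return (r & o).sum() / o.sum()

    # Repeated random subsamples at each size, as in the paper's design.
    for n in (100, 200, 400, 800):
        est = [sensitivity(rng.choice(N, n, replace=False)) for _ in range(100)]
        print(f"n={n:4d}  sensitivity range {min(est):.2f}-{max(est):.2f}")
    ```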

  14. Schrodinger's scat: a critical review of the currently available tiger (Panthera tigris) and leopard (Panthera pardus) specific primers in India, and a novel leopard specific primer.

    PubMed

    Maroju, Pranay Amruth; Yadav, Sonu; Kolipakam, Vishnupriya; Singh, Shweta; Qureshi, Qamar; Jhala, Yadvendradev

    2016-02-09

    Non-invasive sampling has opened avenues for the genetic study of elusive species, which has contributed significantly to their conservation. Where the field-based identity of a non-invasive sample is ambiguous (e.g. carnivore scats), it is essential to establish the identity of the species through molecular approaches. A cost-effective procedure to ascertain species identity is to use species-specific primers (SSP) for PCR amplification and subsequent resolution through agarose gel electrophoresis. However, SSPs, if ill-designed, can cross-amplify non-target sympatric species. Herein we report the problem of cross-amplification with currently published SSPs, which have been used in several recent scientific articles on tigers (Panthera tigris) and leopards (Panthera pardus) in India. Since these papers form pioneering research on which future work will be based, an early rectification is required so as to not propagate this error further. We conclusively show cross-amplification for three of the four SSPs in sympatric non-target species: the tiger SSP amplifying leopard and striped hyena (Hyaena hyaena), and the leopard SSP amplifying tiger, lion (Panthera leo persica) and clouded leopard (Neofelis nebulosa), with the same product size. We develop and test a non-cross-amplifying leopard-specific primer pair within the mitochondrial cytochrome b region. We also standardize a duplex PCR method to screen tiger and leopard samples simultaneously in one PCR reaction to reduce cost and time. These findings suggest the importance of an often overlooked preliminary protocol of conclusive identification of species from non-invasive samples. The cross-amplification of published primers in conspecifics suggests the need to revisit inferences drawn by earlier work.

  15. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
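
    A minimal version of such a calculation for the two-sample parallel design, iterating the per-group n until the power computed from the noncentral t-distribution reaches the target (the textbook formulation, not necessarily the authors' exact formulas):

    ```python
    import numpy as np
    from scipy.stats import t, nct

    def power_two_sample(n, delta, sigma, alpha=0.05):
        """Exact power of the two-sided two-sample t-test with n per group,
        computed from the noncentral t-distribution."""
        df = 2 * n - 2
        ncp = delta / (sigma * np.sqrt(2.0 / n))  # noncentrality parameter
        tcrit = t.ppf(1 - alpha / 2, df)
        return 1 - nct.cdf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

    def sample_size(delta, sigma, power=0.80, alpha=0.05):
        n = 2
        while power_two_sample(n, delta, sigma, alpha) < power:
            n += 1
        return n

    # Detecting a half-SD difference with 80% power at alpha = 0.05:
    print(sample_size(delta=0.5, sigma=1.0))  # -> 64 per group
    ```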

  16. Skylab experiment M487 habitability/crew quarters

    NASA Technical Reports Server (NTRS)

    Johnson, C. C.

    1975-01-01

    Results of Skylab experiment M487 (habitability/crew quarters), which was designed to evaluate the habitability features of Skylab, were presented. General observations and conclusions drawn from the data obtained are presented in detail. The objectives of the experiment, the manner in which data was acquired, and the instruments used to support the experiments are described. Illustrations and photographs of the living and work areas of Skylab and some of the habitability features are provided. Samples of the subjective evaluation questionnaires used by the crewmen are included. Habitability-related documents, crewmen biographies, functional characteristics and photographs of the instruments used, and details of Skylab compartment sizes and color schemes are included as appendixes.

  17. Dynamic pathways for viral capsid assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, Michael F.; Chandler, David

    2006-02-09

    We develop a class of models with which we simulate the assembly of particles into T1 capsid-like objects using Newtonian dynamics. By simulating assembly for many different values of system parameters, we vary the forces that drive assembly. For some ranges of parameters, assembly is facile, while for others, assembly is dynamically frustrated by kinetic traps corresponding to malformed or incompletely formed capsids. Our simulations sample many independent trajectories at various capsomer concentrations, allowing for statistically meaningful conclusions. Depending on subunit (i.e., capsomer) geometries, successful assembly proceeds by several mechanisms involving binding of intermediates of various sizes. We discuss the relationship between these mechanisms and experimental evaluations of capsid assembly processes.

  18. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
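
    Holm's fixed-sample step-down procedure, the inspiration named here, is short enough to sketch (standard algorithm; the variable names are mine):

    ```python
    def holm(pvalues, alpha=0.05):
        """Holm's (1979) step-down procedure: walk through p-values from
        smallest to largest, rejecting while p <= alpha / (m - step),
        and stop at the first failure. Controls the FWER at alpha."""
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        rejected = [False] * m
        for step, i in enumerate(order):
            if pvalues[i] <= alpha / (m - step):
                rejected[i] = True
            else:
                break
        return rejected

    print(holm([0.001, 0.01, 0.04, 0.30]))  # -> [True, True, False, False]
    ```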

  19. Attitudes toward Master's and Clinical Doctorate Degrees in Physical Therapy

    PubMed Central

    Mistry, Yamini; Francis, Christian; Haldane, Jessica; Symonds, Scott; Uguccioni, Erika; Berg, Katherine

    2014-01-01

    ABSTRACT Purpose: To examine the attitudes of a self-selected sample of Canadian physical therapists toward the transition from bachelor's to master's degrees and the implementation of clinical doctorate degrees in physical therapy (PT). Methods: A cross-sectional survey was conducted using a modified Dillman tailored approach. All eligible members of the Canadian Physiotherapy Association (CPA) were invited to participate. Results: Of 1,397 Canadian physical therapists who responded to the survey, 45% favoured the transition from bachelor's to master's degrees, 21% did not, and 34% were neutral; 27% favoured a transition from a master's to a doctoral degree for entry into practice in PT, 53% did not favour this transition, and 20% were neutral. Finally, 56% favoured the implementation of a post-professional clinical doctorate (PPCD) in PT, 23% did not, and 21% were neutral. Conclusions: Overall, a self-selected sample of Canadian physical therapists supported the future implementation of a post-professional clinical doctorate degree in PT but did not support an entry-to-practice doctoral degree. However, these results must be interpreted with caution because of the study's small sample size. PMID:25922561
