The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
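The scaling described in this abstract can be illustrated with a simple normal-approximation sketch: under a Bonferroni-style correction the per-test significance level becomes α/m, and the required per-group sample size scales with (z(1−α/(2m)) + z(power))². The effect size, α and power below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: how required sample size grows with the number of tests m,
# using a two-sample normal approximation with a Bonferroni-corrected alpha.
from scipy.stats import norm

def n_per_group(m, delta=0.2, sd=1.0, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a mean difference
    `delta` (in units of `sd`) at family-wise level `alpha` over m tests."""
    z_alpha = norm.ppf(1 - alpha / (2 * m))   # Bonferroni-corrected critical value
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

for m_small, m_large in [(1, 10), (1_000_000, 10_000_000)]:
    ratio = n_per_group(m_large) / n_per_group(m_small)
    print(f"{m_small:>9,} -> {m_large:>10,} tests: "
          f"sample size increases by {100 * (ratio - 1):.0f}%")
```

With these inputs the printed ratios are roughly 70% for 1 versus 10 tests and 13% for one million versus ten million tests, in line with the figures quoted above.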
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For ease of implementation, several examples are also illustrated via user-friendly free statistical software.
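As a minimal sketch of the most common of the calculations listed above (the two-sample t test), the following uses statsmodels; the effect size, significance level and power are illustrative choices, not values from the article.

```python
# Sketch: per-group sample size for a two-sample t test (illustrative inputs).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d (assumed)
                                    alpha=0.05,
                                    power=0.80,
                                    ratio=1.0,
                                    alternative='two-sided')
print(f"~{n_per_group:.0f} subjects per group")        # about 64 per group
```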
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
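The abstract is truncated, but the Monte Carlo idea it describes can be sketched as follows: simulate many data sets from an assumed regression model at a candidate sample size, record how often the coefficient of interest is significant, and repeat over a grid of sample sizes. All model parameters here are assumptions for illustration.

```python
# Sketch: Monte Carlo power for a single regression slope (illustrative parameters).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def mc_power(n, beta=0.3, sd_error=1.0, n_sims=2000, alpha=0.05):
    """Proportion of simulated data sets in which the slope is significant."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=sd_error, size=n)
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        hits += fit.pvalues[1] < alpha
    return hits / n_sims

for n in (60, 90, 120):
    print(n, round(mc_power(n), 3))   # pick the smallest n reaching target power
```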
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
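A minimal sketch of the adjustment described above (not the authors' Excel/Shiny calculator): for twin pairs the cluster size is two, so an independent-outcomes sample size is inflated by the design effect 1 + ICC. The proportions and ICC below are assumed for illustration, and the sketch treats every infant as a member of a twin pair.

```python
# Sketch: inflate a sample size for clustering when outcomes come in twin pairs.
# Assumes every infant is one of a twin pair; ICC and proportions are illustrative.
from math import ceil
from scipy.stats import norm

def n_independent(p0, p1, alpha=0.05, power=0.80):
    """Per-group n for comparing two proportions (standard pooled approximation),
    assuming independent outcomes."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p0 + p1) / 2
    return (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p0 - p1) ** 2

icc = 0.5                                  # assumed correlation between twins
n_ind = n_independent(0.30, 0.20)
design_effect = 1 + (2 - 1) * icc          # cluster size m = 2
print(ceil(n_ind), ceil(n_ind * design_effect))  # per-group n before/after inflation
```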
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, this study compared the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
ERIC Educational Resources Information Center
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
Sample size requirements for indirect association studies of gene-environment interactions (G x E).
Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny
2008-04-01
Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
An audit of the statistics and the comparison with the parameter in the population
NASA Astrophysics Data System (ADS)
Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad
2015-10-01
The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. Until now, a guideline based on a p-value of less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared them with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each, were analyzed. The statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as larger number of small-sized samples (LNSS) in a 1 ha forest plot, and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as small number of large-sized samples (SNLS) and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increased number of species vs. sampled areas confirmed power-law relationships for forest stand, the LNSS and SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from power equations is larger than the sampled area in most studies in the literature. Increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay
2011-01-01
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
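As a rough sketch of the kind of calculation described (not the authors' exact expressions), a relative treatment difference of r in mean C-peptide AUC corresponds to a difference of log(1 + r) on the log scale, so a standard two-sample formula with the residual SD of the transformed values gives a per-group sample size. The residual SD used here is an assumed placeholder rather than a TrialNet estimate.

```python
# Sketch: per-group n to detect a relative (percentage) difference in C-peptide AUC
# when analysing log-transformed values; sigma_log is an assumed placeholder.
from math import log, ceil
from scipy.stats import norm

def n_per_group(rel_diff, sigma_log, alpha=0.05, power=0.80):
    delta = log(1 + rel_diff)                       # difference on the log scale
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sigma_log / delta) ** 2)

for rel_diff in (0.25, 0.50):                       # 25% and 50% treatment effects
    print(rel_diff, n_per_group(rel_diff, sigma_log=0.45))
```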
Preparing rock powder specimens of controlled size distribution
NASA Technical Reports Server (NTRS)
Blum, P.
1968-01-01
Apparatus produces rock powder specimens of the size distribution needed in geological sampling. By cutting grooves in the surface of the rock sample and then by milling these shallow, parallel ridges, the powder specimen is produced. Particle size distribution is controlled by changing the height and width of ridges.
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese
2014-01-01
Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...
ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
How large a training set is needed to develop a classifier for microarray data?
Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M
2008-01-01
A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan
2010-01-01
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling
2006-01-01
Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
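The closing recommendation translates into a simple adjustment: compute the simple-random-sampling size for estimating a proportion to a desired margin of error, then apply a design effect of about 2. The prevalence and margin below are illustrative assumptions.

```python
# Sketch: sample size for estimating a prevalence with respondent-driven sampling,
# applying the design effect of ~2 recommended above (illustrative inputs).
from math import ceil
from scipy.stats import norm

def n_srs(p, margin, conf=0.95):
    """Simple-random-sampling size for a proportion with a given margin of error."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return z ** 2 * p * (1 - p) / margin ** 2

p, margin = 0.20, 0.05            # assumed prevalence and desired half-width
n_simple = n_srs(p, margin)       # ~246 under simple random sampling
n_rds = 2 * n_simple              # ~492 with a design effect of 2
print(ceil(n_simple), ceil(n_rds))
```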
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-04-25
To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both the samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?
Hennink, Monique M; Kaiser, Bonnie N; Marconi, Vincent C
2017-03-01
Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.
Kristunas, Caroline; Morris, Tom; Gray, Laura
2017-11-15
To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Any, not limited to healthcare settings. Any taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
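The impact of the reported cluster-size variability can be sketched with the familiar parallel cluster-trial approximation, a design effect of 1 + ((1 + CV²)·m̄ − 1)·ICC; stepped-wedge designs require more specialised formulae, so this is indicative only. The mean cluster size and ICC are assumptions, while CV = 0.41 is the review's median.

```python
# Sketch: design effect with unequal cluster sizes, using the common
# parallel-CRT approximation DE = 1 + ((1 + CV^2) * m_bar - 1) * ICC.
# (Indicative only for stepped-wedge designs; m_bar and ICC are assumed.)
m_bar, icc = 30, 0.05          # assumed mean cluster size and ICC
cv_values = [0.0, 0.41, 0.8]   # 0.41 = median CV reported in the review

for cv in cv_values:
    de = 1 + ((1 + cv ** 2) * m_bar - 1) * icc
    print(f"CV={cv:.2f}: design effect = {de:.2f}")
```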
Sample size considerations when groups are the appropriate unit of analyses
Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith
2007-01-01
This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be ascertained in qualitative studies as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
2003-01-01
This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of radiation-hardened memories.
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
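A minimal sketch of the asymptotic unconditional McNemar calculation recommended above, using the usual Connor-type formula; the discordant-cell probabilities are hypothetical.

```python
# Sketch: asymptotic (unconditional) McNemar sample size for paired binary data.
# p10, p01 are the assumed discordant-cell probabilities of the paired 2x2 table.
from math import ceil, sqrt
from scipy.stats import norm

def n_pairs_mcnemar(p10, p01, alpha=0.05, power=0.80):
    psi = p10 + p01                    # total discordant probability
    delta = p10 - p01                  # difference in marginal proportions
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil((z_a * sqrt(psi) + z_b * sqrt(psi - delta ** 2)) ** 2 / delta ** 2)

print(n_pairs_mcnemar(p10=0.20, p01=0.10))   # number of pairs needed
```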
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
2016-03-09
Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and levels of biochemical markers in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100,000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day between treatment and control arms differs by 1 day, the sample size needs to be increased by 54% (77 vs 50) to avoid scan day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Power and sample size for multivariate logistic modeling of unmatched case-control studies.
Gail, Mitchell H; Haneuse, Sebastien
2017-01-01
Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
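Alongside the analytical results, a simulation approach of the kind mentioned in the abstract might look like the following for an unmatched design with one covariate: generate a source population from a logistic disease model, sample cases and controls retrospectively, fit the logistic regression, and count how often the exposure effect is detected. The exposure prevalence, covariate effect, odds ratio and intercept are all assumptions, and this is not the authors' software.

```python
# Sketch: simulation-based power for detecting an exposure log-odds-ratio in an
# unmatched case-control study, adjusting for one covariate (all inputs assumed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def simulate_power(n_cases, n_controls, log_or=np.log(1.5), n_sims=500, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        # Source population generated from an assumed logistic disease model.
        N = 50_000
        x = rng.binomial(1, 0.3, N)              # binary exposure, prevalence 0.3
        z = rng.normal(size=N)                   # continuous covariate
        p = 1.0 / (1.0 + np.exp(-(-3.5 + log_or * x + 0.5 * z)))
        d = rng.binomial(1, p)
        # Retrospective sampling of cases and controls.
        cases = rng.choice(np.flatnonzero(d == 1), size=n_cases, replace=False)
        controls = rng.choice(np.flatnonzero(d == 0), size=n_controls, replace=False)
        idx = np.concatenate([cases, controls])
        X = sm.add_constant(np.column_stack([x[idx], z[idx]]))
        fit = sm.Logit(d[idx], X).fit(disp=0)
        hits += fit.pvalues[1] < alpha           # Wald test on the exposure coefficient
    return hits / n_sims

print(simulate_power(n_cases=400, n_controls=400))   # estimated power
```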
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size. type I and II error probabilities, and the coefficients of...
76 FR 44590 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... health training. This interview will be administered to a sample of approximately 30 owners of construction businesses with 10 or fewer employees from the Greater Cincinnati area. The sample size is based... size experiences the highest fatality rate within construction (U.S. Dept. of Labor, 2008). The need...
Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.
Hillis, Stephen L; Schartz, Kevin M
2015-02-01
The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
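A minimal sketch of the noncentral-t approach for the two-sample parallel design and a test of equality; the mean difference and standard deviation are illustrative.

```python
# Sketch: smallest per-group n giving the target power for a two-sample t test
# of equality, computing power exactly from the noncentral t distribution.
from math import sqrt
from scipy.stats import t, nct

def power_two_sample(n, delta, sd, alpha=0.05):
    df = 2 * n - 2
    ncp = delta / (sd * sqrt(2 / n))          # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

def sample_size(delta, sd, alpha=0.05, target_power=0.80):
    n = 2
    while power_two_sample(n, delta, sd, alpha) < target_power:
        n += 1
    return n

print(sample_size(delta=0.5, sd=1.0))          # ~64 per group for delta/sd = 0.5
```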
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James
2010-10-01
Genomically favorable patients are currently sought in randomized controlled clinical trials using genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false-positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% is of concern.
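The rule of thumb above can be explored directly: with 1:1 randomization the number of marker-positive patients in each arm is binomial, so the chance that the arms' observed marker prevalences differ by 20 percentage points or more can be simulated for different arm sizes. The prevalence, threshold and arm sizes below are illustrative, not the authors' settings.

```python
# Sketch: probability that observed genomic-marker prevalence differs between two
# randomized arms by at least a given amount (illustrative prevalence and sizes).
import numpy as np

rng = np.random.default_rng(3)

def prob_imbalance(n_per_arm, prevalence, threshold, n_sims=200_000):
    a = rng.binomial(n_per_arm, prevalence, n_sims) / n_per_arm
    b = rng.binomial(n_per_arm, prevalence, n_sims) / n_per_arm
    return np.mean(np.abs(a - b) >= threshold)

for n in (30, 100, 150):
    print(n, prob_imbalance(n, prevalence=0.25, threshold=0.20))
```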
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size in CSM examinations needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
NASA Astrophysics Data System (ADS)
Yuan, Chao; Chareyre, Bruno; Darve, Félix
2016-09-01
A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give evolutions of water content which are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled from nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
Simulation analyses of space use: Home range estimates, variability, and sample size
Bekoff, Marc; Mech, L. David
1984-01-01
Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate reliably home range area. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
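The asymptotic pattern described here is easy to reproduce with a toy simulation: draw locations from a fixed utilization distribution and track how the minimum convex polygon area grows with the number of locations. The bivariate normal distribution and sample sizes are arbitrary choices, not the authors' simulation settings.

```python
# Sketch: home-range (minimum convex polygon) area versus number of locations,
# for points drawn from a fixed bivariate normal utilization distribution.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(11)

def mean_mcp_area(n_locations, n_reps=500):
    areas = []
    for _ in range(n_reps):
        pts = rng.normal(size=(n_locations, 2))      # arbitrary "true" home range
        areas.append(ConvexHull(pts).volume)         # .volume is the 2-D area
    return np.mean(areas)

for n in (25, 50, 100, 200, 400):
    print(n, round(mean_mcp_area(n), 2))             # area rises, then levels off
```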
Further improvement of hydrostatic pressure sample injection for microchip electrophoresis.
Luo, Yong; Zhang, Qingquan; Qin, Jianhua; Lin, Bingcheng
2007-12-01
The hydrostatic pressure sample injection method minimizes the number of electrodes needed for a microchip electrophoresis process; however, it can neither be applied to electrophoretic DNA sizing nor be implemented on the widely used single-cross microchip. This paper presents an injector design that makes the hydrostatic pressure sample injection method suitable for DNA sizing. By introducing an assistant channel into the normal double-cross injector, a rugged DNA sample plug suitable for sizing can be successfully formed within the cross area during the sample loading. This paper also demonstrates that hydrostatic pressure sample injection can be performed in the single-cross microchip by controlling the radial position of the detection point in the separation channel. Rhodamine 123 and its derivative, used as model samples, were successfully separated.
What is the optimum sample size for the study of peatland testate amoeba assemblages?
Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J
2017-10-01
Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size relative to the dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates of the inverse of the filter input data covariance matrix, which are obtained by a recursive stochastic algorithm. The SIR performance as a function of sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Chance-corrected classification for use in discriminant analysis: Ecological applications
Titus, K.; Mosher, J.A.; Williams, B.K.
1984-01-01
A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
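A minimal sketch of the chance-corrected statistic described here, computed from a hypothetical two-group classification table (the counts are invented for illustration; only the kappa arithmetic is taken from standard usage):

```python
import numpy as np

# Hypothetical 2-group classification table from a discriminant analysis:
# rows = actual group, columns = predicted group.
table = np.array([[42, 8],
                  [11, 39]], dtype=float)

n = table.sum()
p_observed = np.trace(table) / n                                   # overall agreement
p_chance = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2    # agreement expected by chance
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"observed agreement = {p_observed:.3f}")
print(f"chance agreement   = {p_chance:.3f}")
print(f"kappa              = {kappa:.3f}")
```

With very unequal group sizes the chance agreement term grows, which is exactly why the raw percent correct becomes a misleading summary and kappa is preferred.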
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Draut, Amy; Rubin, David M.
2013-01-01
Flood-deposited sediment has been used to decipher environmental parameters such as variability in watershed sediment supply, paleoflood hydrology, and channel morphology. It is not well known, however, how accurately the deposits reflect sedimentary processes within the flow, and hence what sampling intensity is needed to decipher records of recent or long-past conditions. We examine these problems using deposits from dam-regulated floods in the Colorado River corridor through Marble Canyon–Grand Canyon, Arizona, U.S.A., in which steady-peaked floods represent a simple end-member case. For these simple floods, most deposits show inverse grading that reflects coarsening suspended sediment (a result of fine-sediment-supply limitation), but there is enough eddy-scale variability that some profiles show normal grading that did not reflect grain-size evolution in the flow as a whole. To infer systemwide grain-size evolution in modern or ancient depositional systems requires sampling enough deposit profiles that the standard error of the mean of grain-size-change measurements becomes small relative to the magnitude of observed changes. For simple, steady-peaked floods, 5–10 profiles or fewer may suffice to characterize grain-size trends robustly, but many more samples may be needed from deposits with greater variability in their grain-size evolution.
NASA Astrophysics Data System (ADS)
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
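The abstract does not name the specific non-parametric test; as one common choice, a two-sample Kolmogorov-Smirnov test on simulated aspect-ratio data shows how cumulative distributions with similar centres but different widths are flagged as different (all values below are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated aspect-ratio measurements for two gold nanorod samples
# (illustrative values only; real data would come from TEM image analysis).
aspect_sample_a = rng.normal(loc=3.8, scale=0.45, size=300)
aspect_sample_b = rng.normal(loc=3.8, scale=0.70, size=300)   # same mean, wider spread

# The KS test compares full cumulative distributions, so it is sensitive to
# differences in width and shape, not just differences in the mean.
statistic, p_value = stats.ks_2samp(aspect_sample_a, aspect_sample_b)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.2e}")
```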
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The small-sample-size problem encountered in the analysis of space-flight data is examined. Because only a small amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H_0: ES = 0 versus the alternative hypotheses H_1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
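A sketch of the sample size step described above, using the one-sample t-test power routine in statsmodels; the effect size point estimate and CI bounds below are hypothetical placeholders, not the study's values:

```python
from statsmodels.stats.power import TTestPower

# One-sample, two-tailed t-test: patients needed for 80% power at alpha = 0.05
# to reject H0: ES = 0, evaluated at a point estimate of ES and at illustrative
# 95% CI bounds (all three ES values here are assumptions for demonstration).
analysis = TTestPower()
for label, es in [("ES point estimate", 0.60),
                  ("ES lower CI bound", 0.45),
                  ("ES upper CI bound", 0.95)]:
    n = analysis.solve_power(effect_size=es, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{label:>18}: ES = {es:.2f} -> n = {n:.1f}")
```

Because larger effect sizes need fewer patients, the lower CI bound of ES yields the upper bound of the projected sample size, and vice versa, which is the interval logic the abstract describes.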
Determining chewing efficiency using a solid test food and considering all phases of mastication.
Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W
2018-07-01
Following chewing of a solid food, the median particle size, X₅₀, is determined after N chewing cycles by curve-fitting of the particle size distribution. Reduction of X₅₀ with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X₅₀. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (the N needed to halve the initial particle size, N(1/2-Xo)) and chewing performance (X₅₀ at a particular N-value, X₅₀,N). Eight subjects with a natural dentition chewed 4 types of samples of Optosil particles: (1) 8 cubes of 8 mm, border size relative to bin size (traditional test); (2) 9 half-cubes of 9.6 mm, mid-size relative to bin size, similar sample volume; (3) 4 half-cubes of 9.6 mm and (4) 2 half-cubes of 9.6 mm, both with reduced particle number and sample volume. All samples were tested at 4 N-values. Curve-fitting with a 2nd order polynomial function yielded log(X₅₀)-log(N) relationships, from which N(1/2-Xo) and X₅₀,N were obtained. Reliable X₅₀-values are obtained for all N-values when using half-cubes with a mid-size relative to bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-Xo) or X₅₀,N needs fewer chewing cycles than the traditional approach. Chewing efficiency is preferable over chewing performance because it compares inter-subject chewing ability at the same stage of food comminution and keeps intra-subject and inter-subject ratios between and within samples constant. Copyright © 2018 Elsevier Ltd. All rights reserved.
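A minimal sketch of the curve-fitting step on made-up data: fit log(X₅₀) against log(N) with a 2nd-order polynomial, then read off chewing performance (X₅₀ at a chosen N) and chewing efficiency (the N at which X₅₀ reaches half the initial size). The particle sizes and cycle counts below are invented for illustration:

```python
import numpy as np

# Illustrative data: median particle size X50 (mm) after N chewing cycles.
N_cycles = np.array([5, 10, 20, 40])
X50 = np.array([6.1, 4.3, 2.9, 1.9])          # made-up values
X0 = 9.6                                      # initial (half-cube) particle size, mm

# Fit log(X50) as a 2nd-order polynomial in log(N), as described in the abstract.
coeffs = np.polyfit(np.log(N_cycles), np.log(X50), deg=2)
poly = np.poly1d(coeffs)

# Chewing performance: X50 at a chosen number of cycles, e.g. N = 15.
x50_at_15 = np.exp(poly(np.log(15)))

# Chewing efficiency: the N at which X50 falls to half the initial size.
grid = np.linspace(np.log(N_cycles.min()), np.log(N_cycles.max()), 2000)
target = np.log(X0 / 2)
n_half = np.exp(grid[np.argmin(np.abs(poly(grid) - target))])

print(f"X50 at N=15: {x50_at_15:.2f} mm;  N needed to halve X0: {n_half:.1f} cycles")
```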
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (S_r and S_R) such that the actual errors in S_r and S_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL-required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of S_r and S_R were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
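Not the authors' formulas, but a standard chi-square-based calculation conveys the idea: find the number of replicates so that the sample SD falls within a chosen relative margin of the true σ with a given confidence. The margin and confidence level below are arbitrary examples:

```python
from scipy.stats import chi2

def replicates_for_sd_precision(rel_error, confidence):
    """Smallest n such that P(|S/sigma - 1| <= rel_error) >= confidence,
    using (n-1) * S^2 / sigma^2 ~ chi-square with n-1 degrees of freedom."""
    for n in range(2, 100000):
        df = n - 1
        prob = (chi2.cdf(df * (1 + rel_error) ** 2, df)
                - chi2.cdf(df * (1 - rel_error) ** 2, df))
        if prob >= confidence:
            return n
    raise ValueError("no n found in search range")

# e.g. keep the estimated repeatability SD within +/-20% of sigma_r with 90% confidence
print(replicates_for_sd_precision(rel_error=0.20, confidence=0.90))
```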
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
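Using the numbers quoted in the abstract (COV 25%, allowable error ±15%, 80% power, α = 0.05) in a one-sample two-tailed t-test power calculation gives a sample size close to the reported 24. A minimal check with statsmodels, standing in for whatever software the authors used:

```python
import math
from statsmodels.stats.power import TTestPower

cov = 0.25              # coefficient of variation of the ED50 estimate (25%)
allowable_error = 0.15  # +/-15% allowable error in the mean ED50

# Standardized effect size: the allowable error expressed in SD units.
effect_size = allowable_error / cov

n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(f"required n ~ {math.ceil(n)}")   # close to the 24 subjects reported
```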
ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING
A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...
Blanks: a computer program for analyzing furniture rough-part needs in standard-size blanks
Philip A. Araman
1983-01-01
A computer program is described that allows a company to determine the number of edge-glued, standard-size blanks required to satisfy its rough-part needs for a given production period. Yield and cost information also is determined by the program. A list of the program inputs, outputs, and uses of outputs is described, and an example analysis with sample output is...
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
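A simulation sketch of the problem described, assuming a normal outcome with population SD 44 and a target effect of Cohen's d = 0.5 (figures taken from the abstract); the two-group power routine from statsmodels and the normal-approximation sample size formula stand in for whatever software the authors used:

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)
sigma = 44.0          # true population SD (figure taken from the abstract)
d_true = 0.5          # targeted effect size, Cohen's d (median effect, as in the abstract)
delta = d_true * sigma
z = norm.ppf(0.975) + norm.ppf(0.80)          # constants for 80% power, two-sided alpha 0.05
power_calc = TTestIndPower()

for pilot_n in (5, 50):
    pilot_sds = rng.normal(0.0, sigma, size=(2000, pilot_n)).std(ddof=1, axis=1)
    frac_under = np.mean(pilot_sds < sigma)

    # Plug each pilot SD into the usual two-group formula n = 2*((z_a + z_b) * SD / delta)^2 ...
    planned_n = np.ceil(2 * (z * pilot_sds / delta) ** 2)
    # ... then compute the power actually achieved against the true effect size.
    achieved = np.array([power_calc.power(effect_size=d_true, nobs1=max(n, 2), alpha=0.05)
                         for n in planned_n])
    print(f"pilot n={pilot_n:2d}: {frac_under:.0%} of pilot SDs underestimate sigma, "
          f"{np.mean(achieved >= 0.80):.0%} of planned trials reach 80% power")
```

Because the sample SD underestimates σ more often than not, fewer than half of the planned trials reach their nominal power, which is the pattern the abstract reports.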
Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H
2017-02-01
We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
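A sketch that reproduces the quoted 44%, 56% and 61% reductions, assuming the standard variance expression for a baseline-adjusted comparison of the mean of k follow-up measures under compound symmetry (the paper's own derivation may differ in detail); the most conservative correlation is found by maximizing the variance factor over ρ:

```python
import numpy as np

def variance_factor(rho, k):
    """Variance of the baseline-adjusted mean of k follow-up measures, relative to a
    single-measurement two-sample t-test, assuming compound symmetry with correlation
    rho between all measurements (a standard ANCOVA expression; an assumption here,
    not a quotation of the paper's formulae)."""
    return (1 + (k - 1) * rho) / k - rho ** 2

rho_grid = np.linspace(0.0, 1.0, 10001)
for k in (2, 3, 4):
    factors = variance_factor(rho_grid, k)
    worst = factors.max()                       # most conservative correlation
    print(f"k={k}: worst-case factor = {worst:.3f} "
          f"(rho = {rho_grid[factors.argmax()]:.3f}), "
          f"sample size reduction = {1 - worst:.0%}")
```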
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
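The worked example in the abstract (lot of 400, 1% nonconforming, 99% confidence, acceptance number zero) can be reproduced with the hypergeometric distribution; a minimal sketch using scipy, with the binomial-based plan shown for comparison (this is a re-derivation of the example, not the HYPERSAMP code itself):

```python
import math
from scipy.stats import hypergeom

lot_size = 400
defectives = int(round(0.01 * lot_size))   # 1% nonconforming -> 4 units in the lot
confidence = 0.99                          # accept only on zero defects in the sample

# Hypergeometric, acceptance number 0: smallest n with P(0 defects in sample) <= 1 - confidence.
n_hyper = next(n for n in range(1, lot_size + 1)
               if hypergeom.pmf(0, lot_size, defectives, n) <= 1 - confidence)

# The binomial approximation ignores the finite lot, so it demands more inspection.
n_binom = min(lot_size, math.ceil(math.log(1 - confidence) / math.log(1 - 0.01)))

print(f"hypergeometric plan: n = {n_hyper}")   # 273, as in the example above
print(f"binomial-based plan: n = {n_binom}")   # capped at the full lot of 400
```

The saving comes from sampling without replacement: once most of a small lot has been inspected, the remaining units carry very little residual risk.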
Mudalige, Thilak K; Qu, Haiou; Linder, Sean W
2015-11-13
Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system are dependent on particle size, thus the determination of size and size distribution is essential for full characterization. Number based average size and size distribution is a major parameter for full characterization of the nanoparticle. In the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention times was observed, and used for characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy, and found to have less than a 5% deviation in size for unknown product over the size range from 7 to 30 nm. Published by Elsevier B.V.
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan
2017-02-01
To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
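A hedged sketch of the resampling idea on synthetic cohort data (not the authors' cohorts): repeatedly draw samples of different sizes and look at the spread of sensitivity and specificity estimates from the resulting contingency tables. Prevalence, sensitivity and specificity values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cohort: binary prediction-rule result and binary outcome.
cohort_n = 8000
outcome = rng.random(cohort_n) < 0.30
rule_pos = np.where(outcome,
                    rng.random(cohort_n) < 0.70,    # sensitivity ~0.70
                    rng.random(cohort_n) < 0.25)    # 1 - specificity ~0.25

for sample_n in (100, 400, 600):
    sens, spec = [], []
    for _ in range(100):                            # 100 repeated random samples, as in the study
        idx = rng.choice(cohort_n, size=sample_n, replace=False)
        o, r = outcome[idx], rule_pos[idx]
        sens.append((r & o).sum() / o.sum())
        spec.append((~r & ~o).sum() / (~o).sum())
    print(f"n={sample_n}: sensitivity SD={np.std(sens):.3f}, specificity SD={np.std(spec):.3f}")
```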
The effect of sample size and disease prevalence on supervised machine learning of narrative data.
McKnight, Lawrence K.; Wilcox, Adam; Hripcsak, George
2002-01-01
This paper examines the independent effects of outcome prevalence and training sample sizes on inductive learning performance. We trained 3 inductive learning algorithms (MC4, IB, and Naïve-Bayes) on 60 simulated datasets of parsed radiology text reports labeled with 6 disease states. Data sets were constructed to define positive outcome states at five prevalence rates (1, 5, 10, 25, and 50%) in training set sizes of 200 and 2,000 cases. We found that the effect of outcome prevalence is significant when outcome classes drop below 10% of cases. The effect appeared independent of sample size, induction algorithm used, or class label. Work is needed to identify methods of improving classifier performance when output classes are rare. PMID:12463878
Herzog, Sereina A; Low, Nicola; Berghold, Andrea
2015-06-19
The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
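A conventional two-proportion sample size calculation (not the trial's actual assumptions; the control-arm risks and relative risks below are illustrative) shows how strongly the required sample size depends on the assumed PID incidence and RR, which is exactly the sensitivity the modelling study explores:

```python
import math
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

power_calc = NormalIndPower()

# Illustrative control-arm PID risks and intervention relative risks (assumed values).
for control_risk in (0.02, 0.03):
    for rr in (0.50, 0.65):
        es = proportion_effectsize(control_risk * rr, control_risk)
        n = power_calc.solve_power(effect_size=es, alpha=0.05, power=0.80, ratio=1.0)
        print(f"control risk = {control_risk:.0%}, RR = {rr}: n per group ~ {math.ceil(n)}")
```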
Jorgenson, Andrew K; Clark, Brett
2013-01-01
This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.
Automated sampling assessment for molecular simulations using the effective sample size
Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.
2010-01-01
To quantify the progress in the development of algorithms and forcefields used in molecular simulations, a general method for the assessment of the sampling quality is needed. Statistical mechanics principles suggest the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable to both traditional dynamics simulations as well as more modern (e.g., multi–canonical) approaches. Our procedure is tested in a variety of systems from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample–size estimation in systems for which physical states are not known in advance. PMID:21221418
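The paper's estimator is built on variances of physical-state populations; as a simpler and commonly used stand-in, the sketch below estimates ESS from the integrated autocorrelation time of an observable, demonstrated on a correlated toy time series rather than a molecular trajectory:

```python
import numpy as np

def effective_sample_size(x, max_lag=None):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations); a simple
    autocorrelation-based estimate, not the state-population estimator of the paper."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    max_lag = max_lag or n // 10
    tau_sum = 0.0
    for lag in range(1, max_lag):      # sum until the autocorrelation first drops below zero
        if acf[lag] <= 0:
            break
        tau_sum += acf[lag]
    return n / (1.0 + 2.0 * tau_sum)

# Toy example: a strongly correlated AR(1) series has far fewer independent samples.
rng = np.random.default_rng(0)
phi, n = 0.95, 20000
series = np.zeros(n)
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.normal()
print(f"nominal N = {n}, effective sample size ~ {effective_sample_size(series):.0f}")
```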
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
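The paper's formulae handle clustering of both clusters and times within clusters; the sketch below covers only the simplest ingredient, inflating an individually randomised sample size by the familiar cross-sectional design effect 1 + (m - 1)·ICC. The effect size, cluster size and ICC are illustrative assumptions, not values from the paper:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Individually randomised trial: n per arm for a standardized effect of 0.3.
n_individual = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)

# Inflate by the cross-sectional design effect 1 + (m - 1) * ICC.
# (The paper's formulae additionally involve cluster and individual autocorrelation
# for longitudinal and stepped wedge designs; this sketch covers only the simplest case.)
cluster_size = 20       # participants measured per cluster (illustrative)
icc = 0.05              # intracluster correlation (illustrative)
design_effect = 1 + (cluster_size - 1) * icc

n_cluster_trial = n_individual * design_effect
clusters_per_arm = math.ceil(n_cluster_trial / cluster_size)
print(f"n per arm: {math.ceil(n_individual)} (individual) -> {math.ceil(n_cluster_trial)} "
      f"(clustered), i.e. about {clusters_per_arm} clusters per arm")
```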
Implications of sampling design and sample size for national carbon accounting systems
Michael Köhl; Andrew Lister; Charles T. Scott; Thomas Baldauf; Daniel Plugge
2011-01-01
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of...
Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi
2017-01-01
Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interests in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed in investigating contextual effects according to the desired level of statistical power as well as width of confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover various kinds of contextual effects that researchers can show interest. The relative influences of indices included in the formulas on the standard errors of contextual effects estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate finite sample behavior of calculated statistical power, showing that estimated sample sizes based on derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of assumption regarding the known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of indices and to evaluate its potential bias, as illustrated in the example.
Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.
NASA Astrophysics Data System (ADS)
Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat
2017-04-01
Electron backscatter diffraction (EBSD) has extended significantly our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature and it is important to assess that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus extending EBSD analysis to larger sample sizes to include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system that limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30mm for the Otago cryo-EBSD facility) , by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100mm perpendicular to the tilt axis by 150mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm then they will start to sublime and the quality of EBSD data will reduce. Large samples need to be relatively thin ( 5mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility samples of up to 40mm by 40mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.
Measuring Endocrine-active Chemicals at ng/L Concentrations in Water
Analytical chemistry challenges for supporting aquatic toxicity research and risk assessment are many: need for low detection limits, complex sample matrices, small sample size, and equipment limitations to name a few. Certain types of potent endocrine disrupting chemicals (EDCs)...
Measurement and prediction of the size of suspended sediment over dunes
USDA-ARS?s Scientific Manuscript database
Knowledge of the size of sediment in suspension is important information needed for the collection of concentration data using surrogate technologies and to further understand the processes acting in the transport of suspended sediment over dunes. Samples of suspended sediment were collected at fou...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gihring, Thomas; Green, Stefan; Schadt, Christopher Warren
2011-01-01
Technologies for massively parallel sequencing are revolutionizing microbial ecology and are vastly increasing the scale of ribosomal RNA (rRNA) gene studies. Although pyrosequencing has increased the breadth and depth of possible rRNA gene sampling, one drawback is that the number of reads obtained per sample is difficult to control. Pyrosequencing libraries typically vary widely in the number of sequences per sample, even within individual studies, and there is a need to revisit the behaviour of richness estimators and diversity indices with variable gene sequence library sizes. Multiple reports and review papers have demonstrated the bias in non-parametric richness estimators (e.g. Chao1 and ACE) and diversity indices when using clone libraries. However, we found that biased community comparisons are accumulating in the literature. Here we demonstrate the effects of sample size on Chao1, ACE, CatchAll, Shannon, Chao-Shen and Simpson's estimations specifically using pyrosequencing libraries. The need to equalize the number of reads being compared across libraries is reiterated, and investigators are directed towards available tools for making unbiased diversity comparisons.
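A sketch of the read-depth equalization the authors call for, run on synthetic OTU count data: rarefy the larger library down to the smaller library's depth before computing richness and diversity. The community, depths, and the bias-corrected form of Chao1 used here are assumptions for illustration, not the study's data or exact estimators:

```python
import numpy as np

rng = np.random.default_rng(7)

def subsample_reads(otu_counts, depth):
    """Randomly draw `depth` reads without replacement from a library given as OTU counts."""
    reads = np.repeat(np.arange(len(otu_counts)), otu_counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=len(otu_counts))

def chao1(counts):
    s_obs = np.sum(counts > 0)
    f1, f2 = np.sum(counts == 1), np.sum(counts == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected form

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Two synthetic pyrosequencing libraries with very different sequencing depths.
community = rng.lognormal(mean=1.0, sigma=1.5, size=600)      # underlying relative abundances
library_small = rng.multinomial(2000, community / community.sum())
library_large = rng.multinomial(20000, community / community.sum())

depth = min(library_small.sum(), library_large.sum())         # equalize to the smaller library
for name, lib in [("small", library_small),
                  ("large (rarefied)", subsample_reads(library_large, depth))]:
    print(f"{name:>16}: reads={lib.sum():6d}  Chao1={chao1(lib):6.1f}  Shannon={shannon(lib):.2f}")
```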
NASA Astrophysics Data System (ADS)
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.
2016-02-01
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.
Wildt, Signe; Krag, Aleksander; Gluud, Liselotte
2011-01-01
Objectives To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov and the consistency between primary outcomes, secondary outcomes and sample size specified in http://ClinicalTrials.gov and published trials. Methods Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoint, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between primary and secondary outcomes, sample size and sample size calculations data in http://ClinicalTrials.gov and in the published paper were registered. Results 105 trials were evaluated. 66 trials (63%) were published. 30% of trials were registered incorrectly after their completion date. Several data elements of the required ICMJE data list were not filled in, with missing data in 22% and 11%, respectively, of cases concerning the primary outcome measure and sample size. In 26% of the published papers, data on sample size calculations were missing and discrepancies between sample size reporting in http://ClinicalTrials.gov and published trials existed. Conclusion The quality of registration of randomised controlled trials still needs improvement.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... approved information collection, the List Sampling Frame Surveys. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length... Agriculture, (202) 720-4333. SUPPLEMENTARY INFORMATION: Title: List Sampling Frame Surveys. OMB Control Number...
Long-term effective population size dynamics of an intensively monitored vertebrate population
Mueller, A-K; Chakarov, N; Krüger, O; Hoffman, J I
2016-01-01
Long-term genetic data from intensively monitored natural populations are important for understanding how effective population sizes (Ne) can vary over time. We therefore genotyped 1622 common buzzard (Buteo buteo) chicks sampled over 12 consecutive years (2002–2013 inclusive) at 15 microsatellite loci. This data set allowed us to both compare single-sample with temporal approaches and explore temporal patterns in the effective number of parents that produced each cohort in relation to the observed population dynamics. We found reasonable consistency between linkage disequilibrium-based single-sample and temporal estimators, particularly during the latter half of the study, but no clear relationship between annual Ne estimates and census sizes. We also documented a 14-fold increase in estimated Ne between 2008 and 2011, a period during which the census size doubled, probably reflecting a combination of higher adult survival and immigration from further afield. Our study thus reveals appreciable temporal heterogeneity in the effective population size of a natural vertebrate population, confirms the need for long-term studies and cautions against drawing conclusions from a single sample. PMID:27553455
Family Caregivers for Veterans with Spinal Cord Injury: Exploring the Stresses and Benefits
2017-12-01
Testing of the newly developed instrument was conducted; however, the sample size was smaller than needed for full pilot testing. A future study with a larger sample will be needed to verify the pilot testing results. A qualitative approach takes advantage of the rich information provided by those living the experience of caregiving and SCI, enabling us to learn
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is the global random sampling and the other six are the stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sampling size increasing. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients with REs and CVs ≤10%. Comparing with all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients with REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for the optimal SWC sampling design.
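A sketch of the resampling scheme on synthetic soil water content data standing in for the measured hillslope: for each candidate sample size, draw 3000 random subsets, estimate the hillslope mean, and summarise the relative error and coefficient of variation. The synthetic SWC field and the exact RE/CV definitions here are one reasonable reading of the abstract, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic soil water content (%) at 120 hillslope sites (stand-in for measured data).
swc = rng.normal(loc=28.0, scale=5.0, size=120).clip(5, 50)
true_mean = swc.mean()

for n_sites in (6, 12, 24, 48, 72):
    means = np.array([swc[rng.choice(swc.size, size=n_sites, replace=False)].mean()
                      for _ in range(3000)])                          # 3000 replicates per size
    relative_error = np.abs(means - true_mean).mean() / true_mean     # mean RE of the estimates
    cv = means.std() / means.mean()                                   # CV across replicates
    print(f"n={n_sites:2d}: mean RE={relative_error:.1%}, CV={cv:.1%}")
```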
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, sample pre-treatment requirements are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. To this end, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as samples were successively added during repopulation. In general, calibrations with a large number of samples and high diversity are desired. However, we hypothesized that calibrations with fewer samples (smaller size) would absorb the spectral characteristics of the target site more easily. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we therefore also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra in our library that contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes. We used partial least squares regression with leave-one-out cross-validation for calibration. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods aimed to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those target-site samples not included in the repopulations. To measure the accuracy of the predictions, r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between the results obtained with BCS and BVS models. We observed that repopulation of the models increased the r2 of the predictions at sites 1 and 3. Repopulation caused little change in the r2 of the predictions at sites 2 and 4, possibly because of the high initial values (r2 >0.90 with non-repopulated models).
As a consequence of repopulation, the RMSEP decreased at all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only at site 4, after repopulation with 20 samples. At sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged in order to describe the main patterns. Predictions obtained with larger models were not more accurate (in terms of r2) than those obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are at odds with the idea of global models. These results could encourage the expansion of this technique, because very large databases do not seem to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for financial support of the project "NIRPROS".
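The core calibration step, partial least squares regression assessed by leave-one-out cross-validation, can be sketched with scikit-learn as below. The array names, the number of latent variables and the commented repopulation loop are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_pls(X, y, n_components=10):
    """Leave-one-out cross-validated PLS calibration; returns r2 and RMSEP."""
    pls = PLSRegression(n_components=n_components)
    y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    rmsep = float(np.sqrt(np.mean((y - y_pred) ** 2)))
    r2 = float(np.corrcoef(y, y_pred)[0, 1] ** 2)
    return r2, rmsep

# Hypothetical repopulation loop: add target-site samples two at a time.
# X_lib, y_lib would hold the library spectra and NKj values; X_new, y_new
# the target-site samples.
# for i in range(2, 22, 2):
#     X = np.vstack([X_lib, X_new[:i]]); y = np.concatenate([y_lib, y_new[:i]])
#     print(i, loo_pls(X, y))
```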
Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox
2017-08-01
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.
Moran, James; Alexander, Thomas; Aalseth, Craig; ...
2017-01-26
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, in microdevice engineering small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit a biosignal representation in the topological domain that reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there are no related works that can efficiently tackle the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
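Green's fixed-precision plan mentioned above is usually expressed as a cumulative-count stop line derived from Taylor's power law (s2 = a * m^b) and a target precision D. The sketch below uses that standard formulation; the Taylor parameters and precision level are hypothetical, not the values fitted for T. absoluta in this study.

```python
import numpy as np

def greens_stop_line(n, a, b, D=0.25):
    """Cumulative-count stop line T_n for Green's fixed-precision sequential
    sampling plan, assuming Taylor's power law s^2 = a * m^b and precision
    D = SE(mean)/mean.  Sampling stops once the running total crosses T_n."""
    return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

a, b = 2.5, 1.4          # hypothetical Taylor's power law parameters
for n in (10, 20, 30, 40, 50):
    print(f"{n} leaves examined: stop if cumulative count >= {greens_stop_line(n, a, b):.1f}")
```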
Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E
2018-07-01
The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship, together with a systematic assessment of methodological parameters (in particular, optimal plot sizes), is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m2 to 10 000 m2 over two sampling seasons. We found that the effect of plot size on recorded species richness differed significantly between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining the optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) for accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello
2012-01-01
Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
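The operating characteristic curves referred to above follow directly from binomial probabilities once the decision rules are fixed. The sketch below computes, for a three-category classifier, the probability of each classification as a function of the true prevalence; the cut-offs d1 and d2 are illustrative choices, not the published MC-LQAS designs.

```python
from scipy.stats import binom

def mc_lqas_oc(p, n=15, d1=1, d2=7):
    """Probability of classifying a school as low (<=10%), moderate or high
    (>=50%) prevalence when X ~ Binomial(n, p) positives are observed and the
    rules are: low if X <= d1, moderate if d1 < X <= d2, high if X > d2."""
    p_low = binom.cdf(d1, n, p)
    p_mod = binom.cdf(d2, n, p) - p_low
    p_high = 1.0 - binom.cdf(d2, n, p)
    return p_low, p_mod, p_high

for prev in (0.05, 0.10, 0.30, 0.50, 0.70):
    low, mod, high = mc_lqas_oc(prev)
    print(f"true prevalence {prev:.2f}: P(low)={low:.2f}  P(mod)={mod:.2f}  P(high)={high:.2f}")
```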
Conducting Three-Level Longitudinal Analyses
ERIC Educational Resources Information Center
Peugh, James L.; Heck, Ronald H.
2017-01-01
Researchers in the field of early adolescence interested in quantifying the environmental influences on a response variable of interest over time would use cluster sampling (i.e., obtaining repeated measures from students nested within classrooms and/or schools) to obtain the needed sample size. The resulting longitudinal data would be nested at…
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).
Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B
2010-04-01
Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
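Sequential plans of this kind are commonly built from Wald's sequential probability ratio test for binomial counts, which gives two parallel decision lines between which sampling continues. The sketch below is a generic SPRT construction with alpha = beta = 0.2; the values p0 = 0.05 and p1 = 0.15 are assumptions chosen to bracket a 10% injury threshold, not the parameters fitted in this study.

```python
import math

def sprt_boundaries(n, p0=0.05, p1=0.15, alpha=0.2, beta=0.2):
    """Lower/upper decision lines (cumulative injured bolls after n bolls) for
    Wald's SPRT of p0 versus p1.  Keep sampling while the running count lies
    strictly between the two lines."""
    log_ratio = math.log(p1 / p0) - math.log((1 - p1) / (1 - p0))
    slope = -math.log((1 - p1) / (1 - p0)) / log_ratio
    h0 = math.log(beta / (1 - alpha)) / log_ratio      # lower intercept
    h1 = math.log((1 - beta) / alpha) / log_ratio      # upper intercept
    return slope * n + h0, slope * n + h1

for bolls in (20, 40, 60, 80):
    lo, hi = sprt_boundaries(bolls)
    print(f"{bolls} bolls: no treatment if count <= {lo:.1f}, treat if count >= {hi:.1f}, else continue")
```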
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
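Allowing for clustering in a power calculation usually means inflating the individually randomised sample size by the design effect 1 + (m - 1) * ICC. The sketch below shows that basic parallel-cluster adjustment with illustrative inputs; it deliberately does not implement the additional stepped-wedge allowances for time effects and repeated measures (for example the Hussey and Hughes formulation) that the review argues are needed.

```python
import math
from scipy.stats import norm

def cluster_adjusted_n(delta, sd, icc, cluster_size, alpha=0.05, power=0.8):
    """Per-arm sample size for a parallel-group cluster trial: the individually
    randomised requirement inflated by the design effect 1 + (m - 1) * ICC."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * (z * sd / delta) ** 2
    design_effect = 1 + (cluster_size - 1) * icc
    n_adjusted = n_individual * design_effect
    return math.ceil(n_adjusted), math.ceil(n_adjusted / cluster_size)

subjects, clusters = cluster_adjusted_n(delta=0.5, sd=1.0, icc=0.05, cluster_size=20)
print(subjects, "subjects per arm in", clusters, "clusters of 20")
```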
An Employer Needs Assessment for Vocational Education: Palomar Community College District.
ERIC Educational Resources Information Center
Muraski, Ed J.; Barker, Cherie
A study was conducted to determine the employment needs within the Palomar Community College District. Surveys were mailed to a stratified random sample of 600 North San Diego County employers, requesting respondents to provide information on type and size of business, to rank the occupational programs offered by Palomar according to employment…
Sampling strategies for radio-tracking coyotes
Smith, G.J.; Cary, J.R.; Rongstad, O.J.
1981-01-01
Ten coyotes radio-tracked for 24-h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12- and 6-h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.
Chase, Jonathan M; Knight, Tiffany M
2013-05-01
There is little consensus about how natural (e.g. productivity, disturbance) and anthropogenic (e.g. invasive species, habitat destruction) ecological drivers influence biodiversity. Here, we show that when sampling is standardised by area (species density) or by individuals (rarefied species richness), the measured effect sizes depend critically on the spatial grain and extent of sampling, as well as the size of the species pool. This compromises comparisons of effect sizes within studies using standard statistics, as well as among studies using meta-analysis. To derive an unambiguous effect size, we advocate that comparisons be made on a scale-independent metric, such as Hurlbert's Probability of Interspecific Encounter. Analyses of this metric can be used to disentangle the relative influence of changes in the absolute and relative abundances of individuals, as well as their intraspecific aggregations, in driving differences in biodiversity among communities. This and related approaches are necessary to achieve generality in understanding how biodiversity responds to ecological drivers and will necessitate a change in the way many ecologists collect and analyse their data. © 2013 John Wiley & Sons Ltd/CNRS.
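Hurlbert's Probability of Interspecific Encounter, the scale-independent metric advocated above, has a simple closed form that can be computed directly from a vector of species abundances. The community sample below is hypothetical.

```python
import numpy as np

def hurlbert_pie(abundances):
    """Hurlbert's Probability of Interspecific Encounter:
    PIE = (N / (N - 1)) * (1 - sum(p_i^2)), with p_i the relative abundances."""
    x = np.asarray(abundances, dtype=float)
    n = x.sum()
    p = x / n
    return n / (n - 1.0) * (1.0 - np.sum(p ** 2))

print(round(hurlbert_pie([50, 30, 10, 5, 5]), 3))   # hypothetical community sample
```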
NASA Astrophysics Data System (ADS)
Tian, Shili; Pan, Yuepeng; Wang, Jian; Wang, Yuesi
2016-11-01
Current science and policy requirements have focused attention on the need to expand and improve particulate matter (PM) sampling methods. To explore how sampling filter type affects artifacts in PM composition measurements, size-resolved particulate SO42-, NO3- and NH4+ (SNA) were measured on quartz fiber filters (QFF), glass fiber filters (GFF) and cellulose membranes (CM) concurrently in an urban area of Beijing on both clean and hazy days. The results showed that SNA concentrations in most of the size fractions exhibited the following patterns on the different filters: CM > QFF > GFF for NH4+; GFF > QFF > CM for SO42-; and GFF > CM > QFF for NO3-. The different patterns in coarse particles were mainly affected by filter acidity, and those in fine particles were mainly affected by the hygroscopicity of the filters (especially in the 0.65-2.1 μm size fraction). Filter acidity and hygroscopicity also shifted the peaks of the annual mean size distributions of SNA on QFF from 0.43-0.65 μm on clean days to 0.65-1.1 μm on hazy days. However, this size shift was not as distinct for samples measured with CM and GFF. In addition, relative humidity (RH) and pollution levels are important factors that can enhance particulate size mode shifts of SNA on clean and hazy days. Consequently, the annual mean size distributions of SNA had maxima at 0.65-1.1 μm for QFF samples and 0.43-0.65 μm for GFF and CM samples. Compared with NH4+ and SO42-, NO3- is more sensitive to RH and pollution levels; accordingly, the annual mean size distribution of NO3- peaked at 0.65-1.1 μm for CM samples instead of 0.43-0.65 μm. These methodological uncertainties should be considered when quantifying the concentrations and size distributions of SNA under different RH and haze conditions.
Bioelectrical impedance analysis: A new tool for assessing fish condition
Hartman, Kyle J.; Margraf, F. Joseph; Hafs, Andrew W.; Cox, M. Keith
2015-01-01
Bioelectrical impedance analysis (BIA) is commonly used in human health and nutrition fields but has only recently been considered as a potential tool for assessing fish condition. Once BIA is calibrated, it estimates fat/moisture levels and energy content without the need to kill fish. Despite the promise held by BIA, published studies have been divided on whether BIA can provide accurate estimates of body composition in fish. In cases where BIA was not successful, the models lacked the range of fat levels or sample sizes we determined were needed for model success (range of dry fat levels of 29%, n = 60, yielding an R2 of 0.8). Reduced range of fat levels requires an increased sample size to achieve that benchmark; therefore, standardization of methods is needed. Here we discuss standardized methods based on a decade of research, identify sources of error, discuss where BIA is headed, and suggest areas for future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, T.A.
1990-01-01
A study undertaken on an Eocene age coal bed in southeast Kalimantan, Indonesia determined that there was a relationship between megascopically determined coal types and the kinds and sizes of organic components. The study also concluded that the most efficient way to characterize the seam was the collection of two 3 cm blocks from each layer or bench defined by megascopic character, and that a maximum of 125 point counts was needed on each block. Microscopic examination of uncrushed block samples showed the coal to be composed of plant parts and tissues set in a matrix of both fine-grained and amorphous material. The particulate matrix is composed of cell wall and liptinite fragments, resins, spores, algae, and fungal material. The amorphous matrix consists of unstructured (at 400x) huminite and liptinite. Size measurements showed that each particulate component possessed its own size distribution, which approached normality when transformed to a log2 or phi scale. Degradation of the plant material during peat accumulation probably controlled grain size in the coal types. This notion is further supported by the increased concentration of decay-resistant resin and cell fillings in the nonbanded and dull coal types. In the sampling design experiment, two blocks from each layer and two layers from each coal type were collected. On each block, 2 to 4 traverses totaling 500 point counts per block were performed to test the minimum number of points needed to characterize a block. A hierarchical analysis of variance showed that most of the petrographic variation occurred between coal types. The results from these analyses also indicated that, within a coal type, sampling should concentrate on the layer level and that only 250 point counts, split between two blocks, were needed to characterize a layer.
Aggregation in organic light emitting diodes
NASA Astrophysics Data System (ADS)
Meyer, Abigail
Organic light emitting diode (OLED) technology has great potential for becoming a solid-state lighting source. However, there are inefficiencies in OLED devices that need to be understood. Since these inefficiencies occur on a nanometer scale, there is a need for structural data at this length scale in three dimensions, which has been unattainable until now. Local Electrode Atom Probe (LEAP), a specific implementation of Atom Probe Tomography (APT), is used in this work to acquire three-dimensional morphology data on a nanometer scale with much better chemical resolution than previously achieved. Before analyzing LEAP data, simulations were used to investigate how detector efficiency, sample size and cluster size affect the data analysis, which is done using radial distribution functions (RDFs). Data are reconstructed using the LEAP software, which provides mass and position data. Two samples were then analyzed, 3% DCM2 in C60 and 2% DCM2 in Alq3. Analysis of both samples indicated that little to no clustering was present in this system.
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
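With a transport efficiency of essentially 100%, the arithmetic behind spICP-MS becomes simple: the particle number concentration is the event count divided by the volume of suspension delivered, and the spherical-equivalent diameter follows from the per-event analyte mass. The sketch below illustrates that arithmetic only; the input values are hypothetical, and the conversion of detector counts to per-event mass (via a dissolved-standard calibration) is assumed to have been done already.

```python
import math

def pnc_and_diameter(n_events, uptake_ml_per_min, acq_time_s, mass_per_event_fg,
                     density_g_cm3=21.45, mass_fraction=1.0):
    """Particle number concentration (particles/mL) and spherical-equivalent
    diameter (nm), assuming ~100 % particle transport efficiency.  The default
    density is that of platinum; mass_fraction covers particles that are not
    pure analyte."""
    volume_ml = uptake_ml_per_min * acq_time_s / 60.0
    pnc = n_events / volume_ml
    particle_mass_g = mass_per_event_fg * 1e-15 / mass_fraction
    d_cm = (6.0 * particle_mass_g / (math.pi * density_g_cm3)) ** (1.0 / 3.0)
    return pnc, d_cm * 1e7   # cm -> nm

pnc, d_nm = pnc_and_diameter(n_events=1200, uptake_ml_per_min=0.05,
                             acq_time_s=60, mass_per_event_fg=3.9)
print(f"PNC = {pnc:.2e} particles/mL, diameter = {d_nm:.0f} nm")
```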
Needs of the Learning Effect on Instructional Website for Vocational High School Students
ERIC Educational Resources Information Center
Lo, Hung-Jen; Fu, Gwo-Liang; Chuang, Kuei-Chih
2013-01-01
The purpose of study was to understand the correlation between the needs of the learning effect on instructional website for the vocational high school students. Our research applied the statistic methods of product-moment correlation, stepwise regression, and structural equation method to analyze the questionnaire with the sample size of 377…
ERIC Educational Resources Information Center
Adekunjo, Olalekan Abraham; Adepoju, Samuel Olusegun; Adeola, Anuoluwapo Odebunmi
2015-01-01
The study assessed users' information needs and satisfaction in selected seminary libraries in Oyo State, Nigeria. This paper employed the descriptive survey research design, whereby the expost-facto was employed with a sample size of three hundred (300) participants, selected from six seminaries located in Ibadan, Oyo and Ogbomoso, all in Oyo…
1991-08-01
[Search-result snippet; table-of-contents residue and page numbers removed.] "...cohorts). The abundance of individuals greater than 20 mm SL and the complexity of size demography indicated longevity of 2 to 3 years for a substantial..." The snippet also cites: Miller, A. C., and Payne, B. S. 1988. "The Need for Quantitative Sampling to Characterize Size Demography and Density." Experiment Station, Vicksburg, MS.
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vitamin D receptor gene and osteoporosis - author's response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Looney, J.E.; Yoon, Hyun Koo; Fischer, M.
1996-04-01
We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al.". The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too-small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.
Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello
2012-01-01
Background Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
Zhang, Song; Cao, Jing; Ahn, Chul
2017-02-20
We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.
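The paper's hybrid estimator and its closed-form sample size formula are not reproduced here. As a point of reference, the sketch below implements a widely used approximation for the complete-pairs case (the McNemar test for paired proportions); the discordant-cell probabilities are assumptions, and the result does not account for incomplete pairs.

```python
import math
from scipy.stats import norm

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.8):
    """Approximate number of paired subjects needed to detect a change in
    paired proportions (McNemar test), given the two discordant-cell
    probabilities p10 and p01.  Complete-pairs baseline only."""
    p_discordant = p10 + p01
    delta = p10 - p01
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (z_a * math.sqrt(p_discordant)
         + z_b * math.sqrt(p_discordant - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

print(mcnemar_sample_size(p10=0.25, p01=0.10))   # hypothetical discordant probabilities
```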
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, James; Alexander, Thomas; Aalseth, Craig
2017-08-01
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra
2015-01-01
The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...
2015-08-19
We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.
Is Some Data Better than No Data at All? Evaluating the Utility of Secondary Needs Assessment Data
ERIC Educational Resources Information Center
Shamblen, Stephen R.; Dwivedi, Pramod
2010-01-01
Needs assessments in substance abuse prevention often rely on secondary data measures of consumption and consequences to determine what population subgroup and geographic areas should receive a portion of limited resources. Although these secondary data measures have some benefits (e.g. large sample sizes, lack of survey response biases and cost),…
Development and Validation of the Caring Loneliness Scale.
Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija
2016-12-01
The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded the underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items, with Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences but, as with all instruments, further validation is needed.
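Cronbach's alpha, the main reliability statistic reported above, can be computed directly from an item-score matrix. The sketch below is a generic implementation run on simulated scores, not the CARLOS data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated 5-item scale answered by 250 respondents sharing a latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(250, 1))
scores = latent + rng.normal(scale=0.7, size=(250, 5))
print(round(cronbach_alpha(scores), 2))
```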
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
Researchers’ Intuitions About Power in Psychological Research
Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.
2016-01-01
Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203
Researchers' Intuitions About Power in Psychological Research.
Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J
2016-08-01
Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.
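The kind of formal power analysis recommended above is a one-liner in statsmodels; the example shows the per-group sample size needed for 80% power to detect a small standardized effect (Cohen's d = 0.2) with an independent-samples t test, and the power actually achieved by a more typical cell size. The cell size of 25 is an illustrative assumption.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n for 80% power, alpha = .05, small effect (d = 0.2), two-sided test
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print(round(n_per_group))    # roughly 393-394 per group

# Power achieved with a typical cell size of 25 and the same small effect
print(round(analysis.power(effect_size=0.2, nobs1=25, alpha=0.05), 2))   # around 0.1
```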
Performance of a Line Loss Correction Method for Gas Turbine Emission Measurements
NASA Astrophysics Data System (ADS)
Hagen, D. E.; Whitefield, P. D.; Lobo, P.
2015-12-01
International concern for the environmental impact of jet engine exhaust emissions in the atmosphere has led to increased attention on gas turbine engine emission testing. The Society of Automotive Engineers Aircraft Exhaust Emissions Measurement Committee (E-31) has published an Aerospace Information Report (AIR) 6241 detailing the sampling system for the measurement of non-volatile particulate matter from aircraft engines, and is developing an Aerospace Recommended Practice (ARP) for methodology and system specification. The Missouri University of Science and Technology (MST) Center for Excellence for Aerospace Particulate Emissions Reduction Research has led numerous jet engine exhaust sampling campaigns to characterize emissions at different locations in the expanding exhaust plume. Particle loss, due to various mechanisms, occurs in the sampling train that transports the exhaust sample from the engine exit plane to the measurement instruments. To account for the losses, both the size dependent penetration functions and the size distribution of the emitted particles need to be known. However in the proposed ARP, particle number and mass are measured, but size is not. Here we present a methodology to generate number and mass correction factors for line loss, without using direct size measurement. A lognormal size distribution is used to represent the exhaust aerosol at the engine exit plane and is defined by the measured number and mass at the downstream end of the sample train. The performance of this line loss correction is compared to corrections based on direct size measurements using data taken by MST during numerous engine test campaigns. The experimental uncertainty in these correction factors is estimated. Average differences between the line loss correction method and size based corrections are found to be on the order of 10% for number and 2.5% for mass.
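The correction idea described above can be sketched as follows: assume a lognormal number distribution with a fixed geometric standard deviation, recover its count median diameter from the measured mass-to-number ratio via the Hatch-Choate relation, and then average a penetration curve over that distribution to get a number-based correction factor. Everything below (the GSD, density, penetration function and input values) is a hypothetical illustration, not the procedure specified in the AIR/ARP documents or MST's implementation.

```python
import numpy as np

def cmd_from_number_and_mass(n_conc_cm3, mass_conc_ug_m3, gsd=1.8, density_g_cm3=1.0):
    """Count median diameter (nm) of an assumed lognormal exhaust aerosol,
    recovered from measured number and mass concentrations via the
    Hatch-Choate relation E[d^3] = CMD^3 * exp(4.5 * ln(GSD)^2)."""
    mass_per_particle_g = (mass_conc_ug_m3 * 1e-6) / (n_conc_cm3 * 1e6)
    mean_d3_cm3 = 6.0 * mass_per_particle_g / (np.pi * density_g_cm3)
    cmd_cm = (mean_d3_cm3 / np.exp(4.5 * np.log(gsd) ** 2)) ** (1.0 / 3.0)
    return cmd_cm * 1e7   # cm -> nm

def number_correction(cmd_nm, gsd, penetration):
    """Number-based line-loss correction factor 1 / <eta(d)>, averaging a
    user-supplied penetration function eta(d) over the lognormal number
    distribution (d in nm)."""
    d = np.logspace(0, 3, 2000)                     # 1 nm to 1 um grid
    pdf = (np.exp(-0.5 * (np.log(d / cmd_nm) / np.log(gsd)) ** 2)
           / (d * np.log(gsd) * np.sqrt(2.0 * np.pi)))
    w = pdf * np.gradient(d)
    return w.sum() / (w * penetration(d)).sum()

eta = lambda d: 1.0 - np.exp(-d / 20.0)             # hypothetical penetration curve
cmd = cmd_from_number_and_mass(n_conc_cm3=1e6, mass_conc_ug_m3=20.0)
print(f"CMD = {cmd:.1f} nm, number correction factor = "
      f"{number_correction(cmd, 1.8, eta):.2f}")
```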
Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.
Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon
2016-07-01
Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) jointed rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were analyzed separately by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results for the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ2 value and the lower the P-value (testing H0: no difference among samples). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
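SciPy provides the Friedman test used for rank data (1), (2) and (4); the Mack-Skillings test for the jointed two-replicate ranks in (3) is not in SciPy and would need a custom implementation or a dedicated package. The rank data below are simulated, not the study's sensory data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_panelists = 125

# Simulated rank data: each panelist ranks three juices 1-3, with a built-in
# preference ordering that is shuffled for 40% of panelists.
ranks = np.array([rng.permutation([1, 2, 3]) if rng.random() < 0.4
                  else np.array([1, 2, 3]) for _ in range(n_panelists)])

stat, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print(f"Friedman chi-square = {stat:.1f}, p = {p:.2e}")
```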
Sim, Julius; Lewis, Martyn
2012-03-01
To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
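One of the adjustments described above, inflating the pilot SD to its upper one-sided confidence limit via the chi-square distribution before applying the usual two-group formula, can be sketched as follows. The confidence level, pilot SD, pilot size and target difference are illustrative values, not the paper's worked examples.

```python
import math
from scipy.stats import chi2, norm

def adjusted_sample_size(sd_pilot, n_pilot, delta, alpha=0.05, power=0.8, gamma=0.80):
    """Per-group n for the main RCT using (a) the observed pilot SD and (b) the
    pilot SD inflated to its one-sided upper 100*gamma % confidence limit, so
    that the main trial retains the planned power with confidence gamma."""
    df = n_pilot - 1
    sd_upper = sd_pilot * math.sqrt(df / chi2.ppf(1 - gamma, df))
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_naive = 2 * (z * sd_pilot / delta) ** 2
    n_adjusted = 2 * (z * sd_upper / delta) ** 2
    return math.ceil(n_naive), math.ceil(n_adjusted)

print(adjusted_sample_size(sd_pilot=10.0, n_pilot=30, delta=5.0))   # (naive n, SD-adjusted n) per group
```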
In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2012-12-01
Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
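The likelihood machinery referred to above is simplest for the pure Pareto case: above an observation threshold, the maximum-likelihood estimate of the tail exponent has a closed form (the Hill estimator). The sketch below fits that exponent to a hypothetical runup catalogue; fitting the tapered Pareto's additional corner-size parameter would require numerical maximisation of its likelihood and is only noted in a comment.

```python
import numpy as np

def pareto_tail_mle(x, x_min):
    """Closed-form ML estimate of the Pareto tail exponent for observations
    above x_min, with its approximate standard error (Hill estimator)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    n = x.size
    alpha = n / np.log(x / x_min).sum()
    return alpha, alpha / np.sqrt(n)

# Hypothetical runup catalogue (metres) above a 0.5 m observation threshold
rng = np.random.default_rng(2)
runups = 0.5 * (1.0 + rng.pareto(1.2, size=52))
alpha, se = pareto_tail_mle(runups, 0.5)
print(f"alpha = {alpha:.2f} +/- {se:.2f}")
# A tapered Pareto adds a corner-size parameter; its likelihood has no
# closed-form maximum and would be fitted numerically (e.g. with scipy.optimize).
```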
Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A; Yan, Juan; Dottorini, Tania; Ellis, Keith A; Winterlich, Anthony; Kaler, Jasmeet
2018-02-01
Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, its sampling frequency and the window size of segmented signal data all have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor, yet no studies in precision livestock farming have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%-97%) was obtained using the combinations of 32 Hz with 7 s and 32 Hz with 5 s for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%-93% and F-score 88%-95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs.
Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A.; Yan, Juan; Dottorini, Tania; Ellis, Keith A.; Winterlich, Anthony
2018-01-01
Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, its sampling frequency and the window size of segmented signal data all have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor, yet no studies in precision livestock farming have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%–97%) was obtained using the combinations of 32 Hz with 7 s and 32 Hz with 5 s for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%–93% and F-score 88%–95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs. PMID:29515862
Moustakas, Aristides; Evans, Matthew R
2015-02-28
Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.
The Army Communications Objectives Measurement System (ACOMS): Survey Design
1988-04-01
monthly basis so that the annual sample includes sufficient Hispanics to detect at the .80 power level: (1) Year-to-year changes of 3% in item...Hispanics. The requirements are listed in terms of power level and must be translated into requisite sample sizes. The requirements are expressed as the...annual samples needed to detect certain differences at the 80% power level. Differences in both directions are to be examined, so that a two-tailed
Cancer-Related Fatigue and Its Associations with Depression and Anxiety: A Systematic Review
Brown, Linda F.; Kroenke, Kurt
2010-01-01
Background: Fatigue is an important symptom in cancer and has been shown to be associated with psychological distress. Objectives: This review assesses evidence regarding associations of cancer-related fatigue (CRF) with depression and anxiety. Methods: Database searches yielded 59 studies reporting correlation coefficients or odds ratios. Results: The combined sample size was 12,103. The average correlation of fatigue with depression, weighted by sample size, was 0.56; for anxiety, it was 0.46. Thirty-one instruments were used to assess fatigue, suggesting a lack of consensus on measurement. Conclusion: This review confirms the association of fatigue with depression and anxiety. Directionality needs to be better delineated in longitudinal studies. PMID:19855028
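A minimal sketch of the pooling step mentioned above, a sample-size-weighted mean of correlation coefficients; the three study values are invented for illustration and are not taken from the review.

    def weighted_mean_correlation(correlations, sample_sizes):
        """Pool correlation coefficients, weighting each study by its sample size."""
        total_n = sum(sample_sizes)
        return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n

    # Hypothetical fatigue-depression correlations from three studies
    print(weighted_mean_correlation([0.50, 0.62, 0.55], [120, 340, 210]))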
Rosenthal, Mariana; Anderson, Katey; Tengelsen, Leslie; Carter, Kris; Hahn, Christine; Ball, Christopher
2017-08-24
The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. The aim of this study was to compare Roadmap sampling recommendations with Idaho's influenza virologic surveillance to determine implementation feasibility. We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho's influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients' tested specimens to census estimates by age, sex, and health district residence. Among outpatients surveilled, Idaho's mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. ©Mariana Rosenthal, Katey Anderson, Leslie Tengelsen, Kris Carter, Christine Hahn, Christopher Ball. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 24.08.2017.
2017-01-01
Background The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. Objective The aim of this study was to compare Roadmap sampling recommendations with Idaho’s influenza virologic surveillance to determine implementation feasibility. Methods We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho’s influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients’ tested specimens to census estimates by age, sex, and health district residence. Results Among outpatients surveilled, Idaho’s mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Conclusions Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. PMID:28838883
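The weekly detection figure quoted above is consistent with the standard calculation for observing at least one positive at a given prevalence, n = ln(1 - confidence)/ln(1 - prevalence); a sketch follows, with the caveat that the Roadmap calculators themselves may embed additional assumptions (the 95% confidence level here is assumed).

    import math

    def detection_sample_size(prevalence, confidence=0.95):
        """Specimens needed so that, with the stated confidence, at least one
        positive is observed when the true prevalence equals `prevalence`."""
        return math.log(1 - confidence) / math.log(1 - prevalence)

    print(detection_sample_size(0.002))  # about 1496, matching the weekly figure quoted above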
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times, conditional on the total time of the preceding event or study start. Closed-form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
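A minimal sketch, in Python rather than the authors' R script, of the inversion idea described above: on a total time scale the next event time solves H(t) - H(s) = E, with E a unit exponential, shown here for a Weibull-type baseline hazard with one fixed covariate; random effects and risk-free intervals are omitted, and all parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_total_time_events(x, beta=0.5, lam=0.3, shape=1.4, cens=5.0):
        """Simulate recurrent event times whose hazard h(t) = lam*shape*t**(shape-1)*exp(beta*x)
        is defined on the total time since study start (Andersen-Gill-type structure)."""
        rate = lam * np.exp(beta * x)
        events, t = [], 0.0
        while True:
            e = rng.exponential(1.0)                      # unit-exponential increment of cumulative hazard
            t = (t ** shape + e / rate) ** (1.0 / shape)  # invert H(t) - H(previous event) = e
            if t > cens:                                  # administrative censoring
                return events
            events.append(t)

    # e.g. event times for one treated (x=1) and one control (x=0) subject
    print(simulate_total_time_events(1.0))
    print(simulate_total_time_events(0.0))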
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
Barker, C.E.; Pawlewicz, M.J.
1993-01-01
In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction because the standard deviations of DOM reflectance distributions are typically larger, indicating a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics: mean, standard deviation, skewness, and kurtosis in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico geothermal system which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5% and always to within 12% of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≥ 20. This preliminary study suggests that a V ≤ 0.2 indicates a reliable mean, whereas a V > 0.2 suggests an unreliable mean in such small samples. © 1993.
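A minimal sketch of the incremental procedure described above: recompute the mean and the coefficient of variation V after every 10 measurements and track the departure from the final estimate; the reflectance values are simulated stand-ins, not Cerro Prieto data.

    import numpy as np

    rng = np.random.default_rng(2)
    rv = rng.normal(0.8, 0.12, size=60)   # simulated vitrinite reflectance values (%Rv-r)

    final_mean = rv.mean()
    for n in range(10, len(rv) + 1, 10):
        subset = rv[:n]
        mean, sd = subset.mean(), subset.std(ddof=1)
        cv = sd / mean                    # coefficient of variation V
        pct_off = 100 * abs(mean - final_mean) / final_mean
        print(f"n={n:3d}  mean={mean:.3f}  V={cv:.2f}  |diff from final|={pct_off:.1f}%")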
ERIC Educational Resources Information Center
Oyewole, Olawale; Adetimirin, Airen
2015-01-01
Lecturers and postgraduates are among the users of university libraries, and their perception of the libraries influences utilization of the information resources, hence the need for this study. A survey method was adopted for the study, and simple random sampling was used to select a sample of 38 lecturers and 233 postgraduates.…
The use of coliform plate count data to assess stream sanitary and ecological condition is limited by the need to store samples at 4 °C and analyze them within a 24-hour period. We are testing LH-PCR as an alternative tool to assess the bacterial load of streams, offering a cost ...
78 FR 17921 - Notice of Intent To Seek Reinstatement of an Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice must be received by May 24, 2013 to be assured of...
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies, and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have also recently emerged in genomic applications; however, they exhibit some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows us to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
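The closed-form Bayesian nonparametric estimator derived in the paper is not reproduced here; purely as a point of reference, the sketch below implements the much simpler classical Good-Turing estimate of the probability that the next observation is a previously unseen species, using an invented toy sample.

    from collections import Counter

    def good_turing_new_species_prob(sample):
        """Classical Good-Turing estimate: P(next draw is unseen) ~ m1/n,
        where m1 is the number of species observed exactly once."""
        counts = Counter(sample)
        m1 = sum(1 for c in counts.values() if c == 1)
        return m1 / len(sample)

    # Hypothetical EST-like sample of gene labels
    sample = ["g1", "g2", "g2", "g3", "g4", "g4", "g4", "g5", "g6", "g6"]
    print(good_turing_new_species_prob(sample))  # 3 singletons out of 10 reads -> 0.3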
Nutrition labeling and value size pricing at fast-food restaurants: a consumer perspective.
O'Dougherty, Maureen; Harnack, Lisa J; French, Simone A; Story, Mary; Oakes, J Michael; Jeffery, Robert W
2006-01-01
This pilot study examined nutrition-related attitudes that may affect food choices at fast-food restaurants, including consumer attitudes toward nutrition labeling of fast foods and elimination of value size pricing. A convenience sample of 79 fast-food restaurant patrons aged 16 and above (78.5% white, 55% female, mean age 41.2 [17.1]) selected meals from fast-food restaurant menus that varied as to whether nutrition information was provided and value pricing included and completed a survey and interview on nutrition-related attitudes. Only 57.9% of participants rated nutrition as important when buying fast food. Almost two thirds (62%) supported a law requiring nutrition labeling on restaurant menus. One third (34%) supported a law requiring restaurants to offer lower prices on smaller instead of bigger-sized portions. This convenience sample of fast-food patrons supported nutrition labels on menus. More research is needed with larger samples on whether point-of-purchase nutrition labeling at fast-food restaurants raises perceived importance of nutrition when eating out.
An overview of the genetic dissection of complex traits.
Rao, D C
2008-01-01
Thanks to the recent revolutionary genomic advances such as the International HapMap consortium, resolution of the genetic architecture of common complex traits is beginning to look hopeful. While demonstrating the feasibility of genome-wide association (GWA) studies, the pathbreaking Wellcome Trust Case Control Consortium (WTCCC) study also serves to underscore the critical importance of very large sample sizes and draws attention to potential problems, which need to be addressed as part of the study design. Even the large WTCCC study had vastly inadequate power for several of the associations reported (and confirmed) and, therefore, most of the regions harboring relevant associations may not be identified anytime soon. This chapter provides an overview of some of the key developments in the methodological approaches to genetic dissection of common complex traits. Constrained Bayesian networks are suggested as especially useful for analysis of pathway-based SNPs. Likewise, composite likelihood is suggested as a promising method for modeling complex systems. It discusses the key steps in a study design, with an emphasis on GWA studies. Potential limitations highlighted by the WTCCC GWA study are discussed, including problems associated with massive genotype imputation, analysis of pooled national samples, shared controls, and the critical role of interactions. GWA studies clearly need massive sample sizes that are only possible through genuine collaborations. After all, for common complex traits, the question is not whether we can find some pieces of the puzzle, but how large and what kind of a sample we need to (nearly) solve the genetic puzzle.
Matteson, M T; Ivancevich, J M; McMahon, J T
1977-08-01
This study examines the role of 1) personal job-related needs and 2) certain organizational characteristics in affecting overall job satisfaction for a sample of 259 laboratory professionals, primarily medical technologists. Specific individual needs and specific organizational characteristics were found to be related to three measures of overall job satisfaction. Additional comparisons were made for administrators versus non-administrators and for differences associated with different sized organizations. Implications for the managers of medical technologists and other laboratory professionals are discussed.
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu
In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
Suhonen, Riitta; Stolt, Minna; Katajisto, Jouko; Leino-Kilpi, Helena
2015-12-01
To report a review of the quality of the sampling, sample and data collection procedures in empirical nursing research on ethical climate where nurses were informants. Surveys are needed to obtain generalisable information about topics sensitive to nursing. The methodological quality of the studies is of key concern, especially the description of sampling and data collection procedures. The design was a methodological literature review. Using the electronic MEDLINE database, empirical nursing research articles focusing on ethical climate were accessed in 2013 (from the earliest records to 22 November 2013). Using the search terms 'ethical' AND ('climate*' OR 'environment*') AND ('nurse*' OR 'nursing'), 376 citations were retrieved. Based on a four-phase retrieval process, 26 studies were included in the detailed analysis. The sampling method was reported in 58% of the studies, and it was random in a minority of the studies (26%). The identification of the target sample and its size was reported in most studies (92%), whereas justification for the sample size was less often given. In over two-thirds (69%) of the studies with an identifiable response rate, it was below 75%. A variety of data collection procedures were used, with a large amount of missing data about the details of who distributed, recruited and collected the questionnaires. Methods to increase response rates were seldom described. Discussion about nonresponse, representativeness of the sample and generalisability of the results was missing in many studies. This review highlights the methodological challenges and developments that need to be considered in ensuring the use of valid information in developing health care through research findings. © 2015 Nordic College of Caring Science.
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.
Ren, Juan; Zou, Qingze
2014-07-01
In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
Intra-class correlation estimates for assessment of vitamin A intake in children.
Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D
2005-03-01
In many community-based surveys, multi-level sampling is inherent in the design. In the design of these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and variance components were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
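A minimal sketch of how such ICC estimates are typically used at the design stage: inflate the simple-random-sample size by the design effect 1 + (m - 1)*ICC; the ICC, cluster size and base sample size below are illustrative values only.

    import math

    def design_effect(icc, cluster_size):
        """Variance inflation factor for cluster sampling with equal cluster sizes."""
        return 1 + (cluster_size - 1) * icc

    def clustered_sample_size(n_srs, icc, cluster_size):
        """Sample size needed under cluster sampling, given the size n_srs
        that a simple random sample would require."""
        return math.ceil(n_srs * design_effect(icc, cluster_size))

    # e.g. a village-level ICC of 0.05 with 16 households sampled per village
    print(design_effect(0.05, 16))               # 1.75
    print(clustered_sample_size(400, 0.05, 16))  # 700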
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal, but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
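A minimal sketch of the cutoff rule described above: take the smallest number of vessels for which the standard error of the mean is no more than a set fraction (15% here) of the population mean; the synthetic velocity values stand in for the measured vessel population.

    import numpy as np

    def min_vessel_sample(values, ratio_cutoff=0.15):
        """Smallest n with standard error of the mean <= ratio_cutoff * population mean,
        treating `values` as the full measured vessel population."""
        mean, sd = values.mean(), values.std(ddof=1)
        n = int(np.ceil((sd / (ratio_cutoff * mean)) ** 2))
        return max(n, 2)

    rng = np.random.default_rng(3)
    velocities = rng.gamma(shape=2.0, scale=0.25, size=300)  # synthetic, right-skewed velocities (mm/s)
    print(min_vessel_sample(velocities))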
Royer, Danielle F; Lockwood, Charles A; Scott, Jeremiah E; Grine, Frederick E
2009-10-01
Previous studies of the Middle Stone Age human remains from Klasies River have concluded that they exhibited more sexual dimorphism than extant populations, but these claims have not been assessed statistically. We evaluate these claims by comparing size variation in the best-represented elements at the site, namely the mandibular corpora and M(2)s, to that in samples from three recent human populations using resampling methods. We also examine size variation in these same elements from seven additional middle and late Pleistocene sites: Skhūl, Dolní Vestonice, Sima de los Huesos, Arago, Krapina, Shanidar, and Vindija. Our results demonstrate that size variation in the Klasies assemblage was greater than in recent humans, consistent with arguments that the Klasies people were more dimorphic than living humans. Variation in the Skhūl, Dolní Vestonice, and Sima de los Huesos mandibular samples is also higher than in the recent human samples, indicating that the Klasies sample was not unusual among middle and late Pleistocene hominins. In contrast, the Neandertal samples (Krapina, Shanidar, and Vindija) do not evince relatively high mandibular and molar variation, which may indicate that the level of dimorphism in Neandertals was similar to that observed in extant humans. These results suggest that the reduced levels of dimorphism in Neandertals and living humans may have developed independently, though larger fossil samples are needed to test this hypothesis.
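A minimal sketch of a resampling comparison of the general kind described: draw repeated same-sized subsamples from a recent-human reference and ask how often their coefficient of variation reaches that of the fossil sample; every value below is a synthetic placeholder rather than Klasies or comparative data.

    import numpy as np

    def cv(x):
        return np.std(x, ddof=1) / np.mean(x)

    def resampling_p_value(fossil, recent, n_iter=10000, seed=4):
        """Proportion of same-sized subsamples of the recent reference sample
        whose CV is at least as large as the fossil sample's CV."""
        rng = np.random.default_rng(seed)
        target = cv(fossil)
        hits = sum(cv(rng.choice(recent, size=len(fossil), replace=False)) >= target
                   for _ in range(n_iter))
        return hits / n_iter

    rng = np.random.default_rng(5)
    recent = rng.normal(30.0, 2.5, size=200)           # synthetic recent-human corpus heights (mm)
    fossil = np.array([26.0, 29.5, 34.0, 37.5, 31.0])  # synthetic fossil measurements
    print(resampling_p_value(fossil, recent))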
Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.
Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U
2013-11-01
X-ray nanobeams present the opportunity to obtain structural insight into materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow its use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.
Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging
NASA Astrophysics Data System (ADS)
Evans, P. G.; Chahine, G.; Grifone, R.; Jacques, V. L. R.; Spalenka, J. W.; Schülli, T. U.
2013-11-01
X-ray nanobeams present the opportunity to obtain structural insight into materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow its use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.
Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.
Gupte, M D; Narasimhamurthy, B
1999-06-01
In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
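A minimal sketch of an LQAS-style calculation under the Poisson approximation: search for the smallest sample size and critical count that hold the type I error at 5% and the power at 90% when separating an area at the 1 per 10,000 target from one well below it; the lower prevalence of 0.2 per 10,000 is an illustrative choice, not the paper's specification.

    from scipy.stats import poisson

    def lqas_design(p_low, p_high, alpha=0.05, power=0.90, step=500, n_max=200000):
        """Smallest sample size n and critical count c such that
        P(cases >= c | p_low) <= alpha and P(cases >= c | p_high) >= power,
        with case counts approximated as Poisson(n * p)."""
        for n in range(step, n_max + 1, step):
            for c in range(1, 50):
                err1 = poisson.sf(c - 1, n * p_low)   # P(X >= c) under the low prevalence
                pwr = poisson.sf(c - 1, n * p_high)   # P(X >= c) under the target prevalence
                if err1 <= alpha and pwr >= power:
                    return n, c
        return None

    # e.g. distinguish 0.2 per 10,000 (well below target) from 1 per 10,000 (target level)
    print(lqas_design(0.2 / 10000, 1.0 / 10000))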
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling is characterized by samples of variable size and has the advantage of reducing sampling time and costs compared with fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed over the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to a contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
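For reference, a sketch of Wald-type sequential decision lines; note that the plans in the study were built on the negative binomial distribution, whereas this simplified version treats each plot as a binomial presence/absence trial, and the 1% and 3% boundary proportions are placeholders rather than the study's thresholds.

    import math

    def sprt_lines(p0, p1, alpha=0.10, beta=0.10):
        """Intercepts and common slope of Wald's sequential decision lines for
        presence/absence sampling: keep sampling while
        lower + slope*n < infested_count < upper + slope*n."""
        k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
        slope = math.log((1 - p0) / (1 - p1)) / k
        upper = math.log((1 - beta) / alpha) / k
        lower = math.log(beta / (1 - alpha)) / k
        return lower, upper, slope

    # Hypothetical thresholds: accept at 1% infested plants, reject (treat) at 3%
    lower, upper, slope = sprt_lines(0.01, 0.03)
    for n in (10, 20, 40, 80):
        print(n, round(lower + slope * n, 2), round(upper + slope * n, 2))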
A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water
Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo
2013-01-01
Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed in the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on the pressure of CD4 allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned in the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster' of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design, however implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design; and provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal the precision gained by crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. By illustrating how the parameters required for sample size calculations arise from the CRXO design and by providing guidance on both how to choose values for the parameters and perform the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve.
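A minimal sketch under one common formulation (an assumption here; the tutorial works through the full formulae), in which the cross-sectional two-period CRXO design effect is 1 + (m - 1)*WPC - m*BPC, showing how the number of clusters responds to the WPC and BPC; the input values are illustrative.

    import math

    def crxo_design_effect(m, wpc, bpc):
        """Design effect for a cross-sectional two-period CRXO trial with m subjects
        per cluster-period, within-period correlation WPC and between-period correlation BPC."""
        return 1 + (m - 1) * wpc - m * bpc

    def crxo_clusters(n_individual, m, wpc, bpc):
        """Clusters needed, given the total sample size an individually randomised
        parallel-group trial would require for the same power."""
        n_total = n_individual * crxo_design_effect(m, wpc, bpc)
        return math.ceil(n_total / (2 * m))  # each cluster contributes two cluster-periods

    # Illustrative ICU-like values: 100 admissions per cluster-period
    print(crxo_clusters(n_individual=1000, m=100, wpc=0.05, bpc=0.02))  # BPC below WPC
    print(crxo_clusters(n_individual=1000, m=100, wpc=0.05, bpc=0.0))   # no between-period correlation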
Alcohol marketing research: the need for a new agenda.
Meier, Petra S
2011-03-01
This paper aims to contribute to a rethink of marketing research priorities to address policy makers' evidence needs in relation to alcohol marketing. It is a discussion paper reviewing evidence gaps identified during an appraisal of policy options to restrict alcohol marketing. Evidence requirements can be categorized as follows: (i) the size of marketing effects for the whole population and for policy-relevant population subgroups, (ii) the balance between immediate and long-term effects and the time lag, duration and cumulative build-up of effects and (iii) comparative effects of partial versus comprehensive marketing restrictions on consumption and harm. These knowledge gaps impede the appraisal and evaluation of existing and new interventions, because without understanding the size and timing of expected effects, researchers may choose inadequate time-frames, samples or sample sizes. To date, research has tended to rely on simplified models of marketing and has focused disproportionately on youth populations. The effects of cumulative exposure across multiple marketing channels, targeting of messages at certain population groups and indirect effects of advertising on consumption remain unclear. It is essential that studies into marketing effect sizes are geared towards informing policy decision-makers, anchored strongly in theory, use measures of effect that are well-justified and recognize fully the complexities of alcohol marketing efforts. © 2010 The Author, Addiction © 2010 Society for the Study of Addiction.
Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,
2001-01-01
Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 ug/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized even when low-flow sample-collection techniques are used in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Public health financial management needs: report of a national survey.
Costich, Julia F; Honoré, Peggy A; Scutchfield, F Douglas
2009-01-01
The work reported here builds on the identification of public health financial management practice competencies by a national expert panel. The next logical step was to provide a validity check for the competencies and identify priority areas for educational programming. We developed a survey for local public health finance officers based on the public health finance competencies and field tested it with a convenience sample of officials. We asked respondents to indicate the importance of each competency area and the need for training to improve performance; we also requested information regarding respondent education, jurisdiction size, and additional comments. Our local agency survey sample drew on the respondent list from the National Association of County and City Health Officials 2005 local health department survey, stratified by agency size and limited to jurisdiction populations of 25,000 to 1,000,000. Identifying appropriate respondents was a major challenge. The survey was fielded electronically, yielding 112 responses from 30 states. The areas identified as most important and needing most additional training were knowledge of budget activities, financial data interpretation and communication, and ability to assess and correct the organization's financial status. The majority of respondents had some postbaccalaureate education. Many provided additional comments and recommendations. Health department finance officers demonstrated a high level of general agreement regarding the importance of finance competencies in public health and the need for training. The findings point to a critical need for additional training opportunities that are accessible, cost-effective, and targeted to individual needs.
NASA Astrophysics Data System (ADS)
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
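A minimal sketch of two of the estimators compared, abundance-based Chao1 and the incidence-based first-order jackknife (Jack1); the species counts and plot-by-species incidences are toy data.

    import numpy as np

    def chao1(abundances):
        """Chao1: S_obs + F1^2 / (2*F2), where F1 and F2 are the numbers of species
        observed exactly once and exactly twice (classic form; fallback when F2 = 0)."""
        a = np.asarray(abundances)
        s_obs = np.sum(a > 0)
        f1, f2 = np.sum(a == 1), np.sum(a == 2)
        return s_obs + f1 ** 2 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

    def jack1(incidence_matrix):
        """First-order jackknife on a plots-by-species incidence matrix:
        S_obs + Q1*(m-1)/m, with Q1 = species found in exactly one plot."""
        inc = np.asarray(incidence_matrix) > 0
        m = inc.shape[0]
        present = inc.sum(axis=0)
        s_obs = np.sum(present > 0)
        q1 = np.sum(present == 1)
        return s_obs + q1 * (m - 1) / m

    counts = [12, 5, 1, 1, 2, 1, 3, 2, 1]               # toy species abundances
    plots = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]  # 3 plots x 4 species incidences
    print(chao1(counts), jack1(plots))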
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects in the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle has been addressed in previous studies, but no attempt to critically investigate these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples from 1 to 15 needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied on adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit in a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement on the estimation of the overall adipocyte cellularity parameters was observed using both sampling techniques when sample size number increased from 1 to 15 samples, considering both techniques' acceptance ratio increased from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameters estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
NASA Technical Reports Server (NTRS)
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data was evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the pattern of the stratification well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when stratified sampling is applied rather than random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations, but differences between crop models must be considered, as the choice of a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
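A minimal sketch of the stratified-versus-random comparison on a synthetic yield field; the gradient, noise level, and ten strata are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical regional "yield" field with a spatial gradient plus noise
n_points = 34078
gradient = np.linspace(4.0, 9.0, n_points)            # t/ha trend across the region
yields = gradient + rng.normal(0.0, 0.8, n_points)
edges = np.quantile(gradient, np.linspace(0, 1, 11)[1:-1])
strata = np.digitize(gradient, edges)                  # 10 strata of equal size

def random_sample_mean(n):
    idx = rng.choice(n_points, n, replace=False)
    return yields[idx].mean()

def stratified_sample_mean(n):
    # allocate n points roughly equally over the 10 strata
    picks = []
    for s in np.unique(strata):
        members = np.flatnonzero(strata == s)
        picks.append(rng.choice(members, max(1, n // 10), replace=False))
    return yields[np.concatenate(picks)].mean()

true_mean = yields.mean()
for n in (10, 30, 100):
    err_r = abs(random_sample_mean(n) - true_mean)
    err_s = abs(stratified_sample_mean(n) - true_mean)
    print(f"n={n}: |error| random={err_r:.3f}, stratified={err_s:.3f}")
```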
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and readily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily when a conservative value for the field size and crop statistics from the small political subdivision level are used, as judged by comparing the estimated stratum variances to those obtained using the LANDSAT data.
Variable aperture-based ptychographical iterative engine method
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in a wide range of scientific research.
NASA Astrophysics Data System (ADS)
Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.
2010-03-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid non-destructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S
2017-10-04
Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The aim was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. Number of outcome measures, number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and groups compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features of the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.
Measuring size evolution of distant, faint galaxies in the radio regime
NASA Astrophysics Data System (ADS)
Lindroos, L.; Knudsen, K. K.; Stanley, F.; Muxlow, T. W. B.; Beswick, R. J.; Conway, J.; Radcliffe, J. F.; Wrigley, N.
2018-05-01
We measure the evolution of sizes for star-forming galaxies as seen in 1.4 GHz continuum radio for z = 0-3. The measurements are based on combined VLA+MERLIN data of the Hubble Deep Field, using a uv-stacking algorithm combined with model fitting to estimate the average sizes of galaxies. A sample of ˜1000 star-forming galaxies is selected from optical and near-infrared catalogues, with stellar masses M* ≈ 10^10-10^11 M⊙ and photometric redshifts 0-3. The median sizes are parametrized for stellar mass M* = 5 × 10^10 M⊙ as R_e = A × (H(z)/H(1.5))^(α_z). We find that the median radio sizes evolve towards larger sizes at later times with α_z = -1.1 ± 0.6, and A (the median size at z ≈ 1.5) is found to be 0.26″ ± 0.07″ or 2.3 ± 0.6 kpc. The measured radio sizes are typically a factor of 2 smaller than those measured in the optical, and are also smaller than the typical Hα sizes in the literature. This indicates that star formation, as traced by the radio continuum, is typically concentrated towards the centre of galaxies for the sampled redshift range. Furthermore, the discrepancy between sizes measured with different tracers of star formation indicates the need for models of size evolution to adopt a multiwavelength approach in the measurement of the sizes of star-forming regions.
The Relationship between Organizational Culture Types and Innovation in Aerospace Companies
NASA Astrophysics Data System (ADS)
Nelson, Adaora N.
Innovation in the aerospace industry has proven to be an effective strategy for competitiveness and sustainability. The organizational culture of the firm must be conducive to innovation. The problem was that although innovation is needed for aerospace companies to be competitive and sustainable, certain organizational culture issues might hinder leaders from successfully innovating (Emery, 2010; Ramanigopal, 2012). The purpose of this study was to assess the relationship between hierarchical, clan, adhocracy, and market organizational culture types and innovation in aerospace companies within the U.S. while controlling for company size and length of time in business. The non-experimental quantitative study included a random sample of 136 aerospace leaders in the U.S. There was a significant relationship between market organizational culture and innovation, F(1,132) = 4.559, p = .035. No significant relationships were found between hierarchical organizational culture and innovation or between clan culture and innovation. The relationship between adhocracy culture and innovation was not significant, possibly due to an inadequate sample size. Company size was shown to be a justifiable covariate in the study, due to a significant relationship with innovation (F(1, 130) = 4.66, p < .1, r = .19). Length of time in business had no relationship with innovation. The findings imply that market organizational cultures are more likely to result in innovative outcomes in the aerospace industry. Organizational leaders are encouraged to adopt a market culture and adopt smaller organizational structures. Recommendations for further research include investigating the relationship between adhocracy culture and innovation using an adequate sample size. Research is needed to determine other variables that predict innovation. This study should be repeated at periodic intervals and across other industrial sectors and countries.
Melvin, Elizabeth M; Moore, Brandon R; Gilchrist, Kristin H; Grego, Sonia; Velev, Orlin D
2011-09-01
The recent development of microfluidic "lab on a chip" devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing.
Single-image diffusion coefficient measurements of proteins in free solution.
Zareh, Shannon Kian; DeSantis, Michael C; Kessler, Jonathan M; Li, Je-Luen; Wang, Y M
2012-04-04
Diffusion coefficient measurements are important for many biological and material investigations, such as studies of particle dynamics and kinetics, and size determinations. Among current measurement methods, single particle tracking (SPT) offers the unique ability to simultaneously obtain location and diffusion information about a molecule while using only femtomoles of sample. However, the temporal resolution of SPT is limited to seconds for single-color-labeled samples. By directly imaging three-dimensional diffusing fluorescent proteins and studying the widths of their intensity profiles, we were able to determine the proteins' diffusion coefficients using single protein images of submillisecond exposure times. This simple method improves the temporal resolution of diffusion coefficient measurements to submilliseconds, and can be readily applied to a range of particle sizes in SPT investigations and applications in which diffusion coefficient measurements are needed, such as reaction kinetics and particle size determinations.
Naval Medical Research and Development News. Volume 7, Issue 9
2015-09-01
satisfaction with the simulated training; career intentions; and general, occupational, and task-specific self-efficacy using pretest and post-test ...samples needed to be transported to the labs for testing. What was needed was a rapid, on-site diagnostic test that could be done quickly. "The U.S...relatively small size of the group -- usually only a handful of people per deployment - required members to juggle multiple tasks on their own, including
Using simulation to aid trial design: Ring-vaccination trials.
Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc
2017-03-01
The 2014-2016 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings, and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
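For readers who want a feel for the "standard calculation versus simulation" comparison, the sketch below pairs a classical two-proportion sample size formula with a crude individual-level power simulation that ignores clustering and indirect effects. The attack rates are illustrative assumptions, not the paper's parameter values.

```python
import numpy as np
from scipy import stats

def two_prop_sample_size(p1, p0, alpha=0.05, power=0.80):
    """Classical per-arm sample size for detecting a difference in attack rates."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    pbar = (p0 + p1) / 2
    num = (za * np.sqrt(2 * pbar * (1 - pbar))
           + zb * np.sqrt(p0 * (1 - p0) + p1 * (1 - p1)))
    return int(np.ceil((num / (p0 - p1)) ** 2))

def simulated_power(n_per_arm, p1, p0, n_sim=2000, alpha=0.05, seed=2):
    """Crude power estimate by simulation, no clustering or indirect effects."""
    rng = np.random.default_rng(seed)
    zcrit = stats.norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        x0 = rng.binomial(n_per_arm, p0)   # delayed-vaccination arm cases
        x1 = rng.binomial(n_per_arm, p1)   # immediate-vaccination arm cases
        p_pool = (x0 + x1) / (2 * n_per_arm)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_per_arm)
        if se > 0:
            hits += abs((x0 - x1) / n_per_arm) / se > zcrit
    return hits / n_sim

p_delayed, p_immediate = 0.02, 0.008       # illustrative attack rates only
n = two_prop_sample_size(p_immediate, p_delayed)
print(n, "per arm; simulated power ~", simulated_power(n, p_immediate, p_delayed))
```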
Correlates of self worth and body size dissatisfaction among obese Latino youth
Mirza, Nazrat M; Mackey, Eleanor Race; Armstrong, Bridget; Jaramillo, Ana; Palmer, Matilde M
2011-01-01
The current study examined self-worth and body size dissatisfaction, and their association with maternal acculturation among obese Latino youth enrolled in a community-based obesity intervention program. Upon entry to the program, a sample of 113 participants reported global self-worth comparable to general population norms, but lower athletic competence and perception of physical appearance. Interestingly, body size dissatisfaction was more prevalent among younger respondents. Youth body size dissatisfaction was associated with less acculturated mothers and higher maternal dissatisfaction with their child's body size. By contrast, although global self-worth was significantly related to body dissatisfaction, it was not influenced by mothers’ acculturation or dissatisfaction with their own or their child’s body size. Obesity intervention programs targeted to Latino youth need to address self-worth concerns among the youth as well as addressing maternal dissatisfaction with their children’s body size. PMID:21354881
Herath, Samantha; Yap, Elaine
2018-02-01
In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method in comparison to CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method by adding cryobiopsy via the R-EBUS Guide Sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of cryobiopsy samples was twice the size of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in the diagnostic yield and reduction in the need for repeat procedures, without hindering the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure.
Self-objectification and disordered eating: A meta-analysis.
Schaefer, Lauren M; Thompson, J Kevin
2018-06-01
Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research.
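For readers unfamiliar with how correlation-based effect sizes are combined across studies, the sketch below shows a standard random-effects (DerSimonian-Laird) pooling via Fisher's z. The three studies are hypothetical examples, not drawn from the 53 analyzed above.

```python
import numpy as np

def pool_correlations(r, n):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher's z."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                 # Fisher's z transform
    v = 1.0 / (n - 3.0)               # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)
    k = len(r)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)           # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    return np.tanh(z_re), tau2        # pooled r and between-study variance

# three hypothetical studies with correlations near the reported r = .39
print(pool_correlations([0.35, 0.42, 0.38], [120, 200, 150]))
```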
ERIC Educational Resources Information Center
European Social Fund, Dublin (Ireland).
A study examined attitudes of Irish employers toward vocational training (VT) activities, state agencies responsible for administering VT, and the skills that employees would need in the future. Of a sample of 500 firms that were selected as being representative from the standpoints of size, sector, location, and form of ownership, 219 were…
Unique technical innovations for short rotation woody crops research and development
Adam H. Wiese; Ronald S., Jr. Zalesny
2006-01-01
Often technology that is available to conduct short rotation woody crops (SRWC) research is too expensive, difficult to operate, cumbersome, and/or impractical for meeting sample size requirements. Thus, we have designed, constructed, and tested technical innovations that have allowed us to meet our specific experimental needs.
Toxicological assessment of environmentally-realistic complex mixtures of drinking-water disinfection byproducts (DBPs) are needed to address concerns raised by some epidemiological studies showing associations between exposure to chemically disinfected water and adverse reproduc...
There is a critical need to assess the health effects associated with exposure of commercially produced NPs across the size ranges reflective of that detected in the industrial sectors that are generating, as well as incorporating, NPs into products. Generation of stable and low ...
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions or models are fitted and validated using data from a small number of selected states, they need to be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The analysis indicated that as the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process.
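The paper's simulation design is not reproduced here; the sketch below only illustrates the general idea under simplifying assumptions (gamma-distributed predicted counts, Poisson observed counts, and a calibration factor estimated as the ratio of observed to predicted totals). It is not the HSM's actual SDF calibration procedure.

```python
import numpy as np

def simulate_calibration(true_c, n_sites, n_rep=5000, mean_pred=2.0, seed=3):
    """Bias of the estimated calibration factor C_hat = sum(obs) / sum(pred)."""
    rng = np.random.default_rng(seed)
    bias = []
    for _ in range(n_rep):
        predicted = rng.gamma(shape=2.0, scale=mean_pred / 2.0, size=n_sites)
        observed = rng.poisson(true_c * predicted)   # site-level crash counts
        bias.append(observed.sum() / predicted.sum() - true_c)
    return np.mean(bias), np.std(bias)

for c in (0.8, 1.0, 1.4):                # assumed "true" calibration factors
    for n in (50, 200, 800):             # assumed calibration sample sizes
        b, sd = simulate_calibration(c, n)
        print(f"true C={c}, sites={n}: mean bias={b:+.3f}, sd={sd:.3f}")
```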
Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions
NASA Astrophysics Data System (ADS)
Ryan, A. J.; Christensen, P. R.
2017-12-01
Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.
A low-volume cavity ring-down spectrometer for sample-limited applications
NASA Astrophysics Data System (ADS)
Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.
2014-08-01
In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.
NASA Astrophysics Data System (ADS)
Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.
2016-11-01
High sediment loads passing through hydropower plants erode the hydraulic components, resulting in loss of efficiency, interruptions in power production, and downtime for repair and maintenance, especially in Himalayan regions. The size and concentration of sediment play a major role in silt erosion. The traditional process of collecting samples manually for laboratory analysis cannot meet the need to monitor temporal variation in sediment properties. In this study, a multi-frequency acoustic instrument was applied at a desilting chamber to monitor the sediment size and concentration entering the turbine. The sediment size and concentration entering the turbine were also measured with manual samples collected twice daily. The manually collected samples were analysed in the laboratory with a laser diffraction instrument for size and concentration, in addition to drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which was then combined with the drying-method results to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to those from the drying and filtering methods. However, in this first field application, no good agreement was found between the mean grain size from the acoustic method, at its current stage of development, and that from the laser diffraction method. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the results. As the instrument is able to capture the concentration, and in the future most likely a more accurate mean grain size, of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna
2014-02-01
A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide by size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool which would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which otherwise damages the packing of chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) water solutions of fibroin improved the quality of the chromatograms.
Total Water Content Measurements with an Isokinetic Sampling Probe
NASA Technical Reports Server (NTRS)
Reehorst, Andrew L.; Miller, Dean R.; Bidwell, Colin S.
2010-01-01
The NASA Glenn Research Center has developed a Total Water Content (TWC) Isokinetic Sampling Probe. Since it is not sensitive to either cloud water particle phase or size, it is particularly attractive for supporting super-cooled large droplet and high ice water content aircraft icing studies. The instrument comprises the Sampling Probe, Sample Flow Control, and Water Vapor Measurement subsystems. Analysis and testing have been conducted on the subsystems to ensure their proper function and accuracy. End-to-end bench testing has also been conducted to ensure the reliability of the entire instrument system. A Stokes-number-based collection efficiency correction was developed to correct for probe thickness effects. The authors further discuss the need to ensure that no condensation occurs within the instrument plumbing. Instrument measurements compared to facility calibrations from testing in the NASA Glenn Icing Research Tunnel are presented and discussed. There appear to be liquid water content and droplet size effects in the differences between the two measurement techniques.
Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A
2013-01-01
The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50-1.03), regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
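A minimal sketch contrasting Cohen's d with the paired t-test p-value across documentation counts like those reported above (6 to 906). The 1-5 ratings are simulated for illustration, not Omaha System data, and d is computed here from the SD of the change scores.

```python
import numpy as np
from scipy import stats

def paired_d_and_p(admit, discharge):
    """Cohen's d for paired change scores alongside the paired t-test p-value."""
    diff = np.asarray(discharge, float) - np.asarray(admit, float)
    d = diff.mean() / diff.std(ddof=1)
    t, p = stats.ttest_rel(discharge, admit)
    return d, p

rng = np.random.default_rng(4)
for n in (6, 100, 900):                          # documentation counts, as above
    admit = rng.normal(3.0, 0.8, n)              # illustrative 1-5 scale ratings
    discharge = admit + rng.normal(0.15, 0.8, n) # small true improvement
    d, p = paired_d_and_p(admit, discharge)
    print(f"n={n}: d={d:.2f}, p={p:.3g}")        # same effect, very different p
```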
Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?
Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve
2016-03-01
Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored whether compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset, and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying reductions in sample numbers (averaging 34-72%), but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern.
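A toy sketch of the 5 cm length-bin compositing idea: each bin contributes one pooled measurement instead of many individual analyses. The length-mercury relationship and fish lengths are invented for illustration, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(5)
length_cm = rng.uniform(20, 60, 200)                      # hypothetical fish lengths
log_hg = -3.0 + 0.03 * length_cm + rng.normal(0, 0.3, 200)
hg_ppm = np.exp(log_hg)                                   # Hg rises with fish size

# composite within 5 cm length bins: one pooled (averaged) measurement per bin
bins = (length_cm // 5).astype(int)
composite_means = np.array([hg_ppm[bins == b].mean() for b in np.unique(bins)])

print("individual samples analysed:", len(hg_ppm))
print("composite samples analysed:", len(composite_means))
# note: averaging bin means weights each bin equally, so this only approximates
# the overall mean when bins hold unequal numbers of fish
print(f"overall mean, individual vs composite: "
      f"{hg_ppm.mean():.3f} vs {composite_means.mean():.3f} ppm")
```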
Sediment quantity and quality in three impoundments in Massachusetts
Zimmerman, Marc James; Breault, Robert F.
2003-01-01
As part of a study with an overriding goal of providing information that would assist State and Federal agencies in developing screening protocols for managing sediments impounded behind dams that are potential candidates for removal, the U.S. Geological Survey determined sediment quantity and quality at three locations: one on the French River and two on Yokum Brook, a tributary to the west branch of the Westfield River. Data collected with a global positioning system and a geographic information system, together with sediment-thickness data, aided in the creation of sediment maps and the calculation of sediment volumes at Perryville Pond on the French River in Webster, Massachusetts, and at the Silk Mill and Ballou Dams on Yokum Brook in Becket, Massachusetts. From these data the following sediment volumes were determined: Perryville Pond, 71,000 cubic yards; Silk Mill, 1,600 cubic yards; and Ballou, 800 cubic yards. Sediment characteristics were assessed in terms of grain size and concentrations of potentially hazardous organic compounds and metals. Assessment of the approaches and methods used at study sites indicated that ground-penetrating radar produced data that were extremely difficult and time-consuming to interpret for the three study sites. Because of these difficulties, a steel probe was ultimately used to determine sediment depth and extent for inclusion in the sediment maps. Use of these methods showed that, where sampling sites were accessible, a machine-driven coring device would be preferable to the physically exhausting, manual sediment-coring methods used in this investigation. Enzyme-linked immunosorbent assays were an effective tool for screening large numbers of samples for a range of organic contaminant compounds. An example calculation of the number of samples needed to characterize mean concentrations of contaminants indicated that the number of samples collected for most analytes was adequate; however, additional analyses for lead, copper, silver, arsenic, total petroleum hydrocarbons, and chlordane are needed to meet the criteria determined from the calculations. Particle-size analysis did not reveal a clear spatial distribution pattern at Perryville Pond. On average, less than 65 percent of each sample was greater in size than very fine sand. The sample with the highest percentage of clay-sized particles (24.3 percent) was collected just upstream from the dam and generally had the highest concentrations of contaminants determined here. In contrast, more than 90 percent of the sediment samples in the Becket impoundments had grain sizes larger than very fine sand; as determined by direct observation, rocks, cobbles, and boulders constituted a substantial amount of the material impounded at Becket. In general, the highest percentages of the finest particles, clays, occurred in association with the highest concentrations of contaminants. Enzyme-linked immunosorbent assays of the Perryville samples showed the widespread presence of petroleum hydrocarbons (16 out of 26 samples), polycyclic aromatic hydrocarbons (23 out of 26 samples), and chlordane (18 out of 26 samples); polychlorinated biphenyls were detected in five samples from four locations. Neither petroleum hydrocarbons nor polychlorinated biphenyls were detected at Becket, and chlordane was detected in only one sample. All 14 Becket samples contained polycyclic aromatic hydrocarbons. Replicate quality-control analyses revealed consistent results between paired samples.
Samples from throughout Perryville Pond contained a number of metals at potentially toxic concentrations. These metals included arsenic, cadmium, copper, lead, nickel, and zinc. At Becket, no metals were found in elevated concentrations. In general, most of the concentrations of organic compounds and metals detected in Perryville Pond exceeded standards for benthic organisms, but only rarely exceeded standards for human contact. The most highly contaminated samples were
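The sample-number calculation referred to above is not spelled out in the abstract; one common approach, sketched here under assumed values for the standard deviation and target precision, iterates the t-based formula n = (t·s/E)² until the required number of samples stabilizes.

```python
import numpy as np
from scipy import stats

def n_for_mean(sd, margin, alpha=0.05):
    """Samples needed so the 95% CI half-width on the mean is <= margin."""
    n = 2
    for _ in range(100):                       # iterate because t depends on n
        t = stats.t.ppf(1 - alpha / 2, n - 1)
        n_new = int(np.ceil((t * sd / margin) ** 2))
        if n_new == n:
            return n
        n = max(2, n_new)
    return n

# illustrative only: contaminant with sd = 40 mg/kg, target half-width 20 mg/kg
print(n_for_mean(sd=40.0, margin=20.0), "samples")
```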
Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho
Watkins, Carson J.; Quist, Michael C.; Shepard, Bradley B.; Ireland, Susan C.
2016-01-01
This study was conducted on the Kootenai River, Idaho to provide insight into the sampling requirements needed to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Side-channel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring efforts in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.
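The abstract does not give its resampling procedure; one simple way to estimate the effort needed for a 95% probability of capturing the full observed richness is to bootstrap over transect segments, as in this sketch with invented segment-by-species data.

```python
import numpy as np

def effort_for_full_richness(segment_species, prob=0.95, n_boot=2000, seed=6):
    """Smallest number of segments whose random draw captures all observed
    species with probability >= prob (bootstrap over segment subsets)."""
    rng = np.random.default_rng(seed)
    segs = [set(s) for s in segment_species]
    all_species = set().union(*segs)
    for k in range(1, len(segs) + 1):
        hits = 0
        for _ in range(n_boot):
            draw = rng.choice(len(segs), k, replace=False)
            found = set().union(*[segs[i] for i in draw])
            hits += found == all_species
        if hits / n_boot >= prob:
            return k
    return len(segs)

# 14 hypothetical 100-m segments, species coded as integers 0-11
rng = np.random.default_rng(7)
segments = [rng.choice(12, size=rng.integers(3, 8), replace=False)
            for _ in range(14)]
print(effort_for_full_richness(segments), "segments of 100 m")
```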
Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.
Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A
2018-03-08
Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low yield libraries. Reamplification of low yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time with minimal modifications.
Williams, Rachel E; Arabi, Mazdak; Loftis, Jim; Elmund, G Keith
2014-09-01
Implementation of numeric nutrient standards in Colorado has prompted a need for greater understanding of human impacts on ambient nutrient levels. This study explored the variability of annual nutrient concentrations due to upstream anthropogenic influences and developed a mathematical expression for the number of samples required to estimate median concentrations for standard compliance. A procedure grounded in statistical hypothesis testing was developed to estimate the number of annual samples required at monitoring locations while taking into account the difference between the median concentrations and the water quality standard for a lognormal population. For the Cache La Poudre River in northern Colorado, the relationship between the median and standard deviation of total N (TN) and total P (TP) concentrations and the upstream point and nonpoint concentrations and general hydrologic descriptors was explored using multiple linear regression models. Very strong relationships were evident between the upstream anthropogenic influences and annual medians for TN and TP (R² > 0.85, p < 0.001) and corresponding standard deviations (R² > 0.7, p < 0.001). Sample sizes required to demonstrate (non)compliance with the standard depend on the measured water quality conditions. When the median concentration differs from the standard by >20%, few samples are needed to reach a 95% confidence level. When the median is within 20% of the corresponding water quality standard, however, the required sample size increases rapidly, and hundreds of samples may be required.
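A sketch of the log-scale reasoning: for a lognormal population the median equals exp(mean of the logs), so the required annual sample size can be approximated with a one-sample normal formula on log concentrations. The standard, geometric standard deviation, and medians below are illustrative assumptions, not the Cache La Poudre values.

```python
import numpy as np
from scipy import stats

def n_for_median_vs_standard(median, standard, gsd, alpha=0.05, power=0.95):
    """Approximate samples per year to show the lognormal median differs from
    the standard, via a one-sample test on log-transformed concentrations."""
    sigma = np.log(gsd)                        # SD of log concentrations
    delta = abs(np.log(median) - np.log(standard))
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(((za + zb) * sigma / delta) ** 2))

std_tp = 0.17                                  # hypothetical TP standard, mg/L
for median in (0.10, 0.15, 0.165):             # further from / closer to the standard
    print(median, "mg/L ->", n_for_median_vs_standard(median, std_tp, gsd=1.8),
          "samples")
```

Consistent with the abstract, the required n stays small when the median is well away from the standard and grows into the hundreds or more as the median approaches it.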
Gomez Baquero, David; Koppel, Kadri; Chambers, Delores; Hołda, Karolina; Głogowski, Robert; Chambers, Edgar
2018-05-23
Sensory analysis of pet foods has emerged as an important field of study for the pet food industry over the last few decades. Few studies have been conducted on understanding pet owners’ perception of pet foods. The objective of this study was to gain a deeper understanding of the perception of the visual characteristics of dry dog foods by dog owners in different consumer segments. A total of 120 consumers evaluated the appearance of 30 dry dog food samples with varying visual characteristics. The consumers rated the acceptance of the samples and associated each one with a list of positive and negative beliefs. Cluster Analysis, ANOVA and Correspondence Analysis were used to analyze the consumer responses. The acceptability of the appearance of dry dog foods was affected by the number of different kibbles present and the color(s), shape(s), and size(s) of the kibbles in the product. Three consumer clusters were identified. Consumers rated single-kibble samples with medium sizes, traditional shapes, and brown colors highest. Participants disliked extra-small or extra-large kibble sizes, shapes with high-dimensional contrast, and kibbles of light brown color. These findings can help dry dog food manufacturers meet consumers’ needs, with increasing benefits to the pet food and commodity industries.
Sizing for the apparel industry using statistical analysis - a Brazilian case study
NASA Astrophysics Data System (ADS)
Capelassi, C. H.; Carvalho, M. A.; El Kattel, C.; Xu, B.
2017-10-01
This study of the body measurements of Brazilian women used the Kinect Body Imaging system for 3D body scanning. The study aims to meet the needs of the apparel industry for accurate measurements. Data were statistically treated using IBM SPSS 23, with 95% confidence (P < 0.05) for the inferential analysis, with the purpose of grouping the measurements into sizes so that a smaller number of sizes can cover a greater number of people. The sample consisted of 101 volunteers aged between 19 and 62 years. A cluster analysis was performed to identify the main body shapes of the sample. The results were divided between the top and bottom body portions: for the top portion, the abdomen, waist, and bust circumferences, as well as height, were used; for the bottom portion, the hip circumference and height were used. Three sizing systems were developed for the researched sample from the Abdomen-to-Height Ratio (AHR, top portion): Small (AHR < 0.52), Medium (AHR 0.52-0.58), Large (AHR > 0.58); and from the Hip-to-Height Ratio (HHR, bottom portion): Small (HHR < 0.62), Medium (HHR 0.62-0.68), Large (HHR > 0.68).
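The AHR and HHR cut-offs reported above translate directly into a size-assignment rule; a minimal sketch follows (the measurements in the example call are invented, not from the 101 volunteers).

```python
def top_size(abdomen_cm: float, height_cm: float) -> str:
    """Top-portion size from the Abdomen-to-Height Ratio (AHR) cut-offs above."""
    ahr = abdomen_cm / height_cm
    return "Small" if ahr < 0.52 else ("Medium" if ahr <= 0.58 else "Large")

def bottom_size(hip_cm: float, height_cm: float) -> str:
    """Bottom-portion size from the Hip-to-Height Ratio (HHR) cut-offs above."""
    hhr = hip_cm / height_cm
    return "Small" if hhr < 0.62 else ("Medium" if hhr <= 0.68 else "Large")

# hypothetical scan: 88 cm abdomen, 104 cm hip, 162 cm height
print(top_size(88, 162), bottom_size(104, 162))
```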
Cuc, Andrea V; Locke, Dona E C; Duncan, Noah; Fields, Julie A; Snyder, Charlene Hoffman; Hanna, Sherrie; Lunde, Angela; Smith, Glenn E; Chandler, Melanie
2017-12-01
This study aims to provide effect size estimates of the impact of two cognitive rehabilitation interventions for patients with mild cognitive impairment (computerized brain fitness exercise and the memory support system) on support partners' outcomes of depression, anxiety, quality of life, and partner burden. A randomized controlled pilot trial was performed. At 6 months, the partners from both treatment groups showed stable to improved depression scores, while partners in an untreated control group showed worsening depression over 6 months. There were no statistically significant differences in anxiety, quality of life, or burden outcomes in this small pilot trial; however, effect sizes were moderate, suggesting that the sample sizes in this pilot study were not adequate to detect statistical significance. Either form of cognitive rehabilitation may help partners' mood, compared with providing no treatment. However, effect size estimates related to other partner outcomes (i.e., burden, quality of life, and anxiety) suggest that follow-up efficacy trials will need sample sizes of at least 30-100 people per group to accurately determine significance.
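A rough check of the "30-100 per group" guidance using the standard normal-approximation formula for a two-sample comparison at 80% power; the effect sizes tried are illustrative, not the trial's estimates.

```python
import numpy as np
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample test detecting effect size d."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(2 * ((za + zb) / d) ** 2))

for d in (0.4, 0.5, 0.8):          # moderate to large standardized effects
    print(f"d={d}: ~{n_per_group(d)} partners per group")
```

With moderate effects (d around 0.4-0.5) this gives roughly 60-100 per group, and around 25 per group for a large effect, in line with the range quoted above.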
Size distributions of manure particles released under simulated rainfall.
Pachepsky, Yakov A; Guber, Andrey K; Shelton, Daniel R; McCarty, Gregory W
2009-03-01
Manure and animal waste deposited on cropland and grazing lands serve as a source of microorganisms, some of which may be pathogenic. These microorganisms are released along with particles of dissolved manure during rainfall events. Relatively little if anything is known about the amounts and sizes of manure particles released during rainfall, which subsequently may serve as carriers, abode, and nutritional source for microorganisms. The objective of this work was to obtain and present the first experimental data on sizes of bovine manure particles released to runoff during simulated rainfall and leached through soil during subsequent infiltration. Experiments were conducted using 200 cm long boxes containing turfgrass soil sod; the boxes were designed so that rates of manure dissolution and subsequent infiltration and runoff could be monitored independently. Dairy manure was applied on the upper portion of the boxes. Simulated rainfall (ca. 32.4 mm h⁻¹) was applied for 90 min on boxes with stands of either live or dead grass. Electrical conductivity, turbidity, and particle size distributions obtained from laser diffractometry were determined in manure runoff and soil leachate samples. Turbidity of leachates and manure runoff samples decreased exponentially. Turbidity of manure runoff samples was on average 20% less than turbidity of soil leachate samples. Turbidity of leachate samples from boxes with dead grass was on average 30% less than from boxes with live grass. Particle size distributions in manure runoff and leachate suspensions remained remarkably stable after 15 min of runoff initiation, although the turbidity continued to decrease. Particles had a median diameter of 3.8 µm, and 90% of particles were between 0.6 and 17.8 µm. The particle size distributions were not affected by the grass status. Because manure particles are known to affect transport and retention of microbial pathogens in soil, more information needs to be collected about the concurrent release of pathogens and manure particles during rainfall events.
Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine
2017-01-13
Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
Discovery of taeniid eggs from a 17th century tomb in Korea.
Lee, Hye-Jung; Shin, Dong-Hoon; Seo, Min
2011-09-01
Even though Taenia spp. eggs are occasionally discovered from archeological remains around the world, these eggs have never been discovered in ancient samples from Korea. When we attempted to re-examine the archeological samples maintained in our collection, the eggs of Taenia spp., 5 in total number, were recovered from a tomb of Gongju-si. The eggs had a radially striated embryophore and measured 37.5-40.0 µm × 37.5 µm. This is the first report of taeniid eggs from ancient samples of Korea, and it is suggested that intensive examination of voluminous archeological samples is needed for identification of Taenia spp.
Duran, Tinka; Stimpson, Jim P.; Smith, Corey
2013-01-01
Introduction Population-based data are essential for quantifying the problems and measuring the progress made by comprehensive cancer control programs. However, cancer information specific to the American Indian/Alaska Native (AI/AN) population is not readily available. We identified major population-based surveys conducted in the United States that contain questions related to cancer, documented the AI/AN sample size in these surveys, and identified gaps in the types of cancer-related information these surveys collect. Methods We conducted an Internet query of US Department of Health and Human Services agency websites and a Medline search to identify population-based surveys conducted in the United States from 1960 through 2010 that contained information about cancer. We used a data extraction form to collect information about the purpose, sample size, data collection methods, and type of information covered in the surveys. Results Seventeen survey sources met the inclusion criteria. Information on access to and use of cancer treatment, follow-up care, and barriers to receiving timely and quality care was not consistently collected. Estimates specific to the AI/AN population were often lacking because of inadequate AI/AN sample size. For example, 9 national surveys reviewed reported an AI/AN sample size smaller than 500, and 10 had an AI/AN sample percentage less than 1.5%. Conclusion Continued efforts are needed to increase the overall number of AI/AN participants in these surveys, improve the quality of information on racial/ethnic background, and collect more information on treatment and survivorship. PMID:23517582
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, there is little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
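A minimal sketch of the bootstrap-based convergence check described above, applied to Regional Sensitivity Analysis on a toy two-parameter model; the toy model, the behavioural threshold, and the base sample sizes are illustrative assumptions, not values from the study. The width of the bootstrap confidence interval of each sensitivity index is tracked as the number of model runs grows.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    def toy_model(x1, x2):
        # Stand-in for a hydrological model: strongly sensitive to x1, weakly to x2.
        return x1 ** 2 + 0.1 * x2

    def rsa_indices(x, y):
        # Regional Sensitivity Analysis: KS distance between "behavioural" and
        # "non-behavioural" parameter samples, one index per parameter.
        behavioural = y > np.median(y)
        return [ks_2samp(x[behavioural, j], x[~behavioural, j]).statistic
                for j in range(x.shape[1])]

    def bootstrap_ci_width(n_runs, n_boot=500):
        # One base sample of n_runs model evaluations, then bootstrap the runs.
        x = rng.uniform(0, 1, size=(n_runs, 2))
        y = toy_model(x[:, 0], x[:, 1])
        boot = [rsa_indices(x[idx], y[idx])
                for idx in (rng.integers(0, n_runs, n_runs) for _ in range(n_boot))]
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        return hi - lo

    for n in (100, 500, 2000):
        print(n, np.round(bootstrap_ci_width(n), 3))  # interval width shrinks with n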
The Contribution of Expanding Portion Sizes to the US Obesity Epidemic
Young, Lisa R.; Nestle, Marion
2002-01-01
Objectives. Because larger food portions could be contributing to the increasing prevalence of overweight and obesity, this study was designed to weigh samples of marketplace foods, identify historical changes in the sizes of those foods, and compare current portions with federal standards. Methods. We obtained information about current portions from manufacturers or from direct weighing; we obtained information about past portions from manufacturers or contemporary publications. Results. Marketplace food portions have increased in size and now exceed federal standards. Portion sizes began to grow in the 1970s, rose sharply in the 1980s, and have continued in parallel with increasing body weights. Conclusions. Because energy content increases with portion size, educational and other public health efforts to address obesity should focus on the need for people to consume smaller portions. PMID:11818300
NASA Astrophysics Data System (ADS)
Torres Beltran, M.
2016-02-01
The Scientific Committee on Oceanographic Research (SCOR) Working Group 144 "Microbial Community Responses to Ocean Deoxygenation" workshop held in Vancouver, British Columbia in July 2014 had the primary objective of kick-starting the establishment of a minimal core of technologies, techniques and standard operating procedures (SOPs) to enable compatible process rate and multi-molecular data (DNA, RNA and protein) collection in marine oxygen minimum zones (OMZs) and other oxygen-starved waters. Experimental activities conducted in Saanich Inlet, a seasonally anoxic fjord on Vancouver Island, British Columbia, were designed to compare and cross-calibrate in situ sampling devices (McLane PPS system) with conventional bottle sampling and incubation methods. Bottle effects on microbial community composition and activity were tested using different filter combinations and sample volumes to compare PPS/IPS (0.4 µm) versus Sterivex (0.22 µm) filtration methods with and without prefilters (2.7 µm). Resulting biomass was processed for small subunit ribosomal RNA gene sequencing across all three domains of life on the 454 platform, followed by downstream community structure analyses. Significant community shifts occurred within and between filter fractions for in situ versus on-ship processed samples. For instance, the relative abundance of several bacterial phyla, including Bacteroidetes, Delta- and Gammaproteobacteria, decreased five-fold on-ship when compared to in situ filtration. Experimental mesocosms showed community structure and activity similar to those of in situ filtered samples, indicating the need to cross-calibrate incubations to constrain bottle effects. In addition, alpha and beta diversity changed significantly as a function of filter size and volume, as did the operational taxonomic units identified using indicator species analysis for each filter size. Our results provide statistical support that microbial community structure is systematically biased by filter fraction methods and highlight the need to establish compatible techniques among researchers that facilitate comparative and reproducible science for the whole community.
How Many Fish Need to Be Measured to Effectively Evaluate Trawl Selectivity?
Santos, Juan; Sala, Antonello
2016-01-01
The aim of this study was to provide practitioners working with trawl selectivity with general and easily understandable guidelines regarding the fish sampling effort necessary during sea trials. In particular, we focused on how many fish would need to be caught and length measured in a trawl haul in order to assess the selectivity parameters of the trawl at a designated uncertainty level. We also investigated the dependency of this uncertainty level on the experimental method used to collect data and on the potential effects of factors such as the size structure in the catch relative to the size selection of the gear. We based this study on simulated data created from two different fisheries: the Barents Sea cod (Gadus morhua) trawl fishery and the Mediterranean Sea multispecies trawl fishery represented by red mullet (Mullus barbatus). We used these two completely different fisheries to obtain results that can be used as general guidelines for other fisheries. We found that the uncertainty in the selection parameters decreased with increasing number of fish measured and that this relationship could be described by a power model. The sampling effort needed to achieve a specific uncertainty level for the selection parameters was always lower for the covered codend method compared to the paired-gear method. In many cases, the number of fish that would need to be measured to maintain a specific uncertainty level was around 10 times higher for the paired-gear method than for the covered codend method. The trends observed for the effect of sampling effort in the two fishery cases investigated were similar; therefore the guidelines presented herein should be applicable to other fisheries. PMID:27560696
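The power-model relationship reported above (uncertainty in the selection parameters falling as a power function of the number of fish measured) can be sketched as follows; the sampling efforts and uncertainty values below are invented for illustration and are not the study's data.

    import numpy as np

    # Hypothetical results: uncertainty in a selection parameter (e.g., CI width of
    # L50, in cm) observed at several sampling efforts. Values are illustrative only.
    n_fish = np.array([100, 200, 400, 800, 1600])
    uncertainty = np.array([4.1, 2.9, 2.0, 1.5, 1.0])

    # Fit uncertainty = a * n^b by linear regression on log-log axes.
    b, log_a = np.polyfit(np.log(n_fish), np.log(uncertainty), 1)
    a = np.exp(log_a)
    print(f"uncertainty ~ {a:.1f} * n^({b:.2f})")

    # Invert the fitted model: fish needed for a target uncertainty of 1.2 cm.
    target = 1.2
    print("fish to measure:", int(np.ceil((target / a) ** (1 / b))))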
ERIC Educational Resources Information Center
Baughman, Steven A., Ed.; Curry, Elizabeth A., Ed.
As interlibrary cooperation has proliferated in the last several decades, multitype library organizations and systems have emerged as important forces in librarianship. The need for thoughtful and organized strategic planning is an important cornerstone for the success of organizations of all sizes. Part of a project by the Interlibrary…
K. W. Thorpe; R. L. Ridgway; R. E. Webb
1991-01-01
Egg mass survey data from operational gypsy moth (Lymantria dispar L.) management programs in five Maryland county parks and the Beltsville Agricultural Research Center (BARC) have demonstrated that improved survey protocols are needed to increase the precision and accuracy of the surveys.
Fall prevention in high-risk patients.
Shuey, Kathleen M; Balch, Christine
2014-12-01
In the oncology population, disease process and treatment factors place patients at risk for falls. Fall bundles provide a framework for developing comprehensive fall programs in oncology. The small sample sizes of interventional studies and their focus on ambulatory and geriatric populations limit the applicability of results. Additional research is needed. Copyright © 2014 Elsevier Inc. All rights reserved.
Career Satisfaction Following Technical Education
ERIC Educational Resources Information Center
McDonald, Betty Manager
2011-01-01
The effect of career and technical education in the Caribbean is an area of intervention research that needs more attention. The present research is the first of its kind within the region. The study benefits from a large sample (N = 500) drawn from a non-traditional population in the field of career development. This paper reports on…
Planned Missing Data Designs with Small Sample Sizes: How Small Is Too Small?
ERIC Educational Resources Information Center
Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D.
2014-01-01
Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…
An IRT Analysis of Preservice Teacher Self-Efficacy in Technology Integration
ERIC Educational Resources Information Center
Browne, Jeremy
2011-01-01
The need for rigorously developed measures of preservice teacher traits regarding technology integration training has been acknowledged (Kay 2006), but such instruments are still extremely rare. The Technology Integration Confidence Scale (TICS) represents one such measure, but past analyses of its functioning have been limited by sample size and…
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but these recommendations are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12 dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
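The kind of sample-size guideline quoted above (e.g., 20 samples to estimate a mean to within 10% with 90% confidence) follows from the normal-approximation formula n = (z · CV / relative error)^2; a small sketch, with coefficients of variation chosen for illustration rather than taken from the study:

    from math import ceil
    from scipy.stats import norm

    def n_for_relative_error(cv, rel_error=0.10, confidence=0.90):
        # Samples needed to estimate a mean to within rel_error of its value,
        # using the normal approximation n = (z * CV / rel_error)**2.
        z = norm.ppf(1 - (1 - confidence) / 2)
        return ceil((z * cv / rel_error) ** 2)

    print(n_for_relative_error(cv=0.27))  # ~20 samples for a moderately variable property
    print(n_for_relative_error(cv=0.85))  # roughly 10x more samples when variability is high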
Whitehead, John; Valdés-Márquez, Elsa; Lissmats, Agneta
2009-01-01
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright 2008 John Wiley & Sons, Ltd.
Similarities and differences in dream content at the cross-cultural, gender, and individual levels.
William Domhoff, G; Schneider, Adam
2008-12-01
The similarities and differences in dream content at the cross-cultural, gender, and individual levels provide one starting point for carrying out studies that attempt to discover correspondences between dream content and various types of waking cognition. Hobson and Kahn's (Hobson, J. A., & Kahn, D. (2007). Dream content: Individual and generic aspects. Consciousness and Cognition, 16, 850-858.) conclusion that dream content may be more generic than most researchers realize, and that individual differences are less salient than usually thought, provides the occasion for a review of findings based on the Hall and Van de Castle (Hall, C., & Van de Castle, R. (1966). The content analysis of dreams. New York: Appleton-Century-Crofts.) coding system for the study of dream content. Then new findings based on a computationally intensive randomization strategy are presented to show the minimum sample sizes needed to detect gender and individual differences in dream content. Generally speaking, sample sizes of 100-125 dream reports are needed because most dream elements appear in less than 50% of dream reports and the magnitude of the differences usually is not large.
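The sample-size question above (how many dream reports are needed to detect a difference in the percentage of reports containing an element) can be approximated with a simple simulation of a two-sample test of proportions; the element frequencies and the z-test below are illustrative assumptions, not the randomization procedure or frequencies used in the study.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(1)

    def power_two_proportions(p1, p2, n, alpha=0.05, n_sim=5000):
        # Simulated power of a two-sample z-test with n dream reports per group.
        hits = 0
        for _ in range(n_sim):
            x1, x2 = rng.binomial(n, p1), rng.binomial(n, p2)
            _, p = proportions_ztest([x1, x2], [n, n])
            hits += p < alpha
        return hits / n_sim

    # Illustrative frequencies: an element appearing in 30% vs 45% of reports.
    for n in (50, 100, 125, 200):
        print(n, power_two_proportions(0.30, 0.45, n))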
Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph
2016-10-01
Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs ( Sus scrofa domesticus) and wild boar ( Sus scrofa ). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.
Is psychology suffering from a replication crisis? What does "failure to replicate" really mean?
Maxwell, Scott E; Lau, Michael Y; Howard, George S
2015-09-01
Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication does not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology's alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single study replications. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
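A brief numerical illustration of the low-power point made above, using a standard two-sample t-test power calculation; the effect size and group size are hypothetical.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power of a single replication with 50 participants per group when the true
    # effect is small-to-medium (Cohen's d = 0.3): only about one chance in three.
    print(analysis.power(effect_size=0.3, nobs1=50, alpha=0.05))

    # Per-group sample size needed for 80% power at the same effect size (~175).
    print(analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05))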
Yap, Elaine
2017-01-01
In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method in comparison to CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method by adding cryobiopsy via the R-EBUS Guide Sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of the cryobiopsy samples was twice that of the forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high-power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in diagnostic yield and a reduction in the need for repeat procedures, without compromising the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure. PMID:29321931
Green synthesis and characterization of ANbO3 (A = Na, K) nanopowders fabricated using a biopolymer
NASA Astrophysics Data System (ADS)
Khorrami, Gh. H.; Mousavi, M.; Khayatian, S. A.; Kompany, A.; Khorsand Zak, A.
2017-10-01
Lead-free sodium niobate (NaNbO3, NN) and potassium niobate (KNbO3, KN) nanopowders were successfully synthesized by a simple and green synthesis process in gelatin media. Gelatin, which is a biopolymer, was used as a stabilizer. In order to determine the lowest calcination temperature needed to obtain pure NN and KN nanopowders, the produced gels were analyzed with a thermogravimetric analyzer (TGA). The produced gels were calcined at 500°C and 600°C. The structural and optical properties of the prepared powders were examined using X-ray diffraction (XRD), transmission electron microscopy (TEM), and UV-Vis spectroscopy. The XRD results revealed that pure-phase NN and KN nanopowders were formed at low calcination temperatures of 500°C and 600°C, respectively. The Scherrer formula and the size-strain plot (SSP) method were employed to estimate the crystallite size and lattice strain of the samples. The TEM images show that the NN and KN samples calcined at 600°C have a cubic shape with average particle sizes of 60.95 and 39.29 nm, respectively. The optical bandgap energy of the samples was calculated from the UV-Vis diffuse reflectance spectra using the Kubelka-Munk relation.
Generation of sub-femtoliter droplet by T-junction splitting on microfluidic chips
NASA Astrophysics Data System (ADS)
Yang, Yu-Jun; Feng, Xuan; Xu, Na; Pang, Dai-Wen; Zhang, Zhi-Ling
2013-03-01
In this paper, sub-femtoliter droplets were easily produced by droplet splitting at a simple T-junction with an orifice, which did not need expensive equipment, complex photolithography skills, or high energy input. The volume of the daughter droplet was not limited by the channel size but was controlled by the channel geometry and fluidic characteristics. Moreover, single-bead sampling and bead quantification in different orders of magnitude of droplet volumes were investigated. The droplets split at our T-junction chip had small volumes and monodisperse sizes and could be produced efficiently, orderly, and controllably.
Directions for new developments on statistical design and analysis of small population group trials.
Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel
2016-06-14
Most statistical design and analysis methods for clinical trials have been developed and evaluated in settings where at least several hundred patients could be recruited. These methods may not be suitable for evaluating therapies if the sample size is unavoidably small, a situation usually termed small populations. The specific sample size cut-off, where the standard methods fail, needs to be investigated. In this paper, the authors present their view on new developments for design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for the design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small population clinical trials. They address various challenges presented by the EMA/CHMP guideline as well as recent discussions about extrapolation. There is a need to involve the patients' perspective in the planning and conduct of small population clinical trials for a successful therapy evaluation.
New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084
NASA Technical Reports Server (NTRS)
McKay, D.S.; Cooper, B.L.; Riofrio, L.M.
2009-01-01
We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators, including our own group, performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. First, it is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution that would be revealed by other methods that produce many smaller size bins.
Resisting body dissatisfaction: fat women who endorse fat acceptance.
McKinley, Nita Mary
2004-05-01
Fat women who endorsed fat acceptance (N=128) were recruited from Radiance Magazine. Relationships between objectified body consciousness (OBC), body esteem, and psychological well-being for the mostly European American sample were similar to those found in other samples. OBC was independently related to body esteem when weight dissatisfaction was controlled. Those who endorsed the need for social change in attitudes towards fat people had higher body esteem and self-acceptance, and lower body shame, than those who endorsed personal acceptance of body size only.
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
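The 562-patient figure quoted above can be reproduced, to within rounding conventions, from the standard normal-approximation formula for a two-arm comparison of means; a small sketch using the stated assumptions (SD 20 mL/min, alpha 0.05, 80% power, 10% loss to follow-up).

    from math import ceil
    from scipy.stats import norm

    def total_n(delta, sd=20, alpha=0.05, power=0.80, dropout=0.10):
        # Total two-arm sample size to detect a mean difference `delta` between
        # groups (normal approximation), inflated for loss to follow-up.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_per_arm = 2 * (z * sd / delta) ** 2
        return ceil(2 * n_per_arm / (1 - dropout))

    print(total_n(delta=5))   # ~559, close to the 562 quoted above
    print(total_n(delta=10))  # ~140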
Avula, Haritha
2013-01-01
A good research beginning refers to formulating a well-defined research question, developing a hypothesis and choosing an appropriate study design. The first part of the review series has discussed these issues in depth and this paper intends to throw light on other issues pertaining to the implementation of research. These include the various ethical norms and standards in human experimentation, the eligibility criteria for the participants, sampling methods and sample size calculation, various outcome measures that need to be defined and the biases that can be introduced in research. PMID:24174747
Amplification volume reduction on DNA database samples using FTA™ Classic Cards.
Wong, Hang Yee; Lim, Eng Seng Simon; Tan-Siew, Wai Fun
2012-03-01
The DNA forensic community always strives towards improvements in aspects such as sensitivity, robustness, and efficacy balanced with cost efficiency. Therefore, our laboratory decided to study the feasibility of reducing the PCR amplification volume using DNA entrapped in FTA™ Classic Cards and to bring cost savings to the laboratory. There were a few concerns the laboratory needed to address. First, the kinetics of the amplification reaction could be significantly altered. Second, an increase in sensitivity might affect interpretation due to increased stochastic effects, even though these were pristine samples. Third, static might cause FTA punches to jump out of their allocated wells into others, causing sample-to-sample contamination. Fourth, the size of the punches might be too small for visual inspection. Last, there would be a limit to the extent of volume reduction due to evaporation and the possible need for re-injection of samples for capillary electrophoresis. The laboratory successfully optimized a reduced amplification volume of 10 μL for FTA samples. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision. Its application to data from specific distribution families is discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
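For the original dichotomous-endpoint version of the design described above, the optimization can be sketched as a small grid search: find the smallest total sample size whose binomial standard errors for each arm, and for the difference in event rates, stay below pre-specified limits. The planning event rates and precision limits below are illustrative assumptions, not values from the paper.

    from math import sqrt

    def optimal_sizes(p1, p2, se1_max, se2_max, sed_max, n_max=2000):
        # Smallest n1 + n2 such that SE(p1-hat), SE(p2-hat) and the SE of their
        # difference all stay below the given limits.
        best = None
        for n1 in range(2, n_max):
            se1 = sqrt(p1 * (1 - p1) / n1)
            if se1 > se1_max:
                continue
            for n2 in range(2, n_max):
                se2 = sqrt(p2 * (1 - p2) / n2)
                sed = sqrt(se1 ** 2 + se2 ** 2)
                if se2 <= se2_max and sed <= sed_max:
                    if best is None or n1 + n2 < best[0] + best[1]:
                        best = (n1, n2)
                    break  # the first feasible n2 is the smallest for this n1
        return best

    # Illustrative planning event rates and precision limits:
    print(optimal_sizes(p1=0.2, p2=0.4, se1_max=0.08, se2_max=0.10, sed_max=0.12))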
Interaction of fine sediment with alluvial streambeds
Jobson, Harvey E.; Carey, William P.
1989-01-01
More knowledge is needed about the physical processes that control the transport of fine sediment moving over an alluvial bed. The knowledge is needed to design rational sampling and monitoring programs that assess the transport and fate of toxic substances in surface waters because the toxics are often associated with silt- and clay-sized particles. This technical note reviews some of the past research in areas that may contribute to an increased understanding of the processes involved. An alluvial streambed can have a large capacity to store fine sediments that are extracted from the flow when instream concentrations are high and it can gradually release fine sediment to the flow when the instream concentrations are low. Several types of storage mechanisms are available depending on the relative size distribution of the suspended load and bed material, as well as the flow hydraulics. Alluvial flow tends to segregate the deposited material according to size and density. Some of the storage locations are temporary, but some can store the fine sediment for very long periods of time.
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.
Ryskin, Rachel A; Brown-Schmidt, Sarah
2014-01-01
Seven experiments use large sample sizes to robustly estimate the effect size of a previous finding that adults are more likely to commit egocentric errors in a false-belief task when the egocentric response is plausible in light of their prior knowledge. We estimate the true effect size to be less than half of that reported in the original findings. Even though we found effects in the same direction as the original, they were substantively smaller; the original study would have had less than 33% power to detect an effect of this magnitude. The influence of plausibility on the curse of knowledge in adults appears to be small enough that its impact on real-life perspective-taking may need to be reevaluated.
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and GRR, and to parameter values that are largely outside the bounds of actual study values. Power to detect change is generally low in the early phases (i.e., the first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, K.S.; Cauvet, D.; Lybeer, M.
1999-04-01
Anthropogenic activities related to 100 years of industrialization in the metropolitan Detroit area have significantly enriched the bed sediment of the lower reaches of the Rouge River in Cr, Cu, Fe, Ni, Pb, and Zn. These enriched elements, which may represent a threat to biota, are predominantly present in sequentially extracted reducible and oxidizable chemical phases, with small contributions from residual phases. In size-fractionated samples, trace metal concentrations generally increase with decreasing particle size, with the greatest contribution to this increase from the oxidizable phase. Experimental results obtained on replicate samples of river sediment demonstrate that the accuracy of the sequential extraction procedure, evaluated by comparing the sums of the three individual fractions, is generally better than 10%. Oxidizable and reducible phases therefore constitute important sources of potentially available heavy metals that need to be explicitly considered when evaluating sediment and water quality impacts on biota.
Melvin, Elizabeth M.; Moore, Brandon R.; Gilchrist, Kristin H.; Grego, Sonia; Velev, Orlin D.
2011-01-01
The recent development of microfluidic “lab on a chip” devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing. PMID:22662040
Using the internet to recruit rural MSM for HIV risk assessment: sampling issues.
Bowen, Anne; Williams, Mark; Horvath, Keith
2004-09-01
The Internet is an emerging research tool that may be useful for contacting and working with rural men who have sex with men (MSM). Little is known about HIV risks for rural men and Internet methodological issues are only beginning to be examined. Internet versus conventionally recruited samples have shown both similarities and differences in their demographic characteristics. In this study, rural MSM from three sizes of town were recruited by two methods: conventional (e.g. face-to-face/snowball) or Internet. After stratifying for size of city, demographic characteristics of the two groups were similar. Both groups had ready access to the Internet. Patterns of sexual risk were similar across the city sizes but varied by recruitment approach, with the Internet group presenting a somewhat higher HIV sexual risk profile. Overall, these findings suggest the Internet provides a useful and low cost approach to recruiting and assessing HIV sexual risks for rural White MSM. Further research is needed on methods for recruiting rural minority MSM.
Bedload Rating and Flow Competence Curves Vary With Watershed and Bed Material Parameters
NASA Astrophysics Data System (ADS)
Bunte, K.; Abt, S. R.
2003-12-01
Bedload transport rating curves and flow competence curves (largest bedload size for specified flow) are usually not known for streams unless a large number of bedload samples has been collected and analyzed. However, this information is necessary for assessing instream flow needs and stream responses to watershed effects. This study therefore analyzed whether bedload transport rating and flow competence curves were related to stream parameters. Bedload transport rating curves and flow competence curves were obtained from extensive bedload sampling in six gravel- and cobble-bed mountain streams. Samples were collected using bedload traps and a large net sampler, both of which provide steep and relatively well-defined bedload rating and flow competence curves due to a long sampling duration, a large sampler opening and a large sampler capacity. The sampled streams have snowmelt regimes, steep (1-9%) gradients, and watersheds that are mainly forested and relatively undisturbed with basin area sizes of 8 to 105 km2. The channels are slightly incised and can contain flows of more than 1.5 times bankfull with little overbank flow. Exponents of bedload rating and flow competence curves obtained from these measurements were found to systematically increase with basin area size and decrease with the degree of channel armoring. By contrast, coefficients of bedload rating and flow competence curves decreased with basin size and increased with armoring. All of these relationships were well-defined (0.86 < r2 < 0.99). Data sets from other studies in coarse-bedded streams fit the indicated trend if the sampling device used allows measuring bedload transport rates over a wide range and if bedload supply is somewhat low. The existence of a general positive trend between bedload rating curve exponents and basin area, and a negative trend between coefficients and basin area, is confirmed by a large data set of bedload rating curves obtained from Helley-Smith samples. However, in this case, the trends only become visible as basin area sizes span a wide range (1 - 10,000 km2). The well-defined relationships obtained from the bedload trap and the large net sampler suggest that exponents and coefficients of bedload transport rating curves (and flow competence curves) are predictable from an easily obtainable parameter such as basin size. However, the relationships of bedload rating curve exponents and coefficients with basin size and armoring appear to be influenced by the sampling device used and the watershed sediment production.
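For reference, a bedload rating curve of the kind analyzed above is a power law Qb = a·Q^b fitted to paired discharge and transport-rate measurements; a minimal sketch with invented observations, which does not reproduce the study's relationships between curve parameters and basin area or armoring.

    import numpy as np

    # Hypothetical paired observations from one stream: discharge Q (m^3/s) and
    # bedload transport rate Qb (g/s). Values are illustrative only.
    Q = np.array([0.5, 0.8, 1.2, 1.8, 2.5, 3.4])
    Qb = np.array([0.02, 0.15, 1.1, 6.0, 25.0, 90.0])

    # Fit Qb = a * Q^b by linear regression on log-log axes.
    b, log_a = np.polyfit(np.log(Q), np.log(Qb), 1)
    print(f"rating curve: Qb = {np.exp(log_a):.3f} * Q^{b:.2f}")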
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of the sample size down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when adjustments are applied to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit is under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
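One common form of the sample-size adjustment discussed above exploits the fact that the maximum-likelihood model-fit statistic is (N − 1) times the minimized fit function, so the chi-square can be rescaled to a smaller nominal N; a minimal numeric sketch (the adjustment actually used in the study may differ, and the input values are invented).

    def adjusted_chi_square(chi2_full, n_full, n_adjusted):
        # Rescale an ML model-fit chi-square, T = (N - 1) * F_min, to a smaller
        # nominal sample size by keeping the fit function F_min fixed.
        fit_function = chi2_full / (n_full - 1)
        return fit_function * (n_adjusted - 1)

    # Illustrative numbers: a chi-square of 840 obtained from N = 21,000 cases.
    for n in (10000, 5000, 1000):
        print(n, round(adjusted_chi_square(840, 21000, n), 1))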
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials
Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda
2013-01-01
Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
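A stripped-down version of the simulation approach described above: draw relapse counts from a negative binomial distribution for a control and a treated arm and estimate power by repeated testing. The control rate, dispersion, and the Mann-Whitney test used as the between-arm comparison are simplifying assumptions, not the models fitted in the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulated_power(n_per_arm, rate_control=0.6, reduction=0.5, theta=1.0,
                        years=2, alpha=0.05, n_sim=2000):
        # Relapse counts ~ negative binomial with mean mu and dispersion theta
        # (scipy parameterization: n = theta, p = theta / (theta + mu)).
        mu_c = rate_control * years
        mu_t = mu_c * (1 - reduction)
        hits = 0
        for _ in range(n_sim):
            c = stats.nbinom.rvs(theta, theta / (theta + mu_c), size=n_per_arm,
                                 random_state=rng)
            t = stats.nbinom.rvs(theta, theta / (theta + mu_t), size=n_per_arm,
                                 random_state=rng)
            hits += stats.mannwhitneyu(c, t).pvalue < alpha
        return hits / n_sim

    for n in (45, 70, 105):
        print(n, simulated_power(n))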
Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco
2012-10-12
Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
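The bootstrapping analysis described above can be sketched by resampling clusters (and, for reduced designs, children within clusters) and watching how the spread of the coverage estimate grows as the design shrinks from 10 × 15 toward 10 × 3; the individual-level data below are simulated, and the within-cluster subsampling is simplified to sampling with replacement.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated survey: 10 clusters x 15 children, 1 = vaccinated, with
    # cluster-level coverage drawn between 60% and 95%.
    children = rng.binomial(1, rng.uniform(0.60, 0.95, size=10)[:, None], size=(10, 15))

    def bootstrap_vc(children, per_cluster, n_boot=2000):
        # Resample clusters with replacement, keep per_cluster children in each,
        # and return the mean and spread of the estimated vaccination coverage.
        k, m = children.shape
        estimates = []
        for _ in range(n_boot):
            rows = rng.integers(0, k, k)                  # bootstrap the clusters
            cols = rng.integers(0, m, (k, per_cluster))   # subsample children
            estimates.append(children[rows[:, None], cols].mean())
        estimates = np.asarray(estimates)
        return round(estimates.mean(), 3), round(estimates.std(), 3)

    for per_cluster in (15, 6, 3):   # 10x15 down to 10x3 designs
        print(per_cluster, bootstrap_vc(children, per_cluster))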
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well-designed and well-conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials use the median, the minimum and maximum values, or sometimes the first and third quartiles to report the results. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve on the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
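The sample-size-dependent weighting described above can be illustrated for the scenario in which a trial reports only the minimum, median, maximum and n. The weight form below follows the min/median/max estimator of Luo et al.; treat the exact constants as an assumption to be checked against the paper, and note that the fixed-weight Hozo rule is shown only for comparison.

    def mean_from_min_med_max(a, m, b, n):
        # The weight on the mid-range shrinks smoothly as the sample size grows,
        # so the estimate leans more heavily on the median for large n.
        # Weight form assumed from Luo et al. (2018); verify before relying on it.
        w = 4.0 / (4.0 + n ** 0.75)
        return w * (a + b) / 2.0 + (1.0 - w) * m

    def mean_hozo(a, m, b):
        # Earlier fixed-weight rule of thumb, (a + 2m + b) / 4, for comparison.
        return (a + 2.0 * m + b) / 4.0

    # Illustrative trial summary: min 10, median 24, max 72, n = 60.
    print(mean_from_min_med_max(10, 24, 72, 60))  # ~26.7
    print(mean_hozo(10, 24, 72))                  # 32.5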
Moran, Anthony R.; Hettiarachchi, Hiroshan
2011-01-01
Clayey soil found in coal mines in Appalachian Ohio is often sold to landfills for constructing Recompacted Soil Liners (RSL) in landfills. Since clayey soils possess low hydraulic conductivity, the suitability of mined clay for RSL in Ohio is first assessed by determining its clay content. When soil samples are tested in a laboratory, the same engineering properties are typically expected for the soils originated from the same source, provided that the testing techniques applied are standard, but mined clay from Appalachian Ohio has shown drastic differences in particle size distribution depending on the sampling and/or laboratory processing methods. Sometimes more than a 10 percent decrease in the clay content is observed in the samples collected at the stockpiles, compared to those collected through reverse circulation drilling. This discrepancy poses a challenge to geotechnical engineers who work on the prequalification process of RSL material as it can result in misleading estimates of the hydraulic conductivity of the samples. This paper describes a laboratory investigation conducted on mined clay from Appalachian Ohio to determine how and why the standard sampling and/or processing methods can affect the grain-size distributions. The variation in the clay content was determined to be due to heavy concentrations of shale fragments in the clayey soils. It was also concluded that, in order to obtain reliable grain size distributions from the samples collected at a stockpile of mined clay, the material needs to be processed using a soil grinder. Otherwise, the samples should be collected through drilling. PMID:21845150
Unmet need for contraception among married women in an urban area of Puducherry, India.
Sulthana, Bahiya; Shewade, Hemant Deepak; Sunderamurthy, Bhuvaneswary; Manoharan, Keerthana; Subramanian, Manimozhi
2015-01-01
Unmet need for contraception remains a national problem. The study was conducted in an urban area of Puducherry, India, among eligible couples to assess the unmet need for contraception and to determine the awareness and pattern of use of contraceptives, along with the socio-demographic factors associated with the unmet need for contraception. This cross-sectional study included eligible couples with married women in the age group of 15-45 yr as the study population (n=267). Probability proportional to size sampling followed by systematic random sampling was used. A pre-tested questionnaire was administered to collect data from the respondents. Double data entry and validation of the data were done. Unmet need for contraception was 27.3 per cent (95% CI: 22.3-33); unmet need for spacing and limiting was 4.9 and 22.5 per cent, respectively. Among those with unmet need (n=73), 50 per cent reported client-related factors (lack of knowledge, shyness, etc.) and 37 per cent reported contraception-related factors (availability, accessibility, affordability, side effects) as a cause of unmet need. Our study showed a high unmet need for contraception in the study area, indicating the need to address user perspectives in order to meet contraception needs.
Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.
Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo
2014-08-01
Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we have compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10^0-10^1, 10^1-10^2, 10^2-10^3 CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used in a Monte Carlo simulation to estimate the lowest number of sample units that need to be tested in order to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 in order to obtain this confidence level, while the minimum number of pools to be tested was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
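If each contaminated pool were detected independently with a fixed per-pool sensitivity, the number of pools needed for a 95% detection probability would follow directly from n ≥ log(0.05)/log(1 − sensitivity); the sketch below uses the 12% and 40% detection rates quoted above as stand-ins and therefore does not reproduce the paper's Monte Carlo results (16 and 8 pools), which account for varying contamination levels.

    from math import ceil, log

    def pools_needed(per_pool_sensitivity, confidence=0.95):
        # Pools to test so that at least one positive is detected with the given
        # confidence, assuming independent detection of each contaminated pool.
        return ceil(log(1 - confidence) / log(1 - per_pool_sensitivity))

    print(pools_needed(0.12))  # ~24 pools with the less sensitive culture method
    print(pools_needed(0.40))  # ~6 pools with real-time PCR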
Chen, D T; Jiang, X; Akula, N; Shugart, Y Y; Wendland, J R; Steele, C J M; Kassem, L; Park, J-H; Chatterjee, N; Jamain, S; Cheng, A; Leboyer, M; Muglia, P; Schulze, T G; Cichon, S; Nöthen, M M; Rietschel, M; McMahon, F J; Farmer, A; McGuffin, P; Craig, I; Lewis, C; Hosang, G; Cohen-Woods, S; Vincent, J B; Kennedy, J L; Strauss, J
2013-02-01
Meta-analyses of bipolar disorder (BD) genome-wide association studies (GWAS) have identified several genome-wide significant signals in European-ancestry samples, but so far account for little of the inherited risk. We performed a meta-analysis of ∼750,000 high-quality genetic markers on a combined sample of ∼14,000 subjects of European and Asian-ancestry (phase I). The most significant findings were further tested in an extended sample of ∼17,700 cases and controls (phase II). The results suggest novel association findings near the genes TRANK1 (LBA1), LMAN2L and PTGFR. In phase I, the most significant single nucleotide polymorphism (SNP), rs9834970 near TRANK1, was significant at the P=2.4 × 10^-11 level, with no heterogeneity. Supportive evidence for prior association findings near ANK3 and a locus on chromosome 3p21.1 was also observed. The phase II results were similar, although the heterogeneity test became significant for several SNPs. On the basis of these results and other established risk loci, we used the method developed by Park et al. to estimate the number, and the effect size distribution, of BD risk loci that could still be found by GWAS methods. We estimate that >63,000 case-control samples would be needed to identify the ∼105 BD risk loci discoverable by GWAS, and that these will together explain <6% of the inherited risk. These results support previous GWAS findings and identify three new candidate genes for BD. Further studies are needed to replicate these findings and may potentially lead to identification of functional variants. Sample size will remain a limiting factor in the discovery of common alleles associated with BD.
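The projection of how many risk loci a given sample size could recover rests on a power calculation over an assumed effect-size distribution. The sketch below is a minimal stand-in for that idea, not the Park et al. method itself: it uses the standard normal approximation for a 1-df allelic test, and the minor-allele frequencies and odds ratios of the ~105 hypothetical loci are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def gwas_power(n_cases, n_controls, maf, odds_ratio, alpha=5e-8):
    """Approximate power of a 1-df allelic/trend association test
    (normal approximation; a stand-in for the Park et al. method)."""
    beta = np.log(odds_ratio)
    n = n_cases + n_controls
    phi = n_cases / n                     # fraction of cases
    ncp = n * 2 * maf * (1 - maf) * phi * (1 - phi) * beta ** 2
    z_a = norm.isf(alpha / 2)
    return norm.sf(z_a - np.sqrt(ncp)) + norm.cdf(-z_a - np.sqrt(ncp))

# Hypothetical effect-size distribution: ~105 loci with odds ratios of 1.07-1.10.
loci = [(0.2, 1.10), (0.3, 1.08), (0.4, 1.07)] * 35
for total in (20_000, 40_000, 63_000):
    hits = sum(gwas_power(total // 2, total // 2, maf, orr) for maf, orr in loci)
    print(f"{total:6d} samples -> ~{hits:.0f} of {len(loci)} loci at genome-wide significance")
```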
USDA-ARS's Scientific Manuscript database
Moisture content of wood chips is an important factor in their utilization as a biomass material. Several moisture measuring instruments are available on the market, but for most of these instruments some sort of sample preparation is needed, involving sizing, grinding and weighing. T...
ERIC Educational Resources Information Center
Wiley, Kristofor R.
2013-01-01
Many of the social and emotional needs that have historically been associated with gifted students have been questioned on the basis of recent empirical evidence. Research on the topic, however, is often limited by sample size, selection bias, or definition. This study addressed these limitations by applying linear regression methodology to data…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-07
... Clearance for Survey Research Studies. Revision to burden hours may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
... Clearance for Survey Research Studies. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
USDA-ARS's Scientific Manuscript database
This manuscript is part of a series of manuscripts that characterize cotton gin emissions from the standpoint of stack sampling. The impetus behind this project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. A key component of this study ...
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Candela, Philip A.; Zhu, Wenlu; Kaufman, Alan J.
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO2) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO2 can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the low size ranges of the extracted shale particles. Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r2 = 0.78); and 2) both yield and surface area increase with increasing TOC content (r2 = 0.97 and 0.86, respectively). Given that supercritical CO2 is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lack sufficient follow-up information. Consequently, the small sample size limits the development of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential in cancer prediction based on gene expression.
Hydrochemical responses among nested catchments of the Sleepers River Research Watershed.
NASA Astrophysics Data System (ADS)
Sebestyen, S. D.; Boyer, E. W.; Shanley, J. B.; Kendall, C.
2005-12-01
We are probing chemical and isotopic tracers of dissolved organic carbon (DOC) and nitrate over both space and time to determine how stream nutrient dynamics change with increasing basin size and differ with flow conditions. At the Sleepers River Research Watershed in northeastern Vermont, USA, 20 to 30 nested sub-basins that ranged in size from 3 to 11,000 ha were sampled repeatedly under baseflow conditions. These synoptic surveys showed a pattern of heterogeneity in headwaters that converged to a consistent response at larger basin sizes and is consistent with findings of other studies. In addition to characterizing spatial patterns under baseflow, we sampled rainfall and snowmelt events over a gradient of basin sizes to investigate scaling responses under different flow conditions. During high flow events, DOC and nitrate flushing responses varied among different basins where high-frequency event samples were collected. While the DOC and nitrate concentration patterns were similar at four headwater basins, the concentration responses of larger basins were markedly different in that the concentration patterns, flushing duration, and maximum concentrations were attenuated from headwaters to the largest basin. We are using these data to explore how flow paths and solute mixing aggregate. Overall, these results highlight the complexities of understanding spatial scaling issues in catchments and underscore the need to consider event responses of hydrology and chemistry among catchments.
Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.
2015-01-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842
Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E
2015-10-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
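One way to turn a feature's noise-driven coefficient of variation into a clinical-trial sample size is the standard two-sample normal approximation. The sketch below is a hedged illustration of that calculation, not the estimator used in the study; the 20% coefficient of variation and the 15% between-group difference are assumed values.

```python
import math
from scipy.stats import norm

def n_per_arm(cv, rel_diff, alpha=0.05, power=0.80):
    """Subjects per arm to detect a relative difference `rel_diff` in the mean of a
    feature whose coefficient of variation is `cv` (two-sample normal approximation)."""
    z = norm.isf(alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * cv / rel_diff) ** 2)

# Assumed values: 20% coefficient of variation from image noise, 15% difference of interest.
print(n_per_arm(cv=0.20, rel_diff=0.15))   # about 28 subjects per arm
```

For a noisier feature (larger CV) or a smaller difference of interest, the required sample size grows quadratically, which is why feature-dependent variability matters for trial design.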
Digital LAMP in a sample self-digitization (SD) chip
Herrick, Alison M.; Dimov, Ivan K.; Lee, Luke P.; Chiu, Daniel T.
2012-01-01
This paper describes the realization of digital loop-mediated DNA amplification (dLAMP) in a sample self-digitization (SD) chip. Digital DNA amplification has become an attractive technique to quantify absolute concentrations of DNA in a sample. While digital polymerase chain reaction is still the most widespread implementation, its use in resource-limited settings is impeded by the need for thermal cycling and robust temperature control. In such situations, isothermal protocols that can amplify DNA or RNA without thermal cycling are of great interest. Here, we showed the successful amplification of single DNA molecules in a stationary droplet array using isothermal digital loop-mediated DNA amplification. Unlike most (if not all) existing methods for sample discretization, our design allows for automated, loss-less digitization of sample volumes on-chip. We demonstrated accurate quantification of relative and absolute DNA concentrations with sample volumes of less than 2 μl. We assessed the homogeneity of droplet size during sample self-digitization in our device, and verified that the size variation was small enough such that straightforward counting of LAMP-active droplets sufficed for data analysis. We anticipate that the simplicity and robustness of our SD chip make it attractive as an inexpensive and easy-to-operate device for DNA amplification, for example in point-of-care settings. PMID:22399016
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.
Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
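The attenuation mechanism described above is easy to reproduce in a small simulation: truncating a sample on the pretest shrinks the pretest-posttest correlation and hence the variance the covariate can explain. The sketch below assumes a bivariate normal pretest/posttest with an unrestricted correlation of 0.7 and selection below the 25th percentile; both numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 1_000_000            # assumed unrestricted pretest-posttest correlation

pre, post = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T

# Direct range restriction: keep only examinees below the 25th pretest percentile.
sel = pre < np.quantile(pre, 0.25)

print(f"unrestricted r = {np.corrcoef(pre, post)[0, 1]:.2f}")
print(f"restricted   r = {np.corrcoef(pre[sel], post[sel])[0, 1]:.2f}")
# The variance explained by the covariate drops from r^2 in the full range to the
# much smaller restricted r^2, which is what drives the sample-size increases above.
```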
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever-decreasing size are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^-11 Am^2 the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" studies on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
Multi-parameter analysis using photovoltaic cell-based optofluidic cytometer
Yan, Chien-Shun; Wang, Yao-Nan
2016-01-01
A multi-parameter optofluidic cytometer based on two low-cost commercial photovoltaic cells and an avalanche photodetector is proposed. The optofluidic cytometer is fabricated on a polydimethylsiloxane (PDMS) substrate and is capable of detecting side scattered (SSC), extinction (EXT) and fluorescence (FL) signals simultaneously using a free-space light transmission technique without the need for on-chip optical waveguides. The feasibility of the proposed device is demonstrated by detecting fluorescent-labeled polystyrene beads with sizes of 3 μm, 5 μm and 10 μm, respectively, and label-free beads with a size of 7.26 μm. The detection experiments are performed using both single-bead population samples and mixed-bead population samples. The detection results obtained using the SSC/EXT, EXT/FL and SSC/FL signals are compared with those obtained using a commercial flow cytometer. It is shown that the optofluidic cytometer achieves a high detection accuracy for both single-bead population samples and mixed-bead population samples. Consequently, the proposed device provides a versatile, straightforward and low-cost solution for a wide variety of point-of-care (PoC) cytometry applications. PMID:27699122
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
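The 8-16 percent detectable-difference figures quoted above come from power calculations for comparing proportions between treatment and control groups. The sketch below shows the standard normal-approximation version of that calculation; the 30% control-group prevalence and the 200 subjects per condition are assumptions chosen only to illustrate how steeply power depends on the difference to be detected.

```python
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided test of two independent proportions."""
    diff = abs(p1 - p2)
    pbar = (p1 + p2) / 2
    se0 = (2 * pbar * (1 - pbar) / n_per_group) ** 0.5                # SE under H0
    se1 = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group) ** 0.5      # SE under H1
    z_a = norm.isf(alpha / 2)
    return norm.sf((z_a * se0 - diff) / se1)

# Assumed: 30% prevalence in the control condition, 200 subjects per group.
for diff in (0.04, 0.08, 0.16):
    print(f"difference {diff:.0%}: power = {power_two_proportions(0.30, 0.30 - diff, 200):.2f}")
```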
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature (stringent statistical standards, use of large samples, and replication of findings) are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
Body Size Correlates with Fertilization Success but not Gonad Size in Grass Goby Territorial Males
Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta
2012-01-01
In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003–2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995–1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males. PMID:23056415
Body size correlates with fertilization success but not gonad size in grass goby territorial males.
Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta
2012-01-01
In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003-2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995-1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiser, I; Lu, Z
2014-06-01
Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20, 40, 60, 80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
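The sample-size dependence of the proportion-correct (PC) estimate can be illustrated without the full observer models. The sketch below substitutes synthetic Gaussian decision-variable scores for the CHO and template-matching outputs (an assumption; d' = 1.2 is arbitrary) and resamples N image pairs to show how the spread of PC grows as N shrinks, analogous to the σ(PC)/PC values reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Gaussian decision-variable scores standing in for the CHO/TM outputs.
d_prime = 1.2
noise_scores = rng.normal(0.0, 1.0, 100)
signal_scores = rng.normal(d_prime, 1.0, 100)

def pc_2afc(sig, noi):
    """Proportion correct in 2-AFC, estimated over all signal/noise pairings
    (equivalent to the area under the ROC curve); ties count one half."""
    diff = sig[:, None] - noi[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

for n_pairs in (20, 40, 60, 80, 100):
    pcs = [pc_2afc(signal_scores[idx], noise_scores[idx])
           for idx in (rng.choice(100, n_pairs, replace=False) for _ in range(500))]
    print(f"N={n_pairs:3d}  mean PC={np.mean(pcs):.3f}  relative SD={np.std(pcs)/np.mean(pcs):.1%}")
```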
Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin
2017-08-17
A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has given insufficient consideration to the true virtues of the delayed-start design and its implications in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. The aim was to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase because a reduced time on placebo results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared with those expected under a standard parallel-group design. This also impacts the benefit-risk assessment.
On sample size and different interpretations of snow stability datasets
NASA Astrophysics Data System (ADS)
Schirmer, M.; Mitterer, C.; Schweizer, J.
2009-04-01
Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: how capable are such stability interpretations in drawing conclusions? There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) that the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset has an appropriate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset. We used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and a count was made of how often this distribution was substantially different from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation as described above), the effect of the arbitrary choice of interpretation on spatial variability results was tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied using a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations, and compared against each other as well as to the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented, since results differed between the situations contained in the dataset. The optimal subset size thus depends on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
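The subset-resampling check described in (ii) above can be sketched compactly: draw many subsets of a given size, re-run the Mann-Whitney test, and count how often the aspect difference found in the full dataset is recovered. The code below is a schematic version with synthetic ordinal stability scores for two aspect groups; the scores and group sizes are assumptions, not the original dataset.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

# Synthetic ordinal stability scores (1 = poor ... 5 = good) for two aspect groups.
north = rng.integers(1, 4, 120)
south = rng.integers(2, 6, 120)

def fraction_significant(n_sub, n_rep=1000, alpha=0.05):
    """Share of random subsets of size n_sub per group in which the
    Mann-Whitney test still detects the aspect difference."""
    hits = 0
    for _ in range(n_rep):
        a = rng.choice(north, n_sub, replace=False)
        b = rng.choice(south, n_sub, replace=False)
        hits += mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
    return hits / n_rep

for n_sub in (10, 15, 25, 50):
    print(n_sub, fraction_significant(n_sub))
```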
Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up.
Tajika, Aran; Ogawa, Yusuke; Takeshima, Nozomi; Hayasaka, Yu; Furukawa, Toshi A
2015-10-01
Contradictions and initial overestimates are not unusual among highly cited studies. However, this issue has not been researched in psychiatry. Aims: To assess how highly cited studies in psychiatry are replicated by subsequent studies. We selected highly cited studies claiming effective psychiatric treatments in the years 2000 through 2002. For each of these studies we searched for subsequent studies with a better-controlled design, or with a similar design but a larger sample. Among 83 articles recommending effective interventions, 40 had not been subject to any attempt at replication, 16 were contradicted, 11 were found to have substantially smaller effects and only 16 were replicated. The standardised mean differences of the initial studies were overestimated by 132%. Studies with a total sample size of 100 or more tended to produce replicable results. Caution is needed when a study with a small sample size reports a large effect. © The Royal College of Psychiatrists 2015.
Cognitive Behavioral Therapy: A Meta-Analysis of Race and Substance Use Outcomes
Windsor, Liliane Cambraia; Jemal, Alexis; Alessi, Edward
2015-01-01
Cognitive behavioral therapy (CBT) is an effective intervention for reducing substance use. However, because CBT trials have included predominantly White samples, caution must be used when generalizing these effects to Blacks and Hispanics. This meta-analysis compared the impact of CBT in reducing substance use between studies with a predominantly non-Hispanic White sample (hereafter NHW studies) and studies with a predominantly Black and/or Hispanic sample (hereafter BH studies). From 322 manuscripts identified in the literature, 17 met criteria for inclusion. Effect sizes comparing CBT with the comparison group at posttest were similar across NHW and BH studies. However, when comparing pre-posttest effect sizes from groups receiving CBT between NHW and BH studies, CBT's impact was significantly stronger in NHW studies. T-test comparisons indicated reduced retention/engagement in BH studies, albeit failing to reach statistical significance. Results highlight the need for further research testing CBT's impact on substance use among Blacks and Hispanics. PMID:25285527
Lessio, Federico; Alma, Alberto
2006-04-01
The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005, in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample size and fixed precision level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of Resampling Validation of Sample Plans (RVSP) resampling software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30; but for a more accurate estimate (D0 = 0.10), the required sample size needs to be 292 plants. Green's fixed precision level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired levels. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and Flavescence dorée in Italy is discussed.
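For reference, the enumerative sample-size and Green's fixed-precision stop-line calculations implied above follow directly from Taylor's power law s^2 = a*m^b. The sketch below uses the reported b = 1.49 but a placeholder value for the coefficient a (the abstract does not give it), so the printed sample sizes are only indicative.

```python
import numpy as np

# Taylor's power law: s^2 = a * m^b, with b = 1.49 as reported above.
# The coefficient a is NOT given in the abstract; 2.0 is a placeholder.
a, b = 2.0, 1.49

def min_sample_size(mean_density, D0):
    """Plants to sample so that SE/mean <= D0 (enumerative sampling)."""
    return int(np.ceil(a * mean_density ** (b - 2) / D0 ** 2))

def green_stop_line(n, D0):
    """Green's fixed-precision stop line: cumulative insect count after n
    sample units at which sampling can stop for precision D0."""
    return (D0 ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

for m in (0.5, 2.0, 10.0):
    print(f"mean {m:4.1f}/plant: n = {min_sample_size(m, 0.30)} (D0=0.30), "
          f"{min_sample_size(m, 0.10)} (D0=0.10)")
for n in (10, 20, 30):
    print(f"Green stop line at n={n}, D0=0.25: cumulative count >= {green_stop_line(n, 0.25):.0f}")
```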
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method, recommend useful variations and draw conclusions on their validity, aspects that are very difficult to achieve in the laboratory.
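The laboratory interpretation of hydrometer and pipette tests, against which such simulations are compared, rests on Stokes' law for the terminal settling velocity of a sphere in laminar flow. The sketch below evaluates it over the modeled diameter range; the particle density, fluid properties, and 0.2 m settling depth are typical assumed values, not parameters taken from the paper.

```python
def stokes_velocity(d, rho_s=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m) in laminar flow."""
    return (rho_s - rho_f) * g * d ** 2 / (18 * mu)

# Settling times over an assumed 0.2 m column for the modeled diameter range.
for d in (2.5e-6, 10e-6, 70e-6):
    v = stokes_velocity(d)
    print(f"d = {d * 1e6:5.1f} um: v = {v:.2e} m/s, t(0.2 m) = {0.2 / v / 60:.1f} min")
```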
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
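The hand-off between scale levels described above amounts to fitting a Weibull distribution to the properties obtained at one scale and sampling effective properties from it at the next. The sketch below illustrates that step with synthetic strength data; the Weibull modulus, characteristic strength, and sample counts are assumptions, and the fit uses SciPy rather than the authors' MCA code.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# Synthetic strengths (MPa) of representative samples at the finest pore scale.
strengths = rng.weibull(8.0, 200) * 300.0

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.0f} MPa")

# At the next scale level, each automaton gets an effective strength drawn from the
# fitted distribution instead of resolving the small pores explicitly.
effective_strengths = weibull_min.rvs(shape, loc=0, scale=scale, size=10_000, random_state=7)
print(f"mean effective strength = {effective_strengths.mean():.0f} MPa")
```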
NASA Astrophysics Data System (ADS)
Kornilin, DV; Kudryavtsev, IA
2016-10-01
One of the most effective ways to diagnose the state of a hydraulic system is to investigate the particles in its liquids. The sizes of such particles range from 2 to 200 μm, and their concentration and shape reveal important information about the current state of equipment and the necessity of maintenance. In-line automatic particle counters (APC), which are built into the hydraulic system, are widely used for determination of particle size and concentration. These counters are based on a single photodiode and a light-emitting diode (LED); however, samples of liquid are needed for analysis using a microscope or an industrial video camera in order to obtain information about particle shapes. Obtaining the sample leads to contamination by other particles from the air or from the sample tube, meaning that the results are usually corrupted. Using a CMOS or CCD matrix sensor without any lens for an in-line APC is the solution proposed by the authors. In this case the matrix sensor is put into the liquid channel of the hydraulic system and illuminated by an LED. This system can be stable in arduous conditions such as the high pressure and vibration of the hydraulic system; however, the image or signal from the matrix sensor needs to be processed differently compared with the signal from a microscope or industrial video camera because of the relatively short distance between the LED and the sensor. This paper introduces a mathematical model of a sensor with CMOS and LED which can be built into a hydraulic system. A computational algorithm and results are also provided, which can be useful for calculation of particle sizes and shapes using the signal from the CMOS matrix sensor.
Assessing the Application of a Geographic Presence-Only Model for Land Suitability Mapping
Heumann, Benjamin W.; Walsh, Stephen J.; McDaniel, Phillip M.
2011-01-01
Recent advances in ecological modeling have focused on novel methods for characterizing the environment that use presence-only data and machine-learning algorithms to predict the likelihood of species occurrence. These novel methods may have great potential for land suitability applications in the developing world, where detailed land cover information is often unavailable or incomplete. This paper assesses the adaptation and application of the presence-only geographic species distribution model, MaxEnt, for agricultural crop suitability mapping in rural Thailand, where lowland paddy rice and upland field crops predominate. To assess this modeling approach, three independent crop presence datasets were used, including a social-demographic survey of farm households, a remote sensing classification of land use/land cover, and ground control points used for geodetic and thematic reference, which vary in their geographic distribution and sample size. Disparate environmental data were integrated to characterize environmental settings across Nang Rong District, a region of approximately 1,300 sq. km. Results indicate that the MaxEnt model is capable of modeling crop suitability for upland and lowland crops, including rice varieties, although model results varied between datasets due to the high sensitivity of the model to the distribution of observed crop locations in geographic and environmental space. Accuracy assessments indicate that model outcomes were influenced by the sample size and the distribution of sample points in geographic and environmental space. The need for further research into accuracy assessments of presence-only models lacking true absence data is discussed. We conclude that the MaxEnt model can provide good estimates of crop suitability, but many aspects, including the geographic distribution of input data and the assessment methods, need to be carefully scrutinized to ensure realistic modeling results. PMID:21860606
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters needed using the t test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach can be recommended in CRTs with binary outcomes due to its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
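For orientation, the most common way to size a CRT with a binary outcome is to inflate the two-proportion formula by the design effect 1 + (m-1)*ICC. The sketch below implements that textbook calculation, not the t-test/KC-corrected formula derived in the paper; the event rates, cluster size, and ICC are hypothetical.

```python
import math
from scipy.stats import norm

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Clusters per arm for a two-arm CRT with a binary outcome, using the
    standard design-effect inflation of the two-proportion formula."""
    z = norm.isf(alpha / 2) + norm.ppf(power)
    # Individuals per arm under individual randomization.
    n_ind = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc
    return math.ceil(n_ind * deff / m)

# Hypothetical: 30% vs 20% event rates, 50 subjects per cluster, ICC = 0.02.
print(clusters_per_arm(0.30, 0.20, m=50, icc=0.02))   # about 12 clusters per arm
```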
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
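The binomial core of the 90/95 criterion can be checked with a one-sided Clopper-Pearson lower confidence bound on POD. The sketch below is a minimal illustration of that check, not the full sequential DOEPOD procedure; it recovers the familiar result that 29 hits in 29 trials at a flaw size demonstrate 0.90 POD with 95% confidence.

```python
from scipy.stats import beta

def pod_lower_bound(hits, n, conf=0.95):
    """One-sided Clopper-Pearson lower confidence bound on POD."""
    if hits == 0:
        return 0.0
    return beta.ppf(1 - conf, hits, n - hits + 1)

# Smallest number of consecutive hits (no misses) demonstrating 90/95 POD.
n = 1
while pod_lower_bound(n, n) < 0.90:
    n += 1
print(n, round(pod_lower_bound(n, n), 3))   # 29 hits in 29 trials -> 0.902 lower bound
```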
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump concentrations for these elements, possibly because a larger pore-size filter was used for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations in analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats, as well as the application objectives, needs to be considered when determining acceptable amounts of variation.
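For clarity, the relative percent difference used to compare the paired methods is simply the absolute difference expressed as a percentage of the pair mean. A minimal helper is sketched below; the barium concentrations in the example are made-up values, not data from the study.

```python
def relative_percent_difference(pump, grab):
    """RPD between paired concentrations, as a percentage of their mean."""
    return abs(pump - grab) / ((pump + grab) / 2) * 100

# Hypothetical paired barium concentrations (ug/L) from the two methods.
print(round(relative_percent_difference(52.0, 60.5), 1))   # ~15 % difference
```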
Pharmacogenomics in neurology: current state and future steps.
Chan, Andrew; Pirmohamed, Munir; Comabella, Manuel
2011-11-01
In neurology, as in any other clinical specialty, there is a need to develop treatment strategies that allow stratification of therapies to optimize efficacy and minimize toxicity. Pharmacogenomics is one such method for therapy optimization: it aims to elucidate the relationship between human genome sequence variation and differential drug responses. Approaches have focused on candidate genes related to absorption, distribution, metabolism, and elimination (ADME; pharmacokinetic pathways), and on potential drug targets (pharmacodynamic pathways). To date, however, only a few genetic variants have been incorporated into clinical algorithms. Unfortunately, a large number of studies have produced contradictory results due to a number of deficiencies, including small sample sizes and inadequate phenotyping and genotyping strategies. Thus, there still exists an urgent need to establish biomarkers that could help to select patients with an optimal benefit-to-risk relationship. Here we review recent advances, and limitations, in pharmacogenomics for agents used in neuroimmunology, neurodegenerative diseases, ischemic stroke, epilepsy, and primary headaches. Work in all of these areas needs to progress on several fronts, including better standardized phenotyping, appropriate sample sizes through multicenter collaborations, and judicious use of new technological advances such as genome-wide approaches, next-generation sequencing and systems biology. In time, this is likely to lead to improvements in the benefit-harm balance of neurological therapies, cost efficiency, and identification of new drugs. Copyright © 2011 American Neurological Association.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches to identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
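The kind of simulation the study describes can be sketched in a few lines: generate logistic data at a chosen events-per-variable level, fit by maximum likelihood, and watch the coefficients drift away from their true values as EPV falls. The sketch below is an assumption-laden illustration (five covariates with true coefficients of 0.5 and roughly a 20% event rate); it reports medians, drops data sets where the fit fails (for example under separation), and does not attempt Firth's correction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
K = 5                                # covariates
TRUE_BETA = np.full(K, 0.5)          # assumed true logit coefficients

def median_ml_estimate(epv, p_event=0.2, n_rep=300):
    """Median ML estimate of the first coefficient at a given events-per-variable."""
    n = int(round(epv * K / p_event))
    intercept = np.log(p_event / (1 - p_event))
    fits, dropped = [], 0
    for _ in range(n_rep):
        X = rng.normal(size=(n, K))
        p = 1.0 / (1.0 + np.exp(-(intercept + X @ TRUE_BETA)))
        y = (rng.random(n) < p).astype(float)
        try:
            res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
            fits.append(res.params[1])
        except Exception:            # e.g. perfect separation; how such data sets are
            dropped += 1             # handled strongly affects simulation results
    return np.median(fits), dropped

for epv in (3, 5, 10, 20):
    est, dropped = median_ml_estimate(epv)
    print(f"EPV={epv:2d}: median beta_1 = {est:.2f} (true 0.50), dropped data sets: {dropped}")
```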
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
Ramay, Brooke M; Cerón, Alejandro; Méndez-Alburez, Luis Pablo; Lou-Meda, Randall
2017-01-01
Pediatric patients with Chronic Kidney Disease face several barriers to medication adherence that, if addressed, may improve clinical care outcomes. A cross-sectional questionnaire was administered in the Foundation for Children with Kidney Disease (FUNDANIER, Guatemala City) from September of 2015 to April of 2016 to identify the predisposing factors, enabling factors and need factors related to medication adherence. Sample size was calculated assuming simple random sampling with a 95% confidence level, a margin of error of 0.05 and an expected proportion of 87%. A total of 103 participants responded to the questionnaire (the calculated sample size was 96). Independent variables were defined and described, and their bivariate relationships to the dependent variables were assessed using odds ratios. Multivariate analysis was carried out using logistic regression. The mean adherence of the study population was 78% (SD 0.08, max = 96%, min = 55%). The mean adherence in transplant patients was 82% (SD 7.8, max 96%, min 63%), and the mean adherence in dialysis patients was 76% (SD 7.8, max 90%, min 55%). Adherence was positively associated with the mother's educational level and with higher monthly household income. Together, predisposing, enabling and need factors illustrate the complexities surrounding adherence in this pediatric CKD population. Public policy strategies aimed at improving access to comprehensive treatment regimens may facilitate treatment access, alleviate economic strain on caregivers, and improve adherence outcomes.
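The abstract above reports a calculated sample size of 96 for an expected proportion of 87% with 95% confidence and a 0.05 margin of error. The standard infinite-population formula gives roughly 174, so the reported figure presumably reflects a finite-population correction over the clinic's sampling frame; the frame size in the sketch below is a purely hypothetical value chosen to show how the correction brings the number down to about 96, not a figure taken from the study.

```python
import math

def n_for_proportion(p, margin, z=1.96, population=None):
    """Sample size to estimate a proportion p to within +/- margin:
    n0 = z^2 * p * (1 - p) / margin^2, with an optional finite-population correction."""
    n0 = z**2 * p * (1 - p) / margin**2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(n_for_proportion(0.87, 0.05))                  # about 174 with no correction
print(n_for_proportion(0.87, 0.05, population=210))  # about 96 for a hypothetical frame of ~210 patients
```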
An inventory of nursing education research.
Yonge, Olive J; Anderson, Marjorie; Profetto-McGrath, Joanne; Olson, Joanne K; Skillen, D Lynn; Boman, Jeanette; Ranson Ratusz, Ann; Anderson, Arnette; Slater, Linda; Day, Rene
2005-01-01
To describe the nursing education research literature in terms of quality, content areas under investigation, geographic location of the research, research designs utilized, sample sizes, instruments used to collect data, and funding sources. Quantitative and qualitative research literature published between January 1991 and December 2000 was identified and classified using an author-generated Relevance Tool. 1286 articles were accepted and entered into the inventory, and an additional 22 were retained as references as they were either literature reviews or meta-analyses. Not surprisingly, 90% of nursing education research was generated in North America and Europe, the industrialised parts of the world. Of the total number of articles accepted into the inventory, 61% were quantitative research based. The bulk of the research was conducted within the confines of a course or a program, with more than half based in educational settings. Sample sizes of the research conducted were diverse, with a bare majority using a sample of between 50 and 99 participants. More than half of the studies used questionnaires to obtain data. Surprisingly, 80% of the research represented in these articles was not funded. The number of nursing education research publications stabilised at approximately 120 per year. Research programs on teaching and learning environments and practice in nursing education need to be developed. Lobbying is needed to increase funding for this type of research at national and international levels.
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Materials and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated using statistical parameters of three variables: growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% when the variables' variation is low (s%<80%) and are higher in the case of higher variation. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detection of the mean changes of variables with powers higher than 90%; the highest precision is attained for changes in growing stock volume and the lowest for changes in the share of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of inventories. Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
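The abstract does not spell out the "general formula for non-stratified independent samples"; assuming it is the textbook version based on the coefficient of variation and an allowable relative error, which matches the relative-standard-error framing above, a minimal sketch looks like this (the CV and error target are illustrative, not the inventories' values).

```python
import math

def required_sample_size(cv_percent, allowable_error_percent, z=1.96):
    """Plots needed for a non-stratified simple random sample:
    n = (z * CV% / E%)^2, where E% is the allowable relative error of the mean."""
    return math.ceil((z * cv_percent / allowable_error_percent) ** 2)

# e.g. a variable with an 80% coefficient of variation estimated to within +/-3%
print(required_sample_size(80, 3))   # about 2732 sample plots
```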
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
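The cost-minimizing allocation formulas themselves are not reproduced here; as a minimal, hedged companion to the abstract, the sketch below checks the power of Yuen's trimmed-means test by simulation under unequal variances and heavy-tailed errors, using the `trim` option of SciPy's `ttest_ind` (available in SciPy 1.7+). The shift, variance ratio and allocations are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def yuen_power(n1, n2, shift=0.5, sd_ratio=2.0, trim=0.2, alpha=0.05, n_sims=2000):
    """Empirical power of Yuen's test (20% trimming) for a location shift
    between two heavy-tailed (t3) groups with unequal spread."""
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_t(df=3, size=n1)
        y = shift + sd_ratio * rng.standard_t(df=3, size=n2)
        hits += stats.ttest_ind(x, y, equal_var=False, trim=trim).pvalue < alpha
    return hits / n_sims

# Compare equal allocation with putting more observations in the noisier group
print(yuen_power(n1=60, n2=60))
print(yuen_power(n1=40, n2=80))
```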
Dental health status and treatment needs in the infantry regiment of the Malaysian Territorial Army.
Jasmin, Borhan; Jaafar, Nasruddin
2011-04-01
The aim of this study was to determine the dental health status and treatment needs of personnel in the Infantry Regiment of the Malaysian Territorial Army (TA). This cross-sectional study involved stratified and systematic random sampling with a total sample size of 300. Dental health status and treatment needs were assessed using the standard WHO oral assessment criteria (1997). The prevalence of caries experience was 96% (mean ± SD DMFT [decayed, missing, filled teeth] = 8.0 ± 5.5). Active decay prevalence was high (85%; mean ± SD = 3.6 ± 3.1), indicating high unmet treatment need. Missing teeth prevalence was high (69%; mean ± SD = 2.8 ± 3.7). Filled teeth prevalence was low (56%; mean ± SD = 1.5 ± 2.0). In all, 90% of participants required some form of dental treatment, of whom 85% required restorative treatment, 5% advanced restorative treatment, 36.7% extractions, and 45.3% prosthetic treatment. These findings suggest that there was a high need for dental treatment in the Infantry Battalions of Malaysian TA Regiments and that the service must be made available to cater to these needs.
ERIC Educational Resources Information Center
Ntukidem, Peter James; Ntukidem, Eno Peter; Eyo, Eno Etudor
2011-01-01
This study investigated the availability and distribution of staff and facilities/equipment in private and public special needs schools in Cross River State. Sixty-nine (69) teachers and three (3) principals of these schools constituted the sample size of the study. One hypothesis and one research question were postulated to guide the study. The…
Cognitive impairments in cancer patients represent an important clinical problem. Studies to date estimating prevalence of difficulties in memory, executive function, and attention deficits have been limited by small sample sizes and many have lacked healthy control groups. More information is needed on promising biomarkers and allelic variants that may help to determine the
Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada
2018-03-16
The reproductive system of a tree species has a substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae), sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (t̂m − t̂s = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes for the nursery-grown seedlings were much smaller (Ne = 27.54-34.86) than that recommended for short-term (Ne ≥ 100) population conservation. Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated in species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and to connect conservation genetics with ecological restoration.
Kwon, Sun-Hong; Park, Sun-Kyeong; Byun, Ji-Hye; Lee, Eui-Kyung
2017-08-01
In order to look beyond cost-effectiveness analysis, this study used a multi-criteria decision analysis (MCDA), which reflects societal values with regard to reimbursement decisions. This study aims to elicit societal preferences regarding the reimbursement decision criteria for anticancer drugs from the public and from healthcare professionals. Eight criteria were defined based on a literature review and focus group sessions: disease severity, disease population size, pediatric targets, unmet needs, innovation, clinical benefits, cost-effectiveness, and budget impact. Using quota sampling and purposive sampling, 300 participants from the Korean public and 30 healthcare professionals were selected for the survey. Preferences were elicited using an analytic hierarchy process. Both groups rated clinical benefits the highest, followed by cost-effectiveness and disease severity, but differed with regard to disease population size and unmet needs. Innovation was the least preferred criterion. Clinical benefits and other social values should be reflected appropriately, alongside cost-effectiveness, in healthcare coverage decisions. MCDA can be used to assess decision priorities for complicated health policy decisions, including reimbursement decisions. It is a promising method for making logical and transparent drug reimbursement decisions that consider a broad range of factors which are perceived as important by relevant stakeholders.
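For readers unfamiliar with the analytic hierarchy process used above, the sketch below shows the standard principal-eigenvector computation of criterion weights from a pairwise comparison matrix. The matrix entries and the four-criterion subset are invented for illustration and are not the study's elicited judgements.

```python
import numpy as np

criteria = ["clinical benefits", "cost-effectiveness", "disease severity", "budget impact"]

# Hypothetical reciprocal pairwise-comparison matrix on Saaty's 1-9 scale;
# A[i, j] states how much more important criterion i is judged than criterion j.
A = np.array([
    [1.0, 3.0, 4.0, 5.0],
    [1/3, 1.0, 2.0, 3.0],
    [1/4, 1/2, 1.0, 2.0],
    [1/5, 1/3, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency check (0.90 is the commonly tabulated random index for 4x4 matrices)
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
for name, w in zip(criteria, weights):
    print(f"{name:20s} {w:.3f}")
print(f"consistency ratio = {ci / 0.90:.3f} (conventionally acceptable below 0.10)")
```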
Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo
2015-02-01
Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
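As a minimal, hedged sketch of the "unweighted cluster summary" analysis mentioned above (not the authors' code or their sample size formula), the simulation below generates a two-period cluster crossover trial with a binary outcome and tests the within-cluster difference in period proportions with a one-sample t-test. For simplicity the cluster effect is assumed common to both periods, and the cluster size, baseline risk, risk difference and between-cluster spread are illustrative values loosely in the spirit of the intensive-care setting, not the paper's parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def one_trial(n_clusters=20, m=200, p0=0.10, risk_diff=-0.02, sd_cluster=0.02):
    """Two-period cluster crossover with a binary outcome, analysed by the
    unweighted cluster summary method (t-test on cluster-level period differences)."""
    diffs = []
    for _ in range(n_clusters):
        base = float(np.clip(p0 + rng.normal(0, sd_cluster), 0.01, 0.99))
        p_trt = float(np.clip(base + risk_diff, 0.0, 1.0))
        prop_ctrl = rng.binomial(m, base) / m     # control-period event proportion
        prop_trt = rng.binomial(m, p_trt) / m     # intervention-period event proportion
        diffs.append(prop_trt - prop_ctrl)
    return stats.ttest_1samp(diffs, 0.0).pvalue

def power(alpha=0.05, n_sims=1000, **kwargs):
    return np.mean([one_trial(**kwargs) < alpha for _ in range(n_sims)])

print(power(n_clusters=20))   # empirical power for a 2-percentage-point risk reduction
```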
NASA Astrophysics Data System (ADS)
Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.
2007-01-01
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter tester (updated and commercially available now as the TSI 3160) manufactured by the TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in the size range.
2012-01-01
Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small-sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
Effect of bait and gear type on channel catfish catch and turtle bycatch in a reservoir
Cartabiano, Evan C.; Stewart, David R.; Long, James M.
2014-01-01
Hoop nets have become the preferred gear choice to sample channel catfish Ictalurus punctatus but the degree of bycatch can be high, especially due to the incidental capture of aquatic turtles. While exclusion and escapement devices have been developed and evaluated, few have examined bait choice as a method to reduce turtle bycatch. The use of Zote™ soap has shown considerable promise to reduce bycatch of aquatic turtles when used with trotlines but its effectiveness in hoop nets has not been evaluated. We sought to determine the effectiveness of hoop nets baited with cheese bait or Zote™ soap and trotlines baited with shad or Zote™ soap as a way to sample channel catfish and prevent capture of aquatic turtles. We used a repeated-measures experimental design and treatment combinations were randomly assigned using a Latin-square arrangement. Eight sampling locations were systematically selected and then sampled with either hoop nets or trotlines using Zote™ soap (both gears), waste cheese (hoop nets), or cut shad (trotlines). Catch rates did not statistically differ among the gear–bait-type combinations. Size bias was evident with trotlines consistently capturing larger sized channel catfish compared to hoop nets. Results from a Monte Carlo bootstrapping procedure estimated the number of samples needed to reach predetermined levels of sampling precision to be lowest for trotlines baited with soap. Moreover, trotlines baited with soap caught no aquatic turtles, while hoop nets captured many turtles and had high mortality rates. We suggest that Zote™ soap used in combination with multiple hook sizes on trotlines may be a viable alternative to sample channel catfish and reduce bycatch of aquatic turtles.
Tai, Dean C.S.; Wang, Shi; Cheng, Chee Leong; Peng, Qiwen; Yan, Jie; Chen, Yongpeng; Sun, Jian; Liang, Xieer; Zhu, Youfu; Rajapakse, Jagath C.; Welsch, Roy E.; So, Peter T.C.; Wee, Aileen; Hou, Jinlin; Yu, Hanry
2014-01-01
Background & Aims There is increasing need for accurate assessment of liver fibrosis/cirrhosis. We aimed to develop qFibrosis, a fully-automated assessment method combining quantification of histopathological architectural features, to address unmet needs in core biopsy evaluation of fibrosis in chronic hepatitis B (CHB) patients. Methods qFibrosis was established as a combined index based on 87 parameters of architectural features. Images acquired from 25 Thioacetamide-treated rat samples and 162 CHB core biopsies were used to train and test qFibrosis and to demonstrate its reproducibility. qFibrosis scoring was analyzed employing Metavir and Ishak fibrosis staging as standard references, and collagen proportionate area (CPA) measurement for comparison. Results qFibrosis faithfully and reliably recapitulates Metavir fibrosis scores, as it can identify differences between all stages in both animal samples (p <0.001) and human biopsies (p <0.05). It is robust to sampling size, allowing for discrimination of different stages in samples of different sizes (area under the curve (AUC): 0.93–0.99 for animal samples: 1–16 mm2; AUC: 0.84–0.97 for biopsies: 10–44 mm in length). qFibrosis can significantly predict staging underestimation in suboptimal biopsies (<15 mm) and under- and over-scoring by different pathologists (p <0.001). qFibrosis can also differentiate between Ishak stages 5 and 6 (AUC: 0.73, p = 0.008), suggesting the possibility of monitoring intra-stage cirrhosis changes. Best of all, qFibrosis demonstrates superior performance to CPA on all counts. Conclusions qFibrosis can improve fibrosis scoring accuracy and throughput, thus allowing for reproducible and reliable analysis of efficacies of anti-fibrotic therapies in clinical research and practice. PMID:24583249
Variation Across U.S. Assisted Living Facilities: Admissions, Resident Care Needs, and Staffing.
Han, Kihye; Trinkoff, Alison M; Storr, Carla L; Lerner, Nancy; Yang, Bo Kyum
2017-01-01
Though more people in the United States currently reside in assisted living facilities (ALFs) than nursing homes, little is known about ALF admission policies, resident care needs, and staffing characteristics. We therefore conducted this study using a nationwide sample of ALFs to examine these factors, along with comparison of ALFs by size. Cross-sectional secondary data analysis using data from the 2010 National Survey of Residential Care Facilities. Measures included nine admission policy items, seven items on the proportion of residents with selected conditions or care needs, and six items on staffing characteristics (e.g., access to licensed nurse, aide training). Facilities (n = 2,301) were divided into three categories by size: small, 4 to 10 beds; medium, 11 to 25 beds; and large, 26 or more beds. Analyses took complex sampling design effects into account to project national U.S. estimates. More than half of ALFs admitted residents with considerable healthcare needs and served populations that required nursing care, such as for transfers, medications, and eating or dressing. Staffing was largely composed of patient care aides, and fewer than half of ALFs had licensed care provider (registered nurse, licensed practical nurse) hours. Smaller facilities tended to have more inclusive admission policies and residents with more complex care needs (more mobility, eating and medication assistance required, short-term memory issues, p < .01) and less access to licensed nurses than larger ALFs (p < .01). This study suggests ALFs are caring for and admitting residents with considerable care needs, indicating potential overlap with nursing home populations. Despite this finding, ALF regulations lag far behind those in effect for nursing homes. In addition, measurement of care outcomes is critically needed to ensure appropriate ALF care quality. As more people choose ALFs, outcome measures for ALFs, which are now unavailable, should be developed to allow for oversight and monitoring of care quality. © 2016 Sigma Theta Tau International.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
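The authors' software is not reproduced here; as a hedged sketch of the underlying calculations, the snippet below estimates the negative binomial clumping parameter k from simulated "pre-sampling" counts using the moment estimator and then resamples those counts to see how the precision of the sample mean improves with the number of trees sampled. The infestation mean, k and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical pre-sampling counts of galls per tree (clumped, negative binomial)
true_mean, true_k = 4.0, 0.8
counts = rng.negative_binomial(n=true_k, p=true_k / (true_k + true_mean), size=300)

# Moment estimator of the clumping parameter: k = mean^2 / (variance - mean)
m, v = counts.mean(), counts.var(ddof=1)
k_hat = m**2 / (v - m) if v > m else float("inf")
print(f"sample mean = {m:.2f}, k estimate = {k_hat:.2f} (simulated with k = {true_k})")

# Resampling check: relative standard error of the mean for different sample sizes
for n in (10, 25, 40, 100):
    means = [rng.choice(counts, size=n, replace=False).mean() for _ in range(2000)]
    print(f"n = {n:3d} trees: relative SE of the mean ≈ {np.std(means) / m:.2f}")
```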
Holt, Maxine; Powell, Susan
2015-01-01
Health and well-being in the workplace is a concept that is understood as a fundamental business case for a productive, happy and healthy workforce. The workplace is also a setting through which knowledge and skills about health can be disseminated to assist people in improving their health and well-being. Public health professionals are in a position to develop workplace health and well-being interventions, which support those in jobs and those seeking employment. They can also influence the extent to which work and the workplace affect health and well-being outcomes. This article aims to identify the main health and well-being needs of a sample of small and medium-sized enterprises (SMEs) across Greater Manchester and the support that public health professionals can offer. The research adopted a Health Needs Assessment (HNA) approach using convenience and opportunistic sampling methods from the list of SMEs in Greater Manchester. The SMEs varied in size and type of business, and 91 telephone interviews, using semi-structured questions, were used to collect data identifying the health and well-being needs of the sampled SMEs. The qualitative data were analysed thematically, and two key themes emerged from the study. Acute seasonal sickness was the most pressing reason for employee absence from work (viruses, flu, seasonal disorders) for the SMEs in this research, and this fed into the theme of sickness presenteeism. This research highlighted that employees will present at work with acute illness that requires rest, is easily transmitted to other employees and most likely will take a longer time to recover from as cross-infection and re-infection occur. A subsidiary theme was that of authenticity and the reporting of sickness, contributing further to sickness presenteeism as employees seek to legitimise their illness. This article provides issues which are specific to SMEs in Greater Manchester. In particular, the pressing problem of sickness absence and sickness presenteeism is related to seasonal illness and the effects these have on SMEs in Greater Manchester. Public health preventative services such as the provision of flu vaccines may be one way of supporting SMEs with acute seasonal episodes of illness. © Royal Society for Public Health 2014.
Fowler, Dawnovise N; Faulkner, Monica
2011-12-01
In this article, meta-analytic techniques are used to examine existing intervention studies (n = 11) to determine their effects on substance abuse among female samples of intimate partner abuse (IPA) survivors. This research serves as a starting point for greater attention in research and practice to the implementation of evidence-based, integrated services to address co-occurring substance abuse and IPA victimization among women as major intersecting public health problems. The results show greater effects in three main areas. First, greater effect sizes exist in studies where larger numbers of women experienced current IPA. Second, studies with a lower mean age also showed greater effect sizes than studies with a higher mean age. Lastly, studies with smaller sample sizes have greater effects. This research helps to facilitate cohesion in the knowledge base on this topic, and the findings of this meta-analysis, in particular, contribute needed information to gaps in the literature on the level of promise of existing interventions to impact substance abuse in this underserved population. Published by Elsevier Inc.
Penile length and circumference: an Indian study.
Promodu, K; Shanmughadas, K V; Bhat, S; Nair, K R
2007-01-01
Apprehension about the normal size of the penis is a major concern for men. The aim of the present investigation is to estimate the penile length and circumference of Indian males and to compare the results with data from other countries. The results will help in counseling patients worried about penile size and seeking penis enlargement surgery. Penile length in flaccid and stretched conditions and circumference were measured in a group of 301 physically normal men. Erect length and circumference were measured for 93 subjects. Mean flaccid length was found to be 8.21 cm, mean stretched length 10.88 cm and circumference 9.14 cm. Mean erect length was found to be 13.01 cm and erect circumference was 11.46 cm. Penile dimensions were found to be correlated with anthropometric parameters. Insight into the normative data on penile size of Indian males was obtained. There are significant differences in the mean penile length and circumference of the Indian sample compared with the data reported from other countries. The study needs to be continued with a larger sample to establish normative data applicable to the general population.
Dark field imaging system for size characterization of magnetic micromarkers
NASA Astrophysics Data System (ADS)
Malec, A.; Haiden, C.; Kokkinis, G.; Keplinger, F.; Giouroudi, I.
2017-05-01
In this paper we demonstrate a dark field video imaging system for the detection and size characterization of individual magnetic micromarkers suspended in liquid and the detection of pathogens utilizing magnetically labelled E. coli. The system follows dynamic processes and interactions of moving micro/nano objects close to or below the optical resolution limit, and is especially suitable for small sample volumes ( 10 μl). The developed detection method can be used to obtain clinical information about liquid contents when an additional biological protocol is provided, i.e., binding of microorganisms (e.g. E. coli) to specific magnetic markers. Some of the major advantages of our method are the increased sizing precision in the micro- and nano-range as well as the setup's simplicity, making it a perfect candidate for miniaturized devices. Measurements can thus be carried out in a quick, inexpensive, and compact manner. A minor limitation is that the concentration range of micromarkers in a liquid sample needs to be adjusted so that the number of individual particles in the microscope's field of view is sufficient.
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Satake, K.; Goto, T.; Takahashi, T.
2016-12-01
Estimating tsunami amplitude from tsunami sand deposits has been a challenge. The grain size distribution of a tsunami sand deposit may be correlated with the tsunami inundation process, and further with its source characteristics. In order to test this hypothesis, we need a tsunami sediment transport model that can accurately estimate the grain size distribution of tsunami deposits. Here, we build and validate a tsunami sediment transport model that can simulate grain size distribution. Our numerical model has three layers: a suspended load layer, an active bed layer, and a parent bed layer. The two bed layers contain information about the grain size distribution. This numerical model can handle a wide range of grain sizes from 0.063 mm (4 ϕ) to 5.657 mm (-2.5 ϕ). We apply the numerical model to simulate the sedimentation process during the 2011 Tohoku earthquake in Numanohama, Iwate prefecture, Japan. The grain size distributions at 15 sample points along a 900 m transect from the beach are used to validate the tsunami sediment transport model. The tsunami deposits are dominated by coarse sand with diameters of 0.5-1 mm and their thickness is up to 25 cm. Our tsunami model reproduces well the observed tsunami run-ups, which ranged from 16 to 34 m along the steep valley in Numanohama. The shapes of the simulated grain size distributions at many sample points located within 300 m of the shoreline are similar to the observations. The differences between observed and simulated peaks of the grain size distributions are less than 1 ϕ. Our results also show that the simulated sand thickness distribution along the transect is consistent with the observation.
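For readers unfamiliar with the phi (ϕ) grain size scale used above, the small helper below applies the standard Krumbein definition, ϕ = -log2(diameter in mm), and reproduces the stated bounds of the model's grain size range; it is a generic conversion, not part of the authors' model.

```python
import math

def to_phi(d_mm):
    """Krumbein phi scale: phi = -log2(diameter in millimetres)."""
    return -math.log2(d_mm)

def to_mm(phi):
    return 2.0 ** (-phi)

print(round(to_phi(0.063), 2))   # ≈ 3.99, i.e. the 4-phi fine end of the range
print(round(to_phi(5.657), 2))   # -2.5 phi, the coarse end of the range
print(to_mm(1.0), to_mm(0.0))    # 0.5 mm and 1.0 mm, the coarse-sand band dominating the deposits
```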
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
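As a hedged illustration of the kind of sample size re-estimation design being compared (not the authors' optimality criterion or any specific published rule), the sketch below starts from an optimistic planning effect, looks at the unblinded interim estimate, and re-computes the final per-arm size capped at a maximum. A real design would also control the type I error of such an adaptation (for example with a combination test), which is deliberately omitted here to keep the mechanics visible; all numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def required_n(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Per-arm n for a two-sample comparison of means (normal approximation)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (sigma * z / delta) ** 2))

def adaptive_power(true_delta, planned_delta=0.5, n_max=400, n_sims=2000):
    n_start = required_n(planned_delta)           # planned on an optimistic effect
    n_interim = n_start // 2
    rejections = 0
    for _ in range(n_sims):
        trt = rng.normal(true_delta, 1.0, n_max)  # generate the maximum upfront
        ctl = rng.normal(0.0, 1.0, n_max)
        observed = trt[:n_interim].mean() - ctl[:n_interim].mean()
        n_final = min(n_max, max(n_start, required_n(max(observed, 0.1))))
        t, p = stats.ttest_ind(trt[:n_final], ctl[:n_final])
        rejections += (p < 0.05) and (t > 0)
    return n_start, rejections / n_sims

for true_delta in (0.5, 0.35, 0.25):
    n_start, pw = adaptive_power(true_delta)
    print(f"true effect {true_delta:.2f}: planned n/arm = {n_start}, empirical power = {pw:.2f}")
```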
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
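The study's own computations are not shown in the abstract; a sketch of the kind of achieved-power calculation it describes, using statsmodels' two-sample t-test power routines with illustrative effect sizes and group sizes, would look like this.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Achieved power for a given sample effect size and per-group sample size
for d, n in [(0.8, 15), (0.5, 30), (0.2, 200)]:
    power = analysis.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
    print(f"d = {d}, n per group = {n}: power = {power:.2f}")

# Per-group n needed to reach 80% power for a medium effect
print(analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0))
```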
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
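The article's specific shortcut is not reproduced here; one widely taught quick rule in the same spirit is Lehr's approximation, about 16/d² participants per group for 80% power at a two-sided alpha of 0.05, compared below with the usual normal-approximation formula.

```python
from math import ceil
from scipy.stats import norm

def lehr_n(d):
    """Lehr's quick rule: per-group n for 80% power at two-sided alpha = 0.05."""
    return ceil(16 / d**2)

def normal_approx_n(d, alpha=0.05, power=0.80):
    """Per-group n from the normal approximation: 2 * (z_{1-a/2} + z_power)^2 / d^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 / d**2)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: Lehr's rule n = {lehr_n(d)}, normal approximation n = {normal_approx_n(d)}")
```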
Philippe, Allan; Schaumann, Gabriele E.
2014-01-01
In this study, we evaluated hydrodynamic chromatography (HDC) coupled with inductively coupled plasma mass spectrometry (ICP-MS) for the analysis of nanoparticles in environmental samples. Using two commercially available columns (Polymer Labs-PDSA type 1 and 2), a set of well characterised calibrants and a new external time marking method, we showed that flow rate and eluent composition have little influence on the size resolution and can therefore be adapted to the particularities of the sample. Monitoring the agglomeration of polystyrene nanoparticles over time succeeded without observable disagglomeration, suggesting that even weak agglomerates can be measured using HDC. Simultaneous determination of gold colloid concentration and size using ICP-MS detection was validated for elemental concentrations in the ppb range. HDC-ICP-MS was successfully applied to samples containing a high organic and ionic background. Indeed, online combination of UV-visible, fluorescence and ICP-MS detectors allowed distinguishing between organic molecules and inorganic colloids during the analysis of Ag nanoparticles in synthetic surface waters and TiO2 and ZnO nanoparticles in commercial sunscreens. Taken together, our results demonstrate that HDC-ICP-MS is a flexible, sensitive and reliable method to measure the size and concentration of inorganic colloids in complex media and suggest that there may be a promising future for the application of HDC in environmental science. Nonetheless, the rigorous measurement of agglomerates and of matrices containing natural colloids still needs to be studied in detail. PMID:24587393
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Brown, R C; Witt, A; Fegert, J M; Keller, F; Rassenhofer, M; Plener, P L
2017-08-01
Children and adolescents are a vulnerable group for developing post-traumatic stress symptoms after natural or man-made disasters. In the light of increasing numbers of refugees under the age of 18 years worldwide, there is a significant need for effective treatments. This meta-analytic review investigates specific psychosocial treatments for children and adolescents after man-made and natural disasters. In a systematic literature search using MEDLINE, EMBASE and PsycINFO, as well as hand-searching existing reviews and contacting professional associations, 36 studies were identified. Random- and mixed-effects models were applied to test for average effect sizes and moderating variables. Overall, treatments showed high effect sizes in pre-post comparisons (Hedges' g = 1.34) and medium effect sizes compared with control conditions (Hedges' g = 0.43). Treatments investigated by at least two studies were cognitive-behavioural therapy (CBT), eye movement desensitization and reprocessing (EMDR), narrative exposure therapy for children (KIDNET) and classroom-based interventions, which showed similar effect sizes. However, studies were very heterogeneous with regard to their outcomes. Effects were moderated by type of profession (a higher level of training leading to higher effect sizes). A number of effective psychosocial treatments for child and adolescent survivors of disasters exist. CBT, EMDR, KIDNET and classroom-based interventions can be equally recommended. Although disasters require immediate reactions and improvisation, future studies with larger sample sizes and rigorous methodology are needed.
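As a hedged illustration of the random-effects pooling behind summary values like the Hedges' g figures above (not the review's actual data or code), the sketch below applies the DerSimonian-Laird estimator to a handful of invented study-level effect sizes and variances.

```python
import numpy as np

def dersimonian_laird(g, v):
    """Random-effects pooled estimate from effect sizes g and within-study variances v."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)            # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return g_re, g_re - 1.96 * se, g_re + 1.96 * se, tau2

# Invented pre-post Hedges' g values and variances for five hypothetical studies
g = [1.6, 1.1, 0.9, 1.8, 1.2]
v = [0.10, 0.08, 0.12, 0.20, 0.09]
pooled, lo, hi, tau2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.2f} [{lo:.2f}; {hi:.2f}], tau^2 = {tau2:.2f}")
```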
Cognitive and Occupational Function in Survivors of Adolescent Cancer.
Nugent, Bethany D; Bender, Catherine M; Sereika, Susan M; Tersak, Jean M; Rosenzweig, Margaret
2018-02-01
Adolescents with cancer have unique developmental considerations. These include brain development, particularly in the frontal lobe, and a focus on completing education and entering the workforce. Cancer and treatment at this stage may therefore uniquely affect survivors' experience of cognitive and occupational function. An exploratory, cross-sectional, descriptive comparative study was employed to describe cognitive and occupational function in adult survivors of adolescent cancer (diagnosed between the ages of 15 and 21 years) and explore differences from age- and gender-matched controls. In total, 23 survivors and 14 controls participated in the study. While significant differences were not found between the groups on measures of cognitive and occupational function, several small and medium effect sizes were found, suggesting that survivors may have greater difficulty than controls. Two small effect sizes were found in measures of neuropsychological performance (the Digit Vigilance test [d = 0.396] and Stroop test [d = 0.226]). Small and medium effect sizes ranging from 0.269 to 0.605 were found for aspects of perceived and total cognitive function. A small effect size was also found in work output (d = 0.367). While we did not find significant differences in cognitive or occupational function between survivors and controls, the effect sizes observed point to the need for future research. Future work using a larger sample size and a longitudinal design is needed to further explore cognitive and occupational function in this vulnerable and understudied population and to assist in the understanding of patterns of change over time.
Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.
2013-01-01
The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course. PMID:24358380
Williams, Jessica A R; Nelson, Candace C; Cabán-Martinez, Alberto J; Katz, Jeffrey N; Wagner, Gregory R; Pronk, Nicolaas P; Sorensen, Glorian; McLellan, Deborah L
2015-09-01
To conduct validation analyses for a new measure of the integration of worksite health protection and health promotion approaches developed in earlier research. A survey of small- to medium-sized employers located in the United States was conducted between October 2013 and March 2014 (n = 111). Cronbach α coefficient was used to assess reliability, and Pearson correlation coefficients were used to assess convergent validity. The integration score was positively associated with the measures of occupational safety and health and health promotion activities/policies, supporting its convergent validity (Pearson correlation coefficients of 0.32 to 0.47). Cronbach α coefficient was 0.94, indicating excellent reliability. The integration score seems to be a promising tool for assessing integration of health promotion and health protection. Further work is needed to test its dimensionality and validate its use in other samples.
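For reference, the Cronbach's alpha reported above can be computed directly from item-level responses; the sketch below uses randomly generated placeholder data with one underlying construct rather than the survey's items.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with rows = respondents and columns = scale items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(6)
latent = rng.normal(size=(111, 1))                 # one underlying construct
items = latent + 0.6 * rng.normal(size=(111, 8))   # eight noisy indicators of it
print(f"alpha = {cronbach_alpha(items):.2f}")
```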
Composite outcomes in randomized clinical trials: arguments for and against.
Ross, Sue
2007-02-01
Composite outcomes that combine a number of individual outcomes (such as types of morbidity) are frequently used as primary outcomes in obstetrical trials. The main argument for their use is to ensure that trials can answer important clinical questions in a timely fashion, without needing huge sample sizes. Arguments against their use are that composite outcomes may be difficult to use and interpret, leading to errors in sample size estimation, possible contradictory trial results, and difficulty in interpreting findings. Such problems may reduce the credibility of the research, and may impact on the implementation of findings. Composite outcomes are an attractive solution to help to overcome the problem of limited available resources for clinical trials. However, future studies should carefully consider both the advantages and disadvantages before using composite outcomes. Rigorous development and reporting of composite outcomes is essential if the research is to be useful.
Scott, Frank I; McConnell, Ryan A; Lewis, Matthew E; Lewis, James D
2012-04-01
Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published gastroenterology research from 1980 to 2010. Twenty original clinical articles were randomly selected from each of three journals from 1980, 1990, 2000, and 2010. Each article was assessed for topic, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, reporting of various statistical methods, and external funding. From 1980 to 2010, there was a significant increase in analytic studies, clinical outcomes, number of authors per article, multicenter collaboration, sample size, and external funding. There was increased reporting of P values, confidence intervals, and power calculations, and increased use of large multicenter databases, multivariate analyses, and bioinformatics. The complexity of clinical gastroenterology and hepatology research has increased dramatically, highlighting the need for advanced training of clinical investigators.
Incidental Lewy Body Disease: Clinical Comparison to a Control Cohort
Adler, Charles H.; Connor, Donald J.; Hentz, Joseph G.; Sabbagh, Marwan N.; Caviness, John N.; Shill, Holly A.; Noble, Brie; Beach, Thomas G.
2010-01-01
Limited clinical information has been published on cases pathologically diagnosed with incidental Lewy body disease (ILBD). Standardized, longitudinal movement and cognitive data were collected on a cohort of subjects enrolled in the Sun Health Research Institute Brain and Body Donation Program. Of 277 autopsied subjects who had antemortem clinical evaluations within the previous 3 years, 76 did not have Parkinson's disease, a related disorder, or dementia, of whom 15 (20%) had ILBD. Minor extrapyramidal signs were common in subjects with and without ILBD. Cognitive testing revealed an abnormality in the ILBD group in the Trails B test only. ILBD cases had olfactory dysfunction; however, the sample size was very small. This preliminary report revealed that ILBD cases have movement and cognitive findings that, for the most part, were not out of proportion to those of similarly assessed and age-similar cases without Lewy bodies. A larger sample size is needed to have the power to better assess group differences. PMID:20175211
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies have minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
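The regression effective sample size proposed in the paper is not reproduced here. For orientation, the sketch below computes the simpler "mean effective sample size" that the abstract contrasts it with, under the standard generalised-least-squares definition n_e = 1' R^{-1} 1 for a phylogenetic correlation matrix R (an assumption on my part rather than a quotation from the paper), using a toy Brownian-motion matrix for four tips made of two pairs of close relatives.

```python
import numpy as np

# Toy Brownian-motion correlation matrix for 4 tips: two pairs of close relatives
# (off-diagonal entries are shared branch length divided by total tree depth).
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

ones = np.ones(len(R))
mean_ess = ones @ np.linalg.solve(R, ones)   # n_e = 1' R^{-1} 1
print(f"{len(R)} tips, mean effective sample size = {mean_ess:.2f}")
# Strong within-pair correlation leaves roughly two independent observations' worth of data.
```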
2011-01-01
Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistical model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses--which are also heterogeneous--required to address this situation. PMID:22129110
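A hedged sketch of the kind of model described, a logistic regression with a locality-size by SES interaction term, is shown below in Python on synthetic data; the variable names (locality, ses, condom_use) and the coefficients are hypothetical and not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "locality": rng.choice(["rural", "semi_urban", "urban"], n),
    "ses": rng.choice(["low", "mid", "high"], n),
})
# Synthetic outcome: urban residence raises, low SES lowers, the odds of condom use.
logit_p = -1.0 + 0.4 * (df["locality"] == "urban") - 0.3 * (df["ses"] == "low")
df["condom_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("condom_use ~ C(locality) * C(ses)", data=df).fit(disp=0)
print(model.params.filter(like=":"))  # interaction terms test moderation by SES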
Life-history strategies of the rock hind grouper Epinephelus adscensionis at Ascension Island.
Nolan, E T; Downes, K J; Richardson, A; Arkhipkin, A; Brickle, P; Brown, J; Mrowicki, R J; Shcherbich, Z; Weber, N; Weber, S B
2017-12-01
Epinephelus adscensionis sampled from Ascension Island, South Atlantic Ocean, exhibits distinct life-history traits, including larger maximum size and size at sexual maturity than previous studies have demonstrated for this species in other locations. Otolith analysis yielded a maximum estimated age of 25 years, with calculated von Bertalanffy growth parameters of: L ∞ = 55·14, K = 0·19, t 0 = -0·88. Monthly gonad staging and analysis of gonad-somatic index (I G ) provide evidence for spawning from July to November with an I G peak in August (austral winter), during which time somatic growth is also suppressed. Observed patterns of sexual development were supportive of protogyny, although further work is needed to confirm this. Mean size at sexual maturity for females was 28·9 cm total length (L T ; 95% C.I. 27·1-30·7 cm) and no females were found >12 years and 48·0 cm L T , whereas all confirmed males sampled were mature, >35·1 cm L T with an age range from 3 to 18 years. The modelled size at which 50% of individuals were male was 41·8 cm (95% C.I. 40·4-43·2 cm). As far as is known, this study represents the first comprehensive investigation into the growth and reproduction of E. adscensionis at its type locality of Ascension Island and suggests that the population may be affected less by fisheries than elsewhere in its range. Nevertheless, improved regulation of the recreational fishery and sustained monitoring of abundance, length frequencies and life-history parameters are needed to inform long-term management measures, which could include the creation of marine reserves, size or temporal catch limits and stricter export controls. © 2017 The Fisheries Society of the British Isles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maloney, Daniel J; Monazam, Esmail R; Casleton, Kent H
Char samples representing a range of combustion conditions and extents of burnout were obtained from a well-characterized laminar flow combustion experiment. Individual particles from the parent coal and char samples were characterized to determine distributions in particle volume, mass, and density at different extents of burnout. The data were then compared with predictions from a comprehensive char combustion model referred to as the char burnout kinetics model (CBK). The data clearly reflect the particle-to-particle heterogeneity of the parent coal and show a significant broadening in the size and density distributions of the chars resulting from both devolatilization and combustion. Data for chars prepared in a lower oxygen content environment (6% oxygen by vol.) are consistent with zone II type combustion behavior, where most of the combustion occurs near the particle surface. At higher oxygen contents (12% by vol.), the data show indications of more burning occurring in the particle interior. The CBK model does a good job of predicting the general nature of the development of size and density distributions during burning, but the input distribution of particle size and density is critical to obtaining good predictions. A significant reduction in particle size was observed to occur as a result of devolatilization. For comprehensive combustion models to provide accurate predictions, this size reduction phenomenon needs to be included in devolatilization models so that representative char distributions are carried through the calculations.
ERIC Educational Resources Information Center
Bockenholt, Ulf; Van Der Heijden, Peter G. M.
2007-01-01
Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet this method is not often applied because: (i) of its lower efficiency and the resulting need for larger sample sizes which make applications of RR costly; (ii) despite its privacy-protection mechanism the RR design may not be followed by every respondent; and…
American Business and Older Workers: A Road Map to the 21st Century.
ERIC Educational Resources Information Center
American Association of Retired Persons, Washington, DC.
A survey of a random sample of 400 companies (100 in each of 4 size groupings) was taken in December 1994 to determine business attitudes toward older workers (defined as 50 or older) and to provide insight into how older workers can best position themselves in order to get and keep the jobs they need. In each company the person interviewed was…
Needs and Challenges of Daily Life for People with Down Syndrome Residing in the City of Rome, Italy
ERIC Educational Resources Information Center
Bertoli, M.; Biasini, G.; Calignano, M. T.; Celani, G.; De Grossi, G.; Digilio, M. C.; Fermariello, C. C.; Loffredo, G.; Luchino, F.; Marchese, A.; Mazotti, S.; Menghi, B.; Razzano, C.; Tiano, C.; Zambon Hobart, A.; Zampino, G.; Zuccala, G.
2011-01-01
Background: Population-based surveys on the quality of life of people with Down syndrome (DS) are difficult to perform because of ethical and legal policies regarding privacy and confidential information, but they are essential for service planning. Little is known about the sample size and variability of quality of life of people with DS living…
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrap method outperform Sobel's method; the distribution of the product method is recommended for practical use because of its lower computational burden compared with bootstrapping. An R package has been developed to implement the product method for sample size determination in longitudinal mediation study designs.
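For readers unfamiliar with the tests being compared, the sketch below contrasts Sobel's test with a percentile bootstrap for a simple, single-level (cross-sectional) mediation model in Python; it illustrates only the test logic, not the multilevel longitudinal model or the R package described in the abstract, and the data are simulated.

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)            # mediator
y = 0.3 * m + 0.1 * x + rng.normal(size=n)  # outcome

fit_a = sm.OLS(m, sm.add_constant(x)).fit()
fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
a, sa = fit_a.params[1], fit_a.bse[1]
b, sb = fit_b.params[1], fit_b.bse[1]       # coefficient of the mediator

# Sobel's test for the indirect effect a*b
z = a * b / np.sqrt(a**2 * sb**2 + b**2 * sa**2)
p_sobel = 2 * stats.norm.sf(abs(z))

# Percentile bootstrap of a*b
boot = []
for _ in range(1000):
    i = rng.integers(0, n, n)
    a_b = sm.OLS(m[i], sm.add_constant(x[i])).fit().params[1]
    b_b = sm.OLS(y[i], sm.add_constant(np.column_stack([m[i], x[i]]))).fit().params[1]
    boot.append(a_b * b_b)
print(round(z, 2), round(p_sobel, 4), np.percentile(boot, [2.5, 97.5]))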
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
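The point can be checked with the usual margin-of-error formula: the precision of a poll is governed by the absolute sample size n, and the finite-population correction involving the population size N barely matters unless n is a sizeable fraction of N. A small Python check with hypothetical numbers:

import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    se = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite-population correction
    return se

for N in (10_000, 1_000_000, 100_000_000):        # "pots" of very different size
    print(N, round(margin_of_error(1000, N), 3))  # about 0.03 in every case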
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
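A commonly used precision-based calculation for diagnostic accuracy studies sizes the study to estimate sensitivity within an absolute margin d and then inflates the total sample by the prevalence of the target condition. The Python sketch below uses hypothetical inputs and is not taken from the surveyed studies.

import math

def n_for_sensitivity(se, d, prevalence, z=1.96):
    n_cases = z**2 * se * (1 - se) / d**2     # diseased subjects needed
    return math.ceil(n_cases / prevalence)    # total subjects to screen

print(n_for_sensitivity(se=0.85, d=0.05, prevalence=0.505))  # about 388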
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) non-parametric bootstrapping and 2) multivariate Normal distributions, were applied in a simulation study and a case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.
Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather
2011-06-09
Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size for use in a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increase by, given hypothetical baseline rates of 90%, 70%, 50% and 30%, before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rates presented in the questionnaire, participants wanted the recruitment rate to increase by between 6.9% and 28.9% before they would consider using the intervention. This paper has shown that in situations where effect size estimations cannot be obtained from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results of the survey were successfully used in sample size calculations for a PhD research study protocol.
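Once an expected increase has been elicited, it can be fed into a standard two-proportion sample size calculation. The sketch below is a generic normal-approximation version in Python; the baseline rate and elicited increase are hypothetical and not the values used in the PhD protocol.

import math
from scipy import stats

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    num = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. baseline recruitment of 50%, elicited increase of 10 percentage points
print(n_per_group(0.50, 0.60))  # about 385 per group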
Melé, Enric; Nadal, Anna; Messeguer, Joaquima; Melé-Messeguer, Marina; Palaudelmàs, Montserrat; Peñas, Gisela; Piferrer, Xavier; Capellades, Gemma; Serra, Joan; Pla, Maria
2015-01-01
Genetically modified (GM) crops have been commercially grown for two decades. GM maize is one of the 3 species with the highest acreage and number of specific events. Many countries have established mandatory labeling of products containing GM material, with thresholds for adventitious presence, to support consumers’ freedom of choice. In consequence, coexistence systems need to be introduced to facilitate commercial cultivation of GM and non-GM crops in the same agricultural area. By modeling the distribution of adventitious GM cross-pollination within maize fields, we deduced a simple equation to estimate the overall GM content (%GM) of conventional fields, irrespective of their shape and size, and with no previous information on possible GM pollen donor fields. A sampling strategy was designed and experimentally validated in 19 agricultural fields. With 9 samples, %GM quantification requires just one analytical GM determination, while identification of the pollen source needs 9 additional analyses. A decision support tool is provided. PMID:26596213
Development of an X-ray fluorescence holographic measurement system for protein crystals
NASA Astrophysics Data System (ADS)
Sato-Tomita, Ayana; Shibayama, Naoya; Happo, Naohisa; Kimura, Koji; Okabe, Takahiro; Matsushita, Tomohiro; Park, Sam-Yong; Sasaki, Yuji C.; Hayashi, Kouichi
2016-06-01
Experimental procedure and setup for obtaining X-ray fluorescence holograms of crystalline metalloprotein samples are described. Human hemoglobin, an α₂β₂ tetrameric metalloprotein containing the Fe(II) heme active site in each chain, was chosen for this study because of its wealth of crystallographic data. A cold gas flow system was introduced to reduce X-ray radiation damage of protein crystals, which are usually fragile and susceptible to damage. A χ-stage was installed to rotate the sample while avoiding intersection between the X-ray beam and the sample loop or holder, which is needed for supporting fragile protein crystals. Huge hemoglobin crystals (with a maximum size of 8 × 6 × 3 mm³) were prepared and used to keep the footprint of the incident X-ray beam smaller than the sample size during the entire course of the measurement, with incident angles of 0°-70°. Under these experimental and data acquisition conditions, we achieved the first observation of an X-ray fluorescence hologram pattern from protein crystals with minimal radiation damage, opening up a promising new method for investigating the stereochemistry of the metal active sites in biomacromolecules.
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinic diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to the cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of meta-sample-based cluster method with regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
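The fixed-precision logic can be illustrated with Taylor's power law: fit s^2 = a * m^b to mean-variance pairs from sampled quadrats, then predict the number of quadrats needed for a chosen relative precision D via n = a * m^(b - 2) / D^2. The Python sketch below uses invented mean-variance pairs, not the study's data.

import numpy as np

means = np.array([0.01, 0.05, 0.2, 0.8, 2.0])        # ticks per 10 m^2 quadrat
variances = np.array([0.012, 0.07, 0.35, 1.9, 6.0])  # matching sample variances

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)  # log-log regression
a = np.exp(log_a)

def n_taylor(m, D=0.25):
    return a * m ** (b - 2) / D ** 2  # quadrats needed at mean density m

print(round(a, 3), round(b, 3), int(np.ceil(n_taylor(0.1))))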
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
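For the second rule, a quick numerical check shows what it implies when total cost is linear in the sample size: minimizing total cost divided by the square root of n puts the optimum at the fixed cost divided by the per-subject cost. The Python snippet below uses invented cost figures.

import numpy as np

c0, c1 = 50_000.0, 500.0                 # hypothetical fixed and per-subject costs
n = np.arange(1, 1000)
objective = (c0 + c1 * n) / np.sqrt(n)   # total cost / sqrt(sample size)
print(n[np.argmin(objective)], c0 / c1)  # both equal 100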
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
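The simulation logic behind such tools can be sketched crudely: draw negative binomial read counts for two groups at a given mean, dispersion and fold change, test each simulated dataset at a stringent per-gene threshold standing in for a false-discovery-rate-adjusted cutoff, and report the rejection rate. The Python sketch below is an illustration only, using a t-test on log counts rather than the package's actual negative binomial test, and all parameter values are invented.

import numpy as np
from scipy import stats

def nb_params(mean, dispersion):
    # numpy's negative_binomial uses (r, p); r = 1/dispersion, p = r/(r + mean)
    r = 1.0 / dispersion
    return r, r / (r + mean)

def power_one_gene(n, mean=100, dispersion=0.2, fold=2.0, alpha=1e-4, sims=2000):
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(sims):
        g1 = rng.negative_binomial(*nb_params(mean, dispersion), n)
        g2 = rng.negative_binomial(*nb_params(mean * fold, dispersion), n)
        hits += stats.ttest_ind(np.log1p(g1), np.log1p(g2)).pvalue < alpha
    return hits / sims

print(power_one_gene(n=5), power_one_gene(n=10))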
Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E
2014-07-01
Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but they frequently exceed the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during the period 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportions of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients to three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have changed dramatically in size and scientific scope within the last decade. Additional studies addressing the implications of this trend for research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sampling fluctuation and not for the fluctuation of the measurements taken from each participant. We investigated the impact of the number of participants and the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including both sampling and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Because of the form of the CI formulas, precision continued to increase beyond that point, but each additional GP yielded a smaller gain. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly according to the number of GPs included and the frequency of measurements per GP during the measured week. The best balance between the two dimensions will depend on circumstances such as the target group and the budget available.
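The two sources of fluctuation can be mimicked with a small simulation: a between-GP component and a within-GP (measurement) component, with the confidence interval half-width computed over repeated simulated studies. The variance components and weekly hours in the Python sketch below are invented, so the numbers only illustrate the qualitative trade-off.

import numpy as np

def ci_halfwidth(n_gps, n_measurements, sd_between=8.0, sd_within=15.0,
                 true_mean=45.0, sims=2000, seed=2):
    rng = np.random.default_rng(seed)
    means = np.empty(sims)
    for s in range(sims):
        gp_means = true_mean + rng.normal(0, sd_between, n_gps)
        obs = gp_means[:, None] + rng.normal(0, sd_within, (n_gps, n_measurements))
        means[s] = obs.mean()
    return 1.96 * means.std()

for n_gps in (50, 100, 300):                     # 56 = one SMS per 3-h slot per week
    print(n_gps, round(ci_halfwidth(n_gps, n_measurements=56), 2))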
A lower bound on the number of cosmic ray events required to measure source catalogue correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolci, Marco; Romero-Wolf, Andrew; Wissel, Stephanie, E-mail: marco.dolci@polito.it, E-mail: Andrew.Romero-Wolf@jpl.nasa.gov, E-mail: swissel@calpoly.edu
2016-10-01
Recent analyses of cosmic ray arrival directions have resulted in evidence for a positive correlation with active galactic nuclei positions that has weak significance against an isotropic source distribution. In this paper, we explore the sample size needed to measure a highly statistically significant correlation to a parent source catalogue. We compare several scenarios for the directional scattering of ultra-high energy cosmic rays given our current knowledge of the galactic and intergalactic magnetic fields. We find significant correlations are possible for a sample of >1000 cosmic ray protons with energies above 60 EeV.
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
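The scale of the problem for between-subject correlations can be seen with the standard Fisher z power approximation, power = Phi(sqrt(n - 3) * atanh(r) - z_crit); the r value below is illustrative, not an estimate from the paper.

import numpy as np
from scipy import stats

def correlation_power(r, n, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(np.sqrt(n - 3) * np.arctanh(r) - z_crit)

for n in (25, 100, 400):
    print(n, round(correlation_power(0.2, n), 3))  # roughly 0.16, 0.52, 0.98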
Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field
NASA Astrophysics Data System (ADS)
Cameron, E.; Driver, S. P.
2009-01-01
Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes, namely “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0, where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible), we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec⁻² and 1.65 ± 0.22 mag arcsec⁻² for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
NASA Astrophysics Data System (ADS)
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we obtain the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to obtain the properties from the lowest scale up to the macroscale, step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
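The hand-off between scale levels can be sketched as fitting a Weibull distribution to the strengths produced by the fine-scale simulations and then sampling element properties for the next, coarser level from that fit. The Python snippet below uses synthetic strengths in place of simulation output, and the modulus and scale values are arbitrary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
strengths = rng.weibull(8.0, 200) * 320.0       # MPa, stand-in for fine-scale results

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
print(round(shape, 2), round(scale, 1))          # fitted Weibull modulus and scale

# Draw strengths for the elements of the next, coarser scale level
next_level = stats.weibull_min.rvs(shape, scale=scale, size=50, random_state=4)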
The Consideration of Future Consequences and Health Behaviour: A Meta-Analysis.
Murphy, Lisa; Dockray, Samantha
2018-06-14
The aim of this meta-analysis was to quantify the direction and strength of associations between the Consideration of Future Consequences (CFC) scale and intended and actual engagement in three categories of health-related behaviour: health risk, health promotive, and illness preventative/detective behaviour. A systematic literature search was conducted to identify studies that measured CFC and health behaviour. In total, sixty-four effect sizes were extracted from 53 independent samples. Effect sizes were synthesised using a random-effects model. Aggregate effect sizes for all behaviour categories were significant, albeit small in magnitude. There were no significant moderating effects of the length of CFC scale (long vs. short), population type (college students vs. non-college students), mean age, or sex proportion of study samples. CFC reliability and study quality score significantly moderated the overall association between CFC and health risk behaviour only. The magnitude of effect sizes is comparable to associations between health behaviour and other individual difference variables, such as the Big Five personality traits. The findings indicate that CFC is an important construct to consider in research on engagement in health risk behaviour in particular. Future research is needed to examine the optimal approach by which to apply the findings to behavioural interventions.
Comparative analyses of basal rate of metabolism in mammals: data selection does matter.
Genoud, Michel; Isler, Karin; Martin, Robert D
2018-02-01
Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M
2012-04-01
Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. Samples sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power.
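The idea of averaging power over a prior can be illustrated with a generic event-driven approximation (power as a function of the number of events D and hazard ratio HR, z = |log HR| * sqrt(D) / 2), integrated over a prior on HR by Monte Carlo. The Python sketch below is not the VIEW trial's calculation; the hazard ratio, its prior and the alpha level are invented.

import numpy as np
from scipy import stats

def power_given_hr(hr, events, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(np.abs(np.log(hr)) * np.sqrt(events) / 2 - z_crit)

def average_power(events, prior_mean=0.78, prior_sd=0.05, draws=20000):
    rng = np.random.default_rng(6)
    return power_given_hr(rng.normal(prior_mean, prior_sd, draws), events).mean()

for events in (539, 617, 722, 800):
    print(events, round(power_given_hr(0.78, events), 3), round(average_power(events), 3))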
Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.
2014-01-01
Background Although observational evidence has suggested that the measurement of CAC may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether coronary artery calcium (CAC) testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events) which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events) and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Samples sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates are dependent on form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power. PMID:22333998
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
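The elements the review looks for are exactly the inputs of a standard calculation: the treatment effect to be detected, the variability of the outcome, the significance level and the target power. A minimal two-sample, normal-approximation version in Python (the pain-score numbers are illustrative only):

import math
from scipy import stats

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

print(n_per_arm(delta=1.0, sd=2.0))  # e.g. a 1-point difference with SD 2 -> 63 per arm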
Surgeons OverSeas Assessment of Surgical Need (SOSAS) Uganda: Update for Household Survey.
Fuller, Anthony T; Butler, Elissa K; Tran, Tu M; Makumbi, Fredrick; Luboga, Samuel; Muhumza, Christine; Chipman, Jeffrey G; Groen, Reinou S; Gupta, Shailvi; Kushner, Adam L; Galukande, Moses; Haglund, Michael M
2015-12-01
The first step in improving surgical care delivery in low- and middle-income countries (LMICs) is quantifying surgical need. The Surgeons OverSeas Assessment of Surgical Need (SOSAS) is a validated household survey that has been previously implemented in three LMICs with great success. We implemented the SOSAS survey in Uganda, a medium-sized country with comparatively more language and ethnic group diversity. The investigators partnered with Performance Monitoring and Accountability 2020 (PMA2020) Uganda to access a data collection platform sampling 2520 households in 105 randomly selected enumeration areas. Owing to geographic size considerations and language diversity, SOSAS's methodology was updated in three significant dimensions: (1) technology, (2) staff management, and (3) questionnaire adaptations. The SOSAS survey was successfully implemented with non-medically trained but field-proven research assistants. We sampled 2315 of 2402 eligible households (response rate 96.4%) and 4248 of 4374 eligible individual respondents (response rate 97.1%). The female-to-male ratio was 51.1% to 48.9%. The total survey cost was USD 73,145 and data collection occurred in 14 days. SOSAS Uganda has demonstrated that non-medically trained, but university-educated, experienced researchers supervised by academic surgeons can successfully perform accurate data collection for SOSAS. SOSAS can be successfully implemented within larger and more diverse LMICs using existing national survey platforms, and SOSAS Uganda provides insights on how SOSAS can be executed specifically within other PMA2020 program countries.
Ultrasonic characterization of single drops of liquids
Sinha, Dipen N.
1998-01-01
The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for a high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities.
Hydrogen calibration of GD-spectrometer using Zr-1Nb alloy
NASA Astrophysics Data System (ADS)
Mikhaylov, Andrey A.; Priamushko, Tatiana S.; Babikhina, Maria N.; Kudiiarov, Victor N.; Heller, Rene; Laptev, Roman S.; Lider, Andrey M.
2018-02-01
To study the hydrogen distribution in Zr-1Nb alloy (Э110 alloy), GD-OES was applied in this work. Quantitative analysis requires standard samples containing hydrogen; however, standard samples with high hydrogen concentrations in the zirconium alloy that meet the requirements on shape and size are not available. In this work, a method for producing Zr + H calibration samples was developed for the first time. An automated Complex Gas Reaction Controller was used for sample hydrogenation. Diffusion equations were used to calculate the parameters of the post-hydrogenation incubation of the samples in an inert gas atmosphere. Absolute hydrogen concentrations in the samples were determined by melting in an inert gas atmosphere using a RHEN602 analyzer (LECO Company). Hydrogen distribution was studied using nuclear reaction analysis (HZDR, Dresden, Germany). RF GD-OES was used for calibration. The depth of the craters was measured with a Hommel-Etamic profilometer (Jenoptik, Germany).
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses following Bernoulli and Poisson distributions, showing that the asymptotic approximations can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
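The O(N^(1/2)) scaling can be illustrated with a toy decision-theoretic model, not the authors' exponential-family utility framework: a two-arm trial with n patients per arm is run, the remaining N - 2n patients then all receive whichever arm had the higher sample mean, and the trial size that maximizes the expected total benefit grows roughly like the square root of N. The prior and outcome standard deviations below (tau, sigma) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def expected_utility(n, N, sigma=1.0, tau=0.5):
    """Toy expected utility of a two-arm trial with n patients per arm.

    Treatment effect theta ~ N(0, tau^2); individual outcomes have SD sigma.
    After the trial the remaining N - 2n patients all receive the arm with
    the larger sample mean.  The trial arms contribute n*theta + n*0, which
    has zero prior mean, so only the future-patient term is kept here.
    """
    # P(choose treatment | theta) = Phi(theta / sqrt(2 sigma^2 / n))
    scale = np.sqrt(n / 2.0) / sigma

    def integrand(theta):
        return theta * norm.cdf(theta * scale) * norm.pdf(theta, 0.0, tau)

    gain_per_future_patient, _ = quad(integrand, -8 * tau, 8 * tau)
    return (N - 2 * n) * gain_per_future_patient

for N in (1_000, 10_000, 100_000):
    candidates = range(2, N // 2, max(1, N // 2000))
    n_opt = max(candidates, key=lambda n: expected_utility(n, N))
    print(f"N = {N:>6}: optimal n per arm ~ {n_opt:>4}, "
          f"n_opt / sqrt(N) = {n_opt / np.sqrt(N):.2f}")
```

In this toy setup the printed ratio n_opt / sqrt(N) stays roughly constant as N grows, mirroring the square-root behaviour derived in the paper.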
Kenyon, Fiona; Rinaldi, Laura; McBean, Dave; Pepe, Paola; Bosco, Antonio; Melville, Lynsey; Devin, Leigh; Mitchell, Gillian; Ianniello, Davide; Charlier, Johannes; Vercruysse, Jozef; Cringoli, Giuseppe; Levecke, Bruno
2016-07-30
In small ruminants, faecal egg counts (FECs) and the reduction in FECs (FECR) are the most common methods for assessing the intensity of gastrointestinal (GI) nematode infections and anthelmintic drug efficacy, respectively. The main limitation of these methods is the time and cost of conducting FECs on a representative number of individual animals. A cost-saving alternative would be to examine pooled faecal samples; however, little is known regarding whether pooling can give representative results. In the present study, we compared the FECR results obtained by an individual and a pooled examination strategy across different pool sizes and analytical sensitivities of the FEC techniques. A survey was conducted on 5 sheep farms in Scotland, where anthelmintic resistance is known to be widespread. Lambs were treated with fenbendazole (4 groups), levamisole (3 groups), ivermectin (3 groups) or moxidectin (1 group). For each group, individual faecal samples were collected from 20 animals at baseline (D0) and 14 days after (D14) anthelmintic administration. Faecal samples were analyzed as pools of 3-5, 6-10, and 14-20 individual samples. Both individual and pooled samples were screened for GI strongyle and Nematodirus eggs using two FEC techniques with three different levels of analytical sensitivity, including Mini-FLOTAC (analytical sensitivity of 10 eggs per gram of faeces (EPG)) and McMaster (analytical sensitivity of 15 or 50 EPG). For both Mini-FLOTAC and McMaster (analytical sensitivity of 15 EPG), there was perfect agreement in classifying the efficacy of the anthelmintic as 'normal', 'doubtful' or 'reduced' regardless of pool size. When using the McMaster method (analytical sensitivity of 50 EPG), anthelmintic efficacy was often falsely classified as 'normal' or assessment was not possible due to zero FECs at D0, and this became more pronounced as the pool size increased. In conclusion, pooling ovine faecal samples holds promise as a cost-saving and efficient strategy for assessing GI nematode FECR. However, for the assessment of FECR one will need to consider the baseline FEC, pool size and analytical sensitivity of the method. Copyright © 2016. Published by Elsevier B.V.
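A minimal sketch of the FECR arithmetic underlying such comparisons is given below; the efficacy thresholds (WAAVP-style cut-offs) and the simulated egg counts are assumptions for illustration, not the exact criteria or data of the study.

```python
import numpy as np

def fecr(pre_epg, post_epg):
    """Percentage reduction in group mean faecal egg counts (EPG)."""
    pre_epg, post_epg = np.asarray(pre_epg, float), np.asarray(post_epg, float)
    return 100.0 * (1.0 - post_epg.mean() / pre_epg.mean())

def classify(reduction):
    """Illustrative efficacy classes (assumed thresholds)."""
    if reduction >= 95:
        return "normal"
    if reduction >= 90:
        return "doubtful"
    return "reduced"

def pool_counts(individual_epg, pool_size):
    """Mean EPG of pools formed from consecutive individual samples,
    mimicking the averaging that physical pooling performs."""
    epg = np.asarray(individual_epg, float)
    return np.array([epg[i:i + pool_size].mean()
                     for i in range(0, len(epg), pool_size)])

# Hypothetical counts for 20 lambs at day 0 and day 14 after treatment.
rng = np.random.default_rng(1)
d0 = rng.negative_binomial(2, 0.01, size=20) * 10.0   # overdispersed EPG
d14 = d0 * rng.uniform(0.0, 0.15, size=20)            # imperfect drug efficacy

# With equal-sized pools and no counting error, the pooled FECR equals the
# individual FECR, which is why pooling can be informative in principle.
for k in (1, 5, 10, 20):   # 1 = individual strategy
    r = fecr(pool_counts(d0, k), pool_counts(d14, k))
    print(f"pool size {k:>2}: FECR = {r:5.1f}% -> {classify(r)}")
```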
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for such screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
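Tables of this kind are commonly built from a precision-based (Buderer-type) formula in which the number of diseased subjects needed for a target sensitivity is inflated by the expected prevalence, and the number of non-diseased subjects for specificity by one minus the prevalence. The sketch below implements that generic formula and is not necessarily the exact PASS formulation behind the published tables.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    """Total sample size so that sensitivity se is estimated within +/- d."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * se * (1 - se) / d**2
    return ceil(n_diseased / prevalence)

def n_for_specificity(sp, d, prevalence, alpha=0.05):
    """Total sample size so that specificity sp is estimated within +/- d."""
    z = norm.ppf(1 - alpha / 2)
    n_healthy = z**2 * sp * (1 - sp) / d**2
    return ceil(n_healthy / (1 - prevalence))

# Example: expected sensitivity 0.90, specificity 0.85, precision +/- 0.05,
# disease prevalence 0.20 in the study population (illustrative values).
print(n_for_sensitivity(0.90, 0.05, 0.20))   # -> 692
print(n_for_specificity(0.85, 0.05, 0.20))   # -> 245
```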
A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, Andrew Theodore; Brown, Forrest B.; Ji, Wei
2014-09-02
At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large, on the order of megabytes for a single temperature and material. In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperatures has been eliminated. This is advantageous for multiphysics simulations which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite for two benchmark problems at ten temperatures: 1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, 2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm for T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option to eliminate both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.
Fully automatic characterization and data collection from crystals of biological macromolecules.
Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W
2015-08-01
Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
Isner-Horobeti, M E; Charton, A; Daussin, F; Geny, B; Dufour, S P; Richard, R
2014-05-01
Microbiopsies are increasingly used as an alternative to the standard Bergström technique for skeletal muscle sampling. The potential impact of these two different procedures on the mitochondrial respiration rate is unknown. The objective of this work was to compare the microbiopsy and Bergström procedures with respect to mitochondrial respiration in skeletal muscle. 52 vastus lateralis muscle samples were obtained from 13 anesthetized pigs, either with a Bergström needle (6 gauge, G) or with microbiopsy needles (12, 14, 18 G). Maximal mitochondrial respiration (V GM-ADP) was assessed using an oxygraphic method on permeabilized fibers. The weight of the muscle samples and V GM-ADP decreased with increasing needle gauge. A positive nonlinear relationship was observed between the weight of the muscle sample and the level of maximal mitochondrial respiration (r = 0.99, p < 0.05) and between needle size and maximal mitochondrial respiration (r = 0.99, p < 0.05). Microbiopsies give a lower muscle sample weight and a lower maximal rate of mitochondrial respiration compared to the standard Bergström needle. Therefore, the higher the gauge (i.e. the smaller the size) of the microbiopsy needle, the lower the maximal rate of respiration. Microbiopsies of skeletal muscle underestimate the maximal mitochondrial respiration rate, and this finding needs to be highlighted for adequate interpretation and comparison with literature data.
Scale and Sampling Effects on Floristic Quality
2016-01-01
Floristic Quality Assessment (FQA) is increasingly influential for making land management decisions, for directing conservation policy, and for research. But, the basic ecological properties and limitations of its metrics are ill defined and not well understood–especially those related to sample methods and scale. Nested plot data from a remnant tallgrass prairie sampled annually over a 12-year period, were used to investigate FQA properties associated with species detection rates, species misidentification rates, sample year, and sample grain/area. Plot size had no apparent effect on Mean C (an area’s average Floristic Quality level), nor did species detection levels above 65% detection. Simulated species misidentifications only affected Mean C values at greater than 10% in large plots, when the replaced species were randomly drawn from the broader county-wide species pool. Finally, FQA values were stable over the 12-year study, meaning that there was no evidence that the metrics exhibit year effects. The FQA metric Mean C is demonstrated to be robust to varied sample methodologies related to sample intensity (plot size, species detection rate), as well as sample year. These results will make FQA measures even more appealing for informing land-use decisions, policy, and research for two reasons: 1) The sampling effort needed to generate accurate and consistent site assessments with FQA measures is shown to be far lower than what has previously been assumed, and 2) the stable properties and consistent performance of metrics with respect to sample methods will allow for a remarkable level of comparability of FQA values from different sites and datasets compared to other commonly used ecological metrics. PMID:27489959
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
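For context, a commonly used RDS weighting scheme is the RDS-II (Volz-Heckathorn) estimator, which weights each respondent by the inverse of their reported network size. The sketch below shows that calculation on hypothetical data; the survey may have used a different RDS inference method, and all values are illustrative.

```python
import numpy as np

def rds_ii_estimate(outcome, degree):
    """RDS-II prevalence estimate: inverse-degree-weighted mean of a 0/1 outcome."""
    outcome = np.asarray(outcome, float)
    weights = 1.0 / np.asarray(degree, float)   # reported network sizes
    return np.sum(weights * outcome) / np.sum(weights)

# Hypothetical respondents: a 0/1 health indicator and self-reported network size.
rng = np.random.default_rng(0)
degree = rng.integers(1, 50, size=511)      # network sizes
outcome = rng.binomial(1, 0.3, size=511)    # e.g. reports a given health behaviour

print("unweighted estimate:", outcome.mean())
print("RDS-II estimate:    ", rds_ii_estimate(outcome, degree))
```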
Algorithms that Defy the Gravity of Learning Curve
2017-04-28
Report excerpt (fragments): three nearest neighbour-based anomaly detectors are considered, i.e., an ensemble of nearest neighbours and a recent nearest neighbour-based ensemble method called iNNE, applied to streams. The excerpt notes that the change in sample size does not alter the geometrical data characteristics discussed, and includes section headings on experimental methodology and on a comparison with conventional ensemble methods in light of the theoretical results.
Daniel W. Gilmore; Douglas N. Kastendick; John C. Zasada; Paula J. Anderson
2003-01-01
Fuel loadings need to be considered in two ways: 1) the total fuel loadings of various size classes and 2) their distribution across a site. Fuel treatments in this study affected both. We conclude that 1) mechanical treatments of machine piling and salvage logging reduced fine and heavy fuel loadings and 2) prescribed fire was successful in reducing fine fuel...
USDA-ARS?s Scientific Manuscript database
This report is part of a project to characterize cotton gin emissions from the standpoint of stack sampling. In 2006, EPA finalized and published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created an urgent need to collect additi...
Prototypes of Cognitive Measures for Air Force Officers: Test Development and Item Banking
1990-05-01
Report excerpt (fragments): test relationships may be in terms of shape, size, dimensionality, area of embeddedness, rotation, and blank/shaded features; test directions were read from the booklet, and demographics and test responses were recorded on a machine-scannable answer sheet; demographic information would be needed to determine whether sample characteristics accounted for the differences in mean scores for NC, FA, and WD.
Exploring how to increase response rates to surveys of older people.
Palonen, Mira; Kaunonen, Marja; Åstedt-Kurki, Päivi
2016-05-01
To address the special considerations that need to be taken into account when collecting data from older people in healthcare research. An objective of all research studies is to ensure there is an adequate sample size. The final sample size will be influenced by methods of recruitment and data collection, among other factors. There are some special considerations that need to be addressed when collecting data among older people. Quantitative surveys of people aged 60 or over in 2009-2014 were analysed using statistical methods. A quantitative study of patients aged 75 or over in an emergency department was used as an example. A methodological approach to analysing quantitative studies concerned with older people. The best way to ensure high response rates in surveys involving people aged 60 or over is to collect data in the presence of the researcher; response rates are lowest in posted surveys and settings where the researcher is not present when data are collected. Response rates do not seem to vary according to the database from which information about the study participants is obtained or according to who is responsible for recruitment to the survey. Implications for research/practice: To conduct coherent studies with older people, the data collection process should be carefully considered.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
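One plausible reading of an algebraic sample-size adjustment for chi-square item-fit statistics is to rescale the statistic to a nominal sample size before computing its p-value, combined with a Bonferroni correction across items. The sketch below implements that assumed form only; RUMM's internal implementation may differ.

```python
from scipy.stats import chi2

def adjusted_item_fit(chisq, df, n_actual, n_adjusted, n_items, alpha=0.05):
    """Rescale an item chi-square fit statistic to a nominal sample size and
    test it against a Bonferroni-corrected alpha (assumed form of adjustment)."""
    chisq_adj = chisq * n_adjusted / n_actual   # chi-square grows roughly with n
    p_value = chi2.sf(chisq_adj, df)
    bonferroni_alpha = alpha / n_items
    return chisq_adj, p_value, p_value < bonferroni_alpha

# A hypothetical item from a 25-item scale analysed with N = 2500,
# with the fit statistic algebraically adjusted down to N = 500.
chisq_adj, p, misfit = adjusted_item_fit(chisq=35.0, df=9,
                                         n_actual=2500, n_adjusted=500,
                                         n_items=25)
print(f"adjusted chi-square = {chisq_adj:.1f}, p = {p:.3f}, flagged misfit: {misfit}")
```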
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is achieved by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
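The kind of simulation described can be reproduced in outline as follows; the normal data model, the 1-SD effect size and the number of simulations are illustrative assumptions rather than the authors' exact design.

```python
import numpy as np
from scipy.stats import ttest_ind

def error_rates(n, effect, n_sim=5_000, alpha=0.05, seed=0):
    """Monte Carlo Type I and Type II error rates of the two-sample t-test."""
    rng = np.random.default_rng(seed)
    null_rejections = 0
    alt_rejections = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n)
        same = rng.normal(0.0, 1.0, n)          # no true effect
        shifted = rng.normal(effect, 1.0, n)    # true effect present
        null_rejections += ttest_ind(control, same).pvalue < alpha
        alt_rejections += ttest_ind(control, shifted).pvalue < alpha
    return null_rejections / n_sim, 1.0 - alt_rejections / n_sim

for n in (3, 5, 6, 9):
    t1, t2 = error_rates(n, effect=1.0)   # assumed effect of 1 SD
    print(f"n = {n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```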
Buehler, James W; Bernet, Patrick M; Ogden, Lydia L
2012-01-01
Funding formulas are commonly used by federal agencies to allocate program funds to states. As one approach to evaluating differences in allocations resulting from alternative formula calculations, we propose the use of a measure derived from the Gini index to summarize differences in allocations relative to 2 referent allocations: one based on equal per-capita funding across states and another based on equal funding per person living in poverty, which we define as the "proportionality of allocation" (PA). These referents reflect underlying values that often shape formula-based allocations for public health programs. The size of state populations serves as a general proxy for the amount of funding needed to support programs across states. While the size of state populations living in poverty is correlated with overall population size, allocations based on states' shares of the national population living in poverty reflect variations in funding need shaped by the association between poverty and multiple adverse health outcomes. The PA measure is a summary of the degree of dispersion in state-specific allocations relative to the referent allocations and provides a quick assessment of the impact of selecting alternative funding formula designs. We illustrate the PA values by adjusting a sample allocation, using various measures of the salary costs and in-state wealth, which might modulate states' needs for federal funding.
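The exact formula for the proportionality of allocation is not reproduced here, but the general idea of summarizing dispersion of allocations relative to a per-capita referent can be sketched with a population-weighted Gini coefficient; this is an approximation of the concept, not the authors' PA measure, and the state figures are made up.

```python
import numpy as np

def weighted_gini(x, w):
    """Weighted Gini coefficient via the pairwise mean-difference formula."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mean_x = np.average(x, weights=w)
    diffs = np.abs(x[:, None] - x[None, :])
    return np.sum(np.outer(w, w) * diffs) / (2 * w.sum() ** 2 * mean_x)

# Hypothetical: allocations to 5 states vs. an equal-per-capita referent.
population = np.array([5.0, 10.0, 2.0, 8.0, 20.0])    # millions of residents
allocation = np.array([6.0, 9.0, 4.0, 7.0, 19.0])     # millions of dollars
per_capita = allocation / population
referent = allocation.sum() / population.sum()        # equal per-capita rate

# Dispersion of per-capita funding around the referent, weighted by population.
print("Gini of per-capita allocations:", round(weighted_gini(per_capita, population), 3))
print("referent per-capita rate:      ", round(referent, 3))
```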
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates rather than for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to eventual problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation unravelled the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculator, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out for varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
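The core step of the first calculator, converting a confidence interval for seroprevalence into one for the seroconversion rate when the seroreversion rate is known, can be sketched under a reverse catalytic model evaluated at a single representative age. The published method works with the full age distribution, so this is a simplified, assumption-laden version with illustrative numbers.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def seroprevalence(scr, srr, age):
    """Reverse catalytic model: expected seroprevalence at a given age."""
    k = scr + srr
    return scr / k * (1.0 - np.exp(-k * age))

def scr_from_sp(sp, srr, age):
    """Invert the model for the seroconversion rate, given a known SRR."""
    return brentq(lambda scr: seroprevalence(scr, srr, age) - sp, 1e-8, 10.0)

def scr_confidence_interval(positives, n, srr, age, alpha=0.05):
    """Wald CI for seroprevalence transformed to the SCR scale."""
    p = positives / n
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
    lo, hi = max(p - half_width, 1e-6), min(p + half_width, 1 - 1e-6)
    return scr_from_sp(lo, srr, age), scr_from_sp(p, srr, age), scr_from_sp(hi, srr, age)

# Hypothetical survey: 120 of 500 participants seropositive, SRR = 0.01/year,
# representative age of 20 years (all values illustrative).
print(scr_confidence_interval(120, 500, srr=0.01, age=20.0))
```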
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
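For reference, the simple (expansion) and ratio estimators compared in such evaluations take the following textbook form, here with made-up unit counts and areas rather than the study's data.

```python
import numpy as np

def expansion_estimate(counts, n_units_total):
    """Simple estimator: mean count per sampled unit times number of units."""
    return n_units_total * np.mean(counts)

def ratio_estimate(counts, areas, total_area):
    """Ratio estimator using unit area as the auxiliary variable."""
    return total_area * np.sum(counts) / np.sum(areas)

# Hypothetical sample of 8 units drawn from 48 units covering 960 km^2.
counts = np.array([0, 12, 3, 0, 45, 7, 0, 22])       # animals counted per unit
areas = np.array([18, 22, 20, 19, 25, 21, 17, 23])   # km^2 per sampled unit

print("simple estimate:", expansion_estimate(counts, n_units_total=48))
print("ratio estimate: ", ratio_estimate(counts, areas, total_area=960.0))
```

With clumped counts like these, the auxiliary variable (area) explains little of the variation, which is consistent with the study's finding that the ratio estimator did not improve precision.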
Sexual dimorphism in human cranial trait scores: effects of population, age, and body size.
Garvin, Heather M; Sholts, Sabrina B; Mosca, Laurel A
2014-06-01
Sex estimation from the skull is commonly performed by physical and forensic anthropologists using a five-trait scoring system developed by Walker. Despite the popularity of this method, validation studies evaluating its accuracy across a variety of samples are lacking. Furthermore, it remains unclear what other intrinsic or extrinsic variables are related to the expression of these traits. In this study, cranial trait scores and postcranial measurements were collected from four diverse population groups (U.S. Whites, U.S. Blacks, medieval Nubians, and Arikara Native Americans) following Walker's protocols (total n = 499). Univariate and multivariate analyses were utilized to evaluate the accuracy of these traits in sex estimation, and to test for the effects of population, age, and body size on trait expressions. Results revealed significant effects of population on all trait scores. Sample-specific correct sex classification rates ranged from 74% to 94%, with an overall accuracy of 85% for the pooled sample. Classification performance varied among the traits (best for glabella and mastoid scores and worst for nuchal scores). Furthermore, correlations between traits were weak or nonsignificant, suggesting that different factors may influence individual traits. Some traits displayed correlations with age and/or postcranial size that were significant but weak, and within-population analyses did not reveal any consistent relationships between these traits across all groups. These results indicate that neither age nor body size plays a large role in trait expression, and thus does not need to be incorporated into sex estimation methods. Copyright © 2014 Wiley Periodicals, Inc.
Carey, Michael P.; Mather, M. E.
2009-01-01
Variation in fish abundance across systems presents a challenge to our understanding of fish populations because it limits our ability to predict and transfer basic ecological principles to applied problems. Yellow perch (Perca flavescens) is an ideal species for exploring environmental and biotic correlates across systems because it is widely distributed and physiologically tolerant. In 16 small, adjacent systems that span a wide range of environmental and biotic conditions, yellow perch were sampled with a standard suite of gear. Water quality, morphometry, vegetation, invertebrates and fish communities were concurrently measured. Multimodel inference was used to prioritise regressors for the entire yellow perch sample and three size groups (35-80, 81-180, ≥181 mm TL). Across systems, pH and fish richness were identified as the key drivers of yellow perch abundance. At very low pH (<4.0), few fish species and few yellow perch individuals were found. At ponds with moderately low pH (4.0-4.8), numbers of yellow perch increased. Ponds with high pH (>4.8) had many other species and few yellow perch. Similar patterns for pH and fish community were observed for the two largest size classes. Negative interactions were observed between the medium- and large-sized yellow perch and between the largest and smallest yellow perch, although interspecific interactions were weaker than expected. This examination of variability for an indicator species and its component size classes provides ecological understanding that can help frame the larger-scale sampling programs needed for the conservation of freshwater fish.
Rajasekaran, S; Kanna, Rishi Mugesh; Reddy, Ranjani Raja; Natesan, Senthil; Raveendran, Muthuraja; Cheung, Kenneth M C; Chan, Danny; Kao, Patrick Y P; Yee, Anita; Shetty, Ajoy Prasad
2016-11-01
Prospective genetic association study. The aim of this study was to document the variations in genetic associations when different magnetic resonance imaging (MRI) phenotypes, age stratification, cohort size, and sequence of cohort inclusion are varied in the same study population. Genetic associations with disc degeneration have shown high inconsistency, generally attributed to hereditary factors and ethnic variations. However, the effects of different phenotypes, size of the study population, age of the cohort, etc. have not been documented clearly. Seventy-one single-nucleotide polymorphisms (SNPs) of 41 candidate genes were correlated to six MRI markers of disc degeneration (annular tears, Pfirrmann grading, Schmorl nodes, Modic changes, Total Endplate Damage score, and disc bulge) in 809 patients with back pain and/or sciatica. In the same study group, the correlations were then retested for different age groups, different sample size and sequence of subject inclusion (the first 404 and the second 405), and the differences documented. The mean age of the population (M: 455, F: 354) was 36.7 ± 10.8 years. Different genetic associations were found with different phenotypes: disc bulge with three SNPs of CILP; annular tears with rs2249350 of ADAMTS5 and rs11247361 of IGF1R; Modic changes with VDR and MMP20; Pfirrmann grading with three SNPs of MMP20; and Schmorl nodes with SNPs of CALM1 and FN1, with none for the Total Endplate Score. Subgroup analysis based on three age groups and dividing the total population into two groups also completely changed the associations for all six radiographic parameters. In the same study population, SNP associations completely change with different phenotypes. Variations in age, inclusion sequence, and sample size resulted in changes in genetic associations. Our study questions the validity of previous studies and necessitates the need for standardizing the description of disc degeneration, phenotype selection, study sample size, age, and other variables in future studies. Level of evidence: 4.
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
Evidence of a chimpanzee-sized ancestor of humans but a gibbon-sized ancestor of apes.
Grabowski, Mark; Jungers, William L
2017-10-12
Body mass directly affects how an animal relates to its environment and has a wide range of biological implications. However, little is known about the mass of the last common ancestor (LCA) of humans and chimpanzees, hominids (great apes and humans), or hominoids (all apes and humans), which is needed to evaluate numerous paleobiological hypotheses at and prior to the root of our lineage. Here we use phylogenetic comparative methods and data from primates including humans, fossil hominins, and a wide sample of fossil primates including Miocene apes from Africa, Europe, and Asia to test alternative hypotheses of body mass evolution. Our results suggest, contrary to previous suggestions, that the LCA of all hominoids lived in an environment that favored a gibbon-like size, but a series of selective regime shifts, possibly due to resource availability, led to a decrease and then increase in body mass in early hominins from a chimpanzee-sized LCA. The pattern of body size evolution in hominids can provide insight into historical human ecology. Here, Grabowski and Jungers use comparative phylogenetic analysis to reconstruct the likely size of the ancestor of humans and chimpanzees and the evolutionary history of selection on body size in primates.
Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen
2016-01-01
Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies and, therefore, sensitive surrogate markers are needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard. Sensitivity of the various parameters that can be derived from DTI is however unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We investigated 99 patients with symptomatic SVD, defined as clinical lacunar syndrome with MRI confirmation of a corresponding infarct as well as confluent white matter hyperintensities over a 3 year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over a three-year follow-up period we observed a decline in fractional anisotropy and increase in diffusivity in white matter tissue and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker for SVD progression as it had the smallest sample size estimate. This suggests disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI's potential as surrogate marker for SVD.
Berk, Lotte; van Boxtel, Martin; van Os, Jim
2017-11-01
An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs), such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with low risk of bias, a large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to a limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigate MBIs as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.
Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.
Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M
2017-09-01
Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
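A simplified, univariate version of such a calculation, treating each participant's estimated annual rate of change as a single normally distributed endpoint, is sketched below; the paper's calculation is multivariate and model-based, so the effect size and variability used here are purely illustrative assumptions.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd_slope, alpha=0.05, power=0.80):
    """Two-arm sample size to detect a difference 'delta' in mean annual rate
    of change, with between-subject SD 'sd_slope' of the estimated slopes."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd_slope ** 2 / delta ** 2)

# Illustrative values: a treatment that slows decline by 0.3 points/year on a
# cognitive score whose annual change has a between-subject SD of 1.5 points/year.
print(n_per_arm(delta=0.3, sd_slope=1.5))   # -> 393 per arm
```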
Which is the Ideal Breast Size?: Some Social Clues for Plastic Surgeons.
Raposio, Edoardo; Belgrano, Valerio; Santi, PierLuigi; Chiorri, Carlo
2016-03-01
To provide plastic surgeons with more detailed information as to factors affecting the perception of female attractiveness, the present study was aimed to investigate whether the interaction effect of breast and body size on ratings of female attractiveness is moderated by sociodemographic variables and whether ratings of shapeliness diverge from those of attractiveness. A community sample of 958 Italian participants rated the attractiveness and the shapeliness of 15 stimuli (5 breast sizes × 3 body sizes) in which frontal, 3/4, and profile views of the head and torso of a faceless woman were jointly shown. Bigger breast sizes obtained the highest attractiveness ratings, but the breast-by-body size interaction was also significant. Evidence was found of a moderator role of sex, marital status, and age. When the effects of breast and body size and their interaction had been ruled out, sex differences were at best very slight and limited to very specific combinations of breast and body sizes. Ratings of attractiveness and shapeliness were highly correlated and did not significantly differ. Results suggest that to address women's psychological needs, concerns, and expectations about their appearance, plastic surgeons should not simply focus on breast size but should carefully consider the 'big picture': the body in its entirety.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. The objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
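A minimal numerical sketch of the underlying logic: the size estimate is N = M / P, and its random error can be approximated with the delta method using a design-effect-inflated variance for P from the respondent-driven sampling survey. The variance formula and design effect below are assumptions for illustration and may differ from the published guidance.

```python
from math import sqrt, ceil
from scipy.stats import norm

def multiplier_estimate(M, p_hat, n, design_effect=2.0, alpha=0.05):
    """Multiplier-method size estimate with a delta-method confidence interval."""
    var_p = design_effect * p_hat * (1 - p_hat) / n
    N_hat = M / p_hat
    se_N = M * sqrt(var_p) / p_hat**2        # delta method for M / P
    z = norm.ppf(1 - alpha / 2)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

def n_for_relative_precision(p_hat, rel_precision, design_effect=2.0, alpha=0.05):
    """Survey size so the CI half-width is rel_precision * N_hat (approximate)."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(design_effect * (1 - p_hat) * z**2 / (p_hat * rel_precision**2))

# Illustrative: 600 unique objects distributed, 25% of survey respondents
# report having received one, survey of n = 400 with an assumed design effect of 2.
print(multiplier_estimate(M=600, p_hat=0.25, n=400))
print(n_for_relative_precision(p_hat=0.25, rel_precision=0.20))
```

As the prose notes, a larger P (longer reference periods, more objects distributed) shrinks the relative error, which the second function makes explicit.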
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
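For comparison with the noncentrality-based measure defined by the authors, a widely used approximation inflates the usual design effect 1 + (m - 1) * ICC by the squared coefficient of variation of cluster size (an Eldridge-style formula). The sketch below uses that approximation with illustrative numbers; it is not the authors' relative efficiency measure.

```python
from math import ceil

def design_effect_equal(m_bar, icc):
    """Design effect with equal cluster sizes."""
    return 1 + (m_bar - 1) * icc

def design_effect_unequal(m_bar, cv, icc):
    """Approximate design effect when cluster sizes vary (CV = SD / mean)."""
    return 1 + ((cv**2 + 1) * m_bar - 1) * icc

def clusters_needed(n_individual, m_bar, cv, icc):
    """Clusters per arm, inflating an individually randomised sample size."""
    return ceil(n_individual * design_effect_unequal(m_bar, cv, icc) / m_bar)

# Illustrative: 128 subjects per arm needed under individual randomisation,
# mean cluster size 20, cluster-size CV 0.6, ICC 0.05.
deff_eq = design_effect_equal(20, 0.05)
deff_uneq = design_effect_unequal(20, 0.6, 0.05)
print("design effects (equal, unequal):", round(deff_eq, 3), round(deff_uneq, 3))
print("relative efficiency (equal / unequal):", round(deff_eq / deff_uneq, 3))
print("clusters per arm:", clusters_needed(128, 20, 0.6, 0.05))
```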
Hashimoto, Yuichiro
2017-01-01
The development of a robust ionization source using the counter-flow APCI, miniature mass spectrometer, and an automated sampling system for detecting explosives are described. These development efforts using mass spectrometry were made in order to improve the efficiencies of on-site detection in areas such as security, environmental, and industrial applications. A development team, including the author, has struggled for nearly 20 years to enhance the robustness and reduce the size of mass spectrometers to meet the requirements needed for on-site applications. This article focuses on the recent results related to the detection of explosive materials where automated particle sampling using a cyclone concentrator permitted the inspection time to be successfully reduced to 3 s. PMID:28337396
Haugan, Gørill; Drageset, Jorunn
2014-08-01
Depression and anxiety are particularly common among individuals living in long-term care facilities. Therefore, access to a valid and reliable measure of anxiety and depression among nursing home patients is highly warranted. The aim was to investigate the dimensionality, reliability and construct validity of the Hospital Anxiety and Depression Scale (HADS) in a cognitively intact nursing home population. Cross-sectional data were collected from two samples; 429 cognitively intact nursing home patients participated, representing 74 different Norwegian nursing homes. Confirmatory factor analyses and correlations with selected constructs were used. The two-factor model provided a good fit in Sample 1, revealing a poorer fit in Sample 2. Good-to-acceptable measurement reliability was demonstrated, and construct validity was supported. Using listwise deletion, the sample sizes were 227 and 187 for Sample 1 and Sample 2, respectively. Greater sample sizes would have strengthened the statistical power of the tests. The researchers visited the participants to help fill in the questionnaires; this might have introduced some bias into the respondents' reporting. The 14 HADS items were part of larger questionnaires; thus, frail, older NH patients might have tired during the interview, causing a possible bias. Low reliability for depression was disclosed, mainly resulting from three items appearing to be inappropriate indicators of depression in this population. Further research is needed to explore which items might perform as more reliable indicators of depression among nursing home patients. Copyright © 2014 Elsevier B.V. All rights reserved.
Optical and size characterization of dissolved organic matter from the lower Yukon River
NASA Astrophysics Data System (ADS)
Guo, L.; Lin, H.
2017-12-01
Arctic rivers have experienced significant climate and environmental changes over the last several decades, and their export fluxes and the environmental fate of dissolved organic matter (DOM) have received considerable attention. Monthly or bimonthly water samples were collected from the Yukon River, one of the Arctic rivers, between July 2004 and September 2005 for size fractionation to isolate low-molecular-weight (LMW, <1 kDa) and high-molecular-weight (HMW, >1 kDa) DOM. The freeze-dried HMW-DOM was then characterized for its optical properties using fluorescence spectroscopy and for its colloidal size spectra using asymmetrical flow field-flow fractionation techniques. Ratios of the biological index (BIX) to the humification index (HIX) show a seasonal change, with lower values during river open seasons and higher values under the ice, and the influence of river discharge. Three major fluorescent DOM components were identified, including two humic-like components (Ex/Em at 260/480 nm and 250/420 nm, respectively) and one protein-like component (Ex/Em at 250/330 nm). The ratio of protein-like to humic-like components was broadly correlated with discharge, with low values during the spring freshet and high values under the ice. The relatively high protein-like/humic-like ratio during the ice-covered season suggested sources from macro-organisms and/or ice algae. Both protein-like and humic-like colloidal fluorophores were partitioned mostly in the 1-5 kDa size fraction, although the protein-like fluorophores in some samples also extended to larger colloidal sizes. The relationship between the chemical/biological reactivity and the size/optical characteristics of DOM needs to be further investigated.
Evaluating common de-identification heuristics for personal health information.
El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael
2006-11-21
With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy.
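Simulated re-identification attacks of this kind typically reduce to counting equivalence-class sizes for each combination of quasi-identifiers. The sketch below, with made-up columns and an assumed risk threshold of classes smaller than five, shows the basic computation rather than the authors' exact protocol.

```python
from itertools import combinations
import numpy as np
import pandas as pd

def reid_risk(df, quasi_identifiers, min_class_size=5):
    """Proportion of records whose equivalence class is 'small', i.e. records
    with re-identification risk above 1/min_class_size under record linkage."""
    class_sizes = df.groupby(list(quasi_identifiers))[df.columns[0]].transform("size")
    return float((class_sizes < min_class_size).mean())

# A made-up identification dataset of 5000 professionals.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "region": rng.choice(["East", "Central", "West", "North"], 5000),
    "gender": rng.choice(["F", "M"], 5000),
    "year_of_birth": rng.integers(1940, 1990, 5000),
})

for r in range(1, 4):
    for combo in combinations(data.columns, r):
        print(f"{'+'.join(combo):35s} share of high-risk records: "
              f"{reid_risk(data, combo):.3f}")
```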
Lo, Nathan C; Coulibaly, Jean T; Bendavid, Eran; N'Goran, Eliézer K; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I; Andrews, Jason R
2016-08-01
A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. We performed a cross-sectional study involving 114 children aged 6-15 years in six neighborhoods in Azaguié Ahoua, south Côte d'Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0-97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy.
Coulibaly, Jean T.; Bendavid, Eran; N’Goran, Eliézer K.; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I.; Andrews, Jason R.
2016-01-01
Background A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. Methodology We performed a cross-sectional study involving 114 children aged 6–15 years in six neighborhoods in Azaguié Ahoua, south Côte d’Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. Principal Findings The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0–97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. Conclusions/Significance This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy. PMID:27504954
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values to distributions of aquatic invertebrate communities, and ramifications of natural variability to the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but were generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reach(es) may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
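To illustrate why the small internal-consistency samples reported above yield imprecise coefficients, the sketch below applies the Fisher z approximation to a correlation-like reliability coefficient. The reliability value of 0.80 is an assumed illustrative figure, and treating a reliability coefficient this way is a simplification rather than the author's exact precision calculation.

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Approximate CI for a correlation-like coefficient via the Fisher z transform."""
    z = math.atanh(r)                     # Fisher z transform
    se = 1.0 / math.sqrt(n - 3)           # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

# Illustrative reliability of 0.80 (assumed) at the median (90) and mean (260) sample sizes
for n in (90, 260):
    lo, hi = fisher_z_ci(0.80, n)
    print(f"n={n}: 95% CI ≈ ({lo:.3f}, {hi:.3f}), width ≈ {hi - lo:.3f}")
```

With N = 90 the interval is roughly 0.15 wide versus about 0.09 at N = 260, which is the kind of precision loss the survey is concerned with.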
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Primary and Aggregate Size Distributions of PM in Tail Pipe Emissions from Diesel Engines
NASA Astrophysics Data System (ADS)
Arai, Masataka; Amagai, Kenji; Nakaji, Takayuki; Hayashi, Shinji
Particulate matter (PM) emission exhausted from diesel engines should be reduced to keep the air environment clean. PM emission is considered to consist of coarse and aggregate particles, and nuclei-mode particles with diameters of less than 50 nm. However, the detailed characteristics of these particles are still unknown and are needed for more physically accurate measurement and more effective reduction of exhaust PM emission. In this study, the size distributions of solid particles in PM emission are reported. PMs in the tail-pipe emission were sampled from three types of diesel engines. Sampled PM was chemically treated to separate the solid carbon fraction from other fractions such as the soluble organic fraction (SOF). Electron microscopic and optical-manual size measurement procedures were used to determine the size distribution of primary particles, which were formed through a coagulation process from nuclei-mode particles and make up the aggregate particles. The centrifugal sedimentation method was applied to measure the Stokes diameter of dry soot. Aerodynamic diameters of nano and aggregate particles were measured with a scanning mobility particle sizer (SMPS). The peak aggregate diameters detected by SMPS fell in the same size regime as the Stokes diameter of dry soot. Both primary and Stokes diameters of dry soot decreased with increases in engine speed and excess air ratio. The effects of fuel properties and engine type on primary and aggregate particle diameters are also discussed.
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus correction for alignment size bias is needed. In the current work, an R-based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study of how well this new algorithm performs on simulated datasets with various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
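The abstract does not spell out EntropyCorrect's regression model, so the sketch below is only a minimal Python illustration of the general idea: compute column entropies at several subsample sizes and extrapolate a linear fit against 1/n to an infinite sample. The toy column and the choice of 1/n as the regressor are assumptions, not the package's documented algorithm.

```python
import numpy as np

def shannon_entropy(column):
    """Shannon entropy (nats) of one alignment column given as a sequence of residues."""
    _, counts = np.unique(list(column), return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def extrapolated_entropy(column, sizes=(20, 40, 60, 80), reps=200, rng=None):
    """Estimate population entropy by subsampling and extrapolating to 1/n -> 0."""
    rng = np.random.default_rng(rng)
    column = np.array(list(column))
    mean_h = [np.mean([shannon_entropy(rng.choice(column, n, replace=False))
                       for _ in range(reps)]) for n in sizes]
    slope, intercept = np.polyfit([1.0 / n for n in sizes], mean_h, 1)
    return intercept  # fitted entropy at 1/n = 0, i.e., an infinitely large sample

# Toy column: 100 sequences with three residues at unequal frequencies (assumed data)
toy = "A" * 60 + "G" * 30 + "T" * 10
print(extrapolated_entropy(toy))
```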
Enhanced Raman spectroscopy of 2,4,6-TNT in anatase and rutile titania nanocrystals
NASA Astrophysics Data System (ADS)
De La Cruz-Montoya, Edwin; Jeréz, Jaqueline I.; Balaguera-Gelves, Marcia; Luna-Pineda, Tatiana; Castro, Miguel E.; Hernández-Rivera, Samuel P.
2006-05-01
The majority of explosives found in antipersonnel and antitank landmines contain 2,4,6-trinitrotoluene (TNT). Chemical sensing of landmines and Improvised Explosive Devices (IED) requires detecting the chemical signatures of the explosive components in these devices. Nanotechnology is ideally suited to the needs of microsensor development by providing new materials and methods that can be employed for trace explosive detection. This work is focused on modification of nano-scaled colloids of titanium dioxide (Titania: anatase, rutile and brookite) and thin layers of the oxides as substrates for use in Enhanced Raman Scattering (ERS) spectroscopy. Ultrafine particles have been generated by hydrothermally treating the sol-gel derived hydrous oxides. ERS spectra were measured for nanocrystalline anatase Titania samples prepared with different average sizes: 38 nm (without acid), 24 nm (without acid) and 7 nm (with HCl). Bulk phase (commercial) Titania and KBr were also used to prepare mixtures with TNT to look for the Enhanced Raman Effect of the nitroaromatic explosive on the test surfaces. The studies clearly indicated that the anatase crystal size affects the enhancement of the TNT Raman signal. This enhancement was highest for the samples with Titania average crystal size of 7 nm.
Flocculation and aggregation in a microgravity environment (FAME)
NASA Technical Reports Server (NTRS)
Ansari, Rafat R.; Dhadwal, Harbans S.; Suh, Kwang I.
1994-01-01
An experiment to study flocculation phenomena in the constrained microgravity environment of a space shuttle or space station is described. The small, lightweight experiment easily fits in a Spacelab Glovebox. Using an integrated fiber optic dynamic light scattering (DLS) system, it obtains high-precision particle size measurements from dispersions of colloidal particles within seconds, needs no onboard optical alignment or index-matching fluid, and offers sample mixing and shear-melting capabilities to study aggregation (flocculation and coagulation) phenomena under both quiescent and controlled agitation conditions. The experimental system can easily be adapted for other microgravity experiments requiring the use of DLS. Preliminary results of a ground-based study are reported.
Bekemeier, Betty; Marlowe, Justin; Squires, Linda Sharee; Tebaldi, Jennifer; Park, Seungeun
Our objective was to estimate the gap between the costs for local health jurisdictions (LHJs) to provide foundational public health services (FPHS) and actual spending on FPHS and to examine factors associated with that gap. We employed resource-based cost estimation methods for this observational study and conducted multivariate analyses with measures derived from secondary administrative data. We used primary data collected from LHJ leaders that depicted 2014 spending and perceived need. We also included secondary administrative data depicting annual 2000-2013 expenditures organized into categories containing key elements of FPHS areas. We included primary data from a representative sample of 10 LHJs in Washington State and secondary data for all 35 LHJs in Washington. Participants were public health practice leaders from each sample LHJ. Our main outcome of interest was the gap identified between current spending and the perceived spending needed to provide FPHS in a jurisdiction. Actual FPHS spending was approximately 65% of spending needed to provide overall FPHS for our sample LHJs, but the size of the gap varied substantially by program. Some gaps also varied widely by LHJ, with spending gaps widest among rural and high poverty communities. Percent poverty and the metropolitan nature of a jurisdiction were factors significantly related to FPHS spending in our multivariate analyses. Actual spending lags far behind local officials' estimates of spending needed to provide FPHS and is likely influenced by local conditions. Major apparent gaps between spending and need, particularly in areas such as costly Business Competencies, underscore the need for cross-cutting capabilities to support public health system responsiveness and for attention to be paid to local conditions.
NASA Astrophysics Data System (ADS)
Mann, Griffin
The area that comprises the Northwest Shelf in Lea Co., New Mexico has been heavily drilled over the past half century, with the main targets being shallow reservoirs within the Permian section (San Andres and Grayburg Formations). With the focus shifting towards deeper horizons, there is a need for more petrophysical data pertaining to these formations, which this study addresses through a variety of techniques. This study involves the use of contact angle measurements, fluid imbibition tests, Mercury Injection Capillary Pressure (MICP) and log analysis to evaluate the nano-petrophysical properties of the Yeso, Abo and Cisco Formations within the Northwest Shelf area of southeast New Mexico. From contact angle measurements, all of the samples studied were found to be oil-wetting, as n-decane spreads onto the rock surface much more quickly than the other fluids tested (deionized water and API brine). Imbibition tests showed a well-connected pore network for all of the samples, with the highest imbibition slopes recorded for the Abo samples. MICP provided a variety of pore structure data, including porosity, pore-throat size distributions, permeability and tortuosity. The Abo samples had the highest porosity percentages, above 15%, with all the other samples ranging from 4-7%. The majority of the pore-throat sizes for most of the samples fell within the 1-10 μm range. The only exceptions were the Paddock Member within the Yeso Formation, which had a higher percentage of larger pores (10-1000 μm), and one of the Cisco Formation samples, in which the majority of pore sizes fell in the 0.1-1 μm range. The log analysis produced log calculations and curves for cross-plot porosity and water saturation that were then used to derive a value for permeability. The porosity and permeability values were comparable with those measured from our MICP analyses and with literature values.
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Measurement of the bed material of gravel-bed rivers
Milhous, R.T.; ,
2002-01-01
The measurement of the physical properties of a gravel-bed river is important in the calculation of sediment transport and physical habitat values for aquatic animals. These properties are not always easy to measure. One recent report on flushing of fines from the Klamath River did not contain information on one location because the grain size distribution of the armour could not be measured on a dry river bar. The grain size distribution could have been measured using a barrel sampler and converting the measurements to the same as would have been measured if a dry bar existed at the site. In another recent paper the porosity was calculated from an average value relation from the literature. The results of that paper may be sensitive to the actual value of porosity. Using the bulk density sampling technique based on a water displacement process presented in this paper, the porosity could have been calculated from the measured bulk density. The principal topics of this paper are the measurement of the size distribution of the armour and the measurement of the porosity of the substrate. The 'standard' method of sampling the armour is to do a Wolman-type count of the armour on a dry section of the river bed. When a dry bar does not exist, the armour in an area of the wet streambed is sampled and the measurements are transformed analytically to the same type of results that would have been obtained from the standard Wolman procedure. A comparison of the results for the San Miguel River in Colorado shows significant differences in the median size of the armour. The method used to determine the porosity is not 'high-tech', and there is a need to improve knowledge of the porosity because of its importance in the aquatic ecosystem. The technique is to measure the in-situ volume of a substrate sample by measuring the volume of a frame over the substrate and then repeating the volume measurement after the sample is obtained from within the frame. The difference in the volumes is the volume of the sample.
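The final arithmetic implied above (porosity from measured bulk density) is simple; a minimal sketch follows, assuming a quartz-dominated particle density of 2.65 g/cm³ and illustrative field numbers that are not taken from the paper.

```python
# Porosity from a bulk-density measurement (illustrative numbers, not from the paper).
sample_mass_g = 5200.0        # dry mass of the substrate sample (assumed)
in_situ_volume_cm3 = 3100.0   # volume difference measured with the water-displacement frame (assumed)
particle_density = 2.65       # g/cm^3, typical for quartz-dominated sediment (assumed)

bulk_density = sample_mass_g / in_situ_volume_cm3
porosity = 1.0 - bulk_density / particle_density
print(f"bulk density = {bulk_density:.2f} g/cm^3, porosity = {porosity:.2f}")
```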
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation, as well as on the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
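The quantities the survey checks for (significance level, power, and minimum clinically important effect) feed the familiar two-sample formula for comparing means; a minimal sketch follows, with illustrative values not tied to any trial in the review.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sided z-approximation sample size per group for a two-sample difference in means."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Example: detect a 5-unit difference with SD 12 at alpha = 0.05 and 80% power (illustrative values)
print(n_per_group(delta=5, sd=12))   # ≈ 91 per group
```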
Apollo rocks, fines and soil cores
NASA Astrophysics Data System (ADS)
Allton, J.; Bevill, T.
Apollo rocks and soils not only established basic lunar properties and ground truth for global remote sensing, they also provided important lessons for planetary protection (Adv. Space Res., 1998, v. 22, no. 3, pp. 373-382). The six Apollo missions returned 2196 samples weighing 381.7 kg, comprised of rocks, fines, soil cores and 2 gas samples. By examining which samples were allocated for scientific investigations, information was obtained on usefulness of sampling strategy, sampling devices and containers, sample types and diversity, and on size of sample needed by various disciplines. Diversity was increased by using rakes to gather small rocks on the Moon and by removing fragments >1 mm from soils by sieving in the laboratory. Breccias and soil cores are diverse internally. Per unit weight these samples were more often allocated for research. Apollo investigators became adept at wringing information from very small sample sizes. By pushing the analytical limits, the main concern was adequate size for representative sampling. Typical allocations for trace element analyses were 750 mg for rocks, 300 mg for fines and 70 mg for core subsamples. Age-dating and isotope systematics allocations were typically 1 g for rocks and fines, but only 10% of that amount for core depth subsamples. Historically, allocations for organics and microbiology were 4 g (10% for cores). Modern allocations for biomarker detection are 100 mg. Other disciplines supported have been cosmogenic nuclides, rock and soil petrology, sedimentary volatiles, reflectance, magnetics, and biohazard studies. Highly applicable to future sample return missions was the Apollo experience with organic contamination, estimated to be from 1 to 5 ng/g sample for Apollo 11 (Simonheit & Flory, 1970; Apollo 11, 12 & 13 Organic Contamination Monitoring History, U.C. Berkeley; Burlingame et al., 1970, Apollo 11 LSC, pp. 1779-1792). Eleven sources of contaminants, of which 7 are applicable to robotic missions, were identified and reduced, thus improving Apollo 12 samples to 0.1 ng/g. Apollo sample documentation preserves the parentage, orientation, and location, packaging, handling and environmental histories of each of the 90,000 subsamples currently curated. Active research on Apollo samples continues today, and because 80% by weight of the Apollo collection remains pristine, researchers have a reservoir of material to support studies well into the future.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
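A simplified simulation sketch of the blinded re-estimation idea described above follows: estimate the variance from the pooled (blinded) interim data, recompute the target sample size, finish the trial, and apply the standard two-sample t-test. It does not reproduce the paper's exact distributional results, non-inferiority margins, or adjusted significance levels; the interim size, target effect, and simulation settings are assumptions.

```python
import numpy as np
from scipy import stats

def simulate(delta=0.0, sigma=1.0, n1_per_arm=25, target_delta=0.5,
             alpha=0.05, power=0.80, n_sims=5000, seed=1):
    """Empirical rejection rate of the final two-sample t-test after blinded re-estimation."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    rejections = 0
    for _ in range(n_sims):
        a1 = rng.normal(delta, sigma, n1_per_arm)
        b1 = rng.normal(0.0, sigma, n1_per_arm)
        # Blinded step: pool the interim data and use the one-sample variance estimator
        pooled = np.concatenate([a1, b1])
        s2_blinded = pooled.var(ddof=1)
        n_target = int(np.ceil(2 * z**2 * s2_blinded / target_delta**2))
        n2 = max(n_target - n1_per_arm, 0)        # additional subjects per arm
        a = np.concatenate([a1, rng.normal(delta, sigma, n2)])
        b = np.concatenate([b1, rng.normal(0.0, sigma, n2)])
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / n_sims

print("empirical type I error:", simulate(delta=0.0))   # under the null
print("empirical power:      ", simulate(delta=0.5))    # at the targeted effect
```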
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination usually is taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants’ attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups, namely, 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of the smartphone application for learning sample size calculation.
NASA Technical Reports Server (NTRS)
Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.
1977-01-01
Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. An additional six samples at least are planned for acquisition in the remaining Extended Mission (to January 1979) for each lander. All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.
2009-10-01
efficacy of methylselenocysteine (MSC) and finasteride in preventing the clonal expansion of early stage, small volume prostate cancer using a tumor...xenograft model. When used alone, MSC had little effect on tumor growth, whereas finasteride was only effective for a short duration. However, the...repeat of the experiment with larger sample size is needed to corroborate the findings. We also demonstrated a synergy between emodin and finasteride in
Risk factors for lower extremity injury: a review of the literature
Murphy, D; Connolly, D; Beynnon, B
2003-01-01
Prospective studies on risk factors for lower extremity injury are reviewed. Many intrinsic and extrinsic risk factors have been implicated; however, there is little agreement with respect to the findings. Future prospective studies are needed using sufficient sample sizes of males and females, including collection of exposure data, and using established methods for identifying and classifying injury severity to conclusively determine additional risk factors for lower extremity injury. PMID:12547739
NASA Technical Reports Server (NTRS)
Williams, George O., Jr.
1996-01-01
This study is a continuation of the summer of 1994 NASA/ASEE Summer Faculty Fellowship Program. This effort is a portion of the ongoing work by the Biophysics Branch of the Marshall Space Flight Center. The work has focused recently on the separation of macromolecules using capillary electrophoresis (CE). Two primary goals were established for the effort this summer. First, we wanted to use capillary electrophoresis to study the electrohydrodynamics of a sample stream. Secondly, there was a need to develop a methodology for using CE for separation of DNA molecules of various sizes. In order to achieve these goals we needed to establish a procedure for detection of a sample plug under the influence of an electric field. Detection of the sample with the microscope and image analysis system would be helpful in studying the electrohydrodynamics of this stream under load. Videotaping this process under the influence of an electric field in real time would also be useful. Imaging and photography of the sample/background electrolyte interface would be vital to this study. Finally, detection and imaging of electroosmotic flow and pressure driven flow must be accomplished.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriesel, Jason M.; Makarem, Camille N.; Phillips, Mark C.
We describe a versatile mid-infrared (Mid-IR) spectroscopy system developed to measure the concentration of a wide range of gases with an ultra-low sample size. The system combines a rapidly-swept external cavity quantum cascade laser (ECQCL) with a hollow fiber gas cell. The ECQCL has sufficient spectral resolution and reproducibility to measure gases with narrow features (e.g., water, methane, ammonia, etc.), and also the spectral tuning range needed to measure volatile organic compounds (VOCs), (e.g., aldehydes, ketones, hydrocarbons), sulfur compounds, chlorine compounds, etc. The hollow fiber is a capillary tube having an internal reflective coating optimized for transmitting the Mid-IR laser beam to a detector. Sample gas introduced into the fiber (e.g., internal volume = 0.6 ml) interacts strongly with the laser beam, and despite relatively modest path lengths (e.g., L ~ 3 m), the requisite quantity of sample needed for sensitive measurements can be significantly less than what is required using conventional IR laser spectroscopy systems. Example measurements are presented including quantification of VOCs relevant for human breath analysis with a sensitivity of ~2 picomoles at a 1 Hz data rate.
NASA Astrophysics Data System (ADS)
Kriesel, Jason M.; Makarem, Camille N.; Phillips, Mark C.; Moran, James J.; Coleman, Max L.; Christensen, Lance E.; Kelly, James F.
2017-05-01
We describe a versatile mid-infrared (Mid-IR) spectroscopy system developed to measure the concentration of a wide range of gases with an ultra-low sample size. The system combines a rapidly-swept external cavity quantum cascade laser (ECQCL) with a hollow fiber gas cell. The ECQCL has sufficient spectral resolution and reproducibility to measure gases with narrow features (e.g., water, methane, ammonia, etc.), and also the spectral tuning range needed to measure volatile organic compounds (VOCs), (e.g., aldehydes, ketones, hydrocarbons), sulfur compounds, chlorine compounds, etc. The hollow fiber is a capillary tube having an internal reflective coating optimized for transmitting the Mid-IR laser beam to a detector. Sample gas introduced into the fiber (e.g., internal volume = 0.6 ml) interacts strongly with the laser beam, and despite relatively modest path lengths (e.g., L ~ 3 m), the requisite quantity of sample needed for sensitive measurements can be significantly less than what is required using conventional IR laser spectroscopy systems. Example measurements are presented including quantification of VOCs relevant for human breath analysis with a sensitivity of ~2 picomoles at a 1 Hz data rate.
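Back-of-envelope ideal-gas arithmetic shows why the 0.6 ml fiber volume quoted above puts trace-gas quantities in the picomole range; the pressure, temperature, and 100 ppb mixing ratio below are assumptions chosen for illustration, not the authors' stated detection conditions.

```python
# Moles of a trace analyte contained in the hollow-fiber cell (ideal gas, assumed conditions).
R = 8.314               # J/(mol*K)
P = 101325.0            # Pa, assumed ambient pressure in the fiber
T = 300.0               # K, assumed temperature
V = 0.6e-6              # m^3, the 0.6 ml internal volume quoted above
mixing_ratio = 100e-9   # 100 ppb analyte fraction (assumed, for illustration)

total_moles = P * V / (R * T)
analyte_picomoles = total_moles * mixing_ratio * 1e12
print(f"{analyte_picomoles:.1f} pmol of analyte in the fiber")   # ≈ 2.4 pmol
```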
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
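The paper's formula builds on the common correlation model and a goodness-of-fit statistic, which is not reproduced here; as a simpler stand-in consistent with the idea of working from a plain proportion of agreement, the sketch below sizes a two-rater study to estimate that proportion within a desired margin using the normal approximation.

```python
import math
from scipy.stats import norm

def n_for_agreement(p_agree, margin, alpha=0.05):
    """Subjects needed to estimate a proportion of agreement within +/- margin (normal approximation)."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(z**2 * p_agree * (1 - p_agree) / margin**2)

# Expect ~85% raw agreement between two raters; want a 95% CI no wider than +/- 5 percentage points
print(n_for_agreement(0.85, 0.05))   # ≈ 196 subjects
```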
Disease-Concordant Twins Empower Genetic Association Studies.
Tan, Qihua; Li, Weilong; Vandin, Fabio
2017-01-01
Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to high frequency of false-positive results and lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks for different mode of genetic inheritance. In general, for achieving a power estimate of 80%, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth of the sample size of an ordinary case-control design, with variations depending on genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed a high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for highly efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
Graham, Simon; O'Connor, Catherine C; Morgan, Stephen; Chamberlain, Catherine; Hocking, Jane
2017-06-01
Background Aboriginal and Torres Strait Islanders (Aboriginal) are Australia's first peoples. Between 2006 and 2015, HIV notifications increased among Aboriginal people; however, among non-Aboriginal people, notifications remained relatively stable. This systematic review and meta-analysis aims to examine the prevalence of HIV among Aboriginal people overall and by subgroups. In November 2015, a search of PubMed and Web of Science, grey literature and abstracts from conferences was conducted. A study was included if it reported the number of Aboriginal people tested and those who tested positive for HIV. The following variables were extracted: gender; Aboriginal status; population group (men who have sex with men, people who inject drugs, adults, youth in detention and pregnant females) and geographical location. An assessment of between-study heterogeneity (I² test) and within-study bias (selection, measurement and sample size) was also conducted. Seven studies were included; all were cross-sectional study designs. The overall sample size was 3772 and the prevalence of HIV was 0.1% (I² = 38.3%, P = 0.136). Five studies included convenience samples of people attending Australian Needle and Syringe Program Centres, clinics, hospitals and a youth detention centre, increasing the potential for selection bias. Four studies had a limited sample size, decreasing the ability to report pooled estimates. The prevalence of HIV among Aboriginal people in Australia is low. Community-based programs that include both prevention messages for those at risk of infection and culturally appropriate clinical management and support for Aboriginal people living with HIV are needed to prevent HIV increasing among Aboriginal people.
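A minimal fixed-effect sketch of the pooling and heterogeneity statistics quoted above (pooled prevalence and I²) follows; the study-level counts are invented for illustration, and the published meta-analysis may have used a different weighting or transformation.

```python
import numpy as np

def pooled_prevalence_fixed(cases, n):
    """Inverse-variance fixed-effect pooled prevalence, with Cochran's Q and I^2."""
    cases, n = np.asarray(cases, float), np.asarray(n, float)
    p = (cases + 0.5) / (n + 1.0)            # continuity correction for zero-event studies
    var = p * (1 - p) / n
    w = 1.0 / var
    p_pool = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_pool) ** 2)        # Cochran's Q
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return p_pool, q, i2

# Illustrative study-level counts (not the review's actual data)
cases = [1, 0, 2, 0, 1, 0, 0]
n     = [600, 250, 900, 400, 700, 500, 422]
print(pooled_prevalence_fixed(cases, n))
```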
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
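The paper's optimal and maximin designs depend on cost-effectiveness parameters not reproduced here; as a point of reference, the sketch below uses the classical cost-constrained result for a cluster randomized trial with a single continuous outcome, where the optimal number of subjects per cluster depends only on the cost ratio and the ICC. The cost and ICC values are assumptions.

```python
import math

def optimal_cluster_size(cost_per_cluster, cost_per_subject, icc):
    """Classical optimal subjects-per-cluster for a cluster randomized trial (single continuous outcome)."""
    return math.sqrt((cost_per_cluster / cost_per_subject) * (1 - icc) / icc)

def clusters_for_budget(budget_per_arm, cost_per_cluster, cost_per_subject, n_per_cluster):
    """Clusters per arm affordable for a given budget once the cluster size is fixed."""
    return math.floor(budget_per_arm / (cost_per_cluster + cost_per_subject * n_per_cluster))

# Assumed costs: 500 per cluster, 25 per subject, ICC = 0.05, budget of 20,000 per arm
n_opt = optimal_cluster_size(cost_per_cluster=500, cost_per_subject=25, icc=0.05)
print(round(n_opt, 1), clusters_for_budget(20000, 500, 25, round(n_opt)))   # ≈ 19.5 subjects, 20 clusters
```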
Infrared thermal wave nondestructive technology on the defect in the shell of solid rocket motor
NASA Astrophysics Data System (ADS)
Zhang, Wei; Song, Yuanjia; Yang, Zhengwei; Li, Ming; Tian, Gan
2010-10-01
Based on active infrared thermography nondestructive testing (NDT) technology, an emerging method developed in the areas of aviation, spaceflight and national defence, samples made from the shell materials of a Solid Rocket Motor (SRM), including a glass fiber flat-bottom-hole sample, a glass fiber inclusion sample and a steel flat-bottom-hole sample, were heated by a high-energy flash lamp. The subsurface flaws can be detected by measuring the temperature difference between flaws and the surrounding material. The results of the experiments show that: 1) the technique is a fast and effective inspection method that detects flaws in composites more easily than in metals, and it can preliminarily identify the defect position and size from the thermal image maps; 2) a best inspection time exists, at which the area of the hot spot equals that of the defect, and this can be used to estimate the defect size; the bigger the defect area, the easier it is to detect and the smaller the error in estimating the defect area; 3) the infrared thermal images obtained from the experiments always have high noise, especially for metal materials, due to high reflectivity and environmental factors, and need further processing.
Simulation of possible regolith optical alteration effects on carbonaceous chondrite meteorites
NASA Technical Reports Server (NTRS)
Clark, Beth E.; Fanale, Fraser P.; Robinson, Mark S.
1993-01-01
As the spectral reflectance search continues for links between meteorites and their parent body asteroids, the effects of optical surface alteration processes need to be considered. We present the results of an experimental simulation of the melting and recrystallization that occurs to a carbonaceous chondrite meteorite regolith powder upon heating. As done for the ordinary chondrite meteorites, we show the effects of possible parent-body regolith alteration processes on reflectance spectra of carbonaceous chondrites (CC's). For this study, six CC's of different mineralogical classes were obtained from the Antarctic Meteorite Collection: two CM meteorites, two CO meteorites, one CK, and one CV. Each sample was ground with a ceramic mortar and pestle to powders with maximum grain sizes of 180 and 90 microns. The reflectance spectra of these powders were measured at RELAB (Brown University) from 0.3 to 2.5 microns. Following comminution, the 90 micron grain size was melted in a nitrogen controlled-atmosphere fusion furnace at an approximate temperature of 1700 C. The fused sample was immediately held above a flow of nitrogen at 0 C for quenching. Following melting and recrystallization, the samples were reground to powders, and the reflectance spectra were remeasured. The effects on spectral reflectance for a sample of the CM carbonaceous chondrite called Murchison are shown.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
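A fixed-sample (non-sequential) sketch of the co-primary decision rule follows: claim benefit only if both endpoints are significant, with power computed from a bivariate normal for the correlated test statistics. The group-sequential boundaries and sample size recalculation features of the paper are not modeled, and the effect sizes and correlation are assumed.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def power_two_coprimary(n_per_arm, effect1, effect2, rho, alpha=0.025):
    """P(both one-sided tests significant) for two correlated standardized endpoints."""
    z_crit = norm.ppf(1 - alpha)
    mu = np.sqrt(n_per_arm / 2.0) * np.array([effect1, effect2])   # noncentrality of each Z statistic
    cov = np.array([[1.0, rho], [rho, 1.0]])
    lower = np.array([z_crit, z_crit]) - mu
    # P(W1 > lower1, W2 > lower2) for zero-mean W equals the CDF at -lower by symmetry
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(-lower)

# 150 per arm, standardized effects 0.4 and 0.35, correlation 0.5 between endpoints (assumed)
print(power_two_coprimary(n_per_arm=150, effect1=0.4, effect2=0.35, rho=0.5))
```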
Myster, Randall W; Malahy, Michael P
2012-09-01
Spatial patterns of tropical trees and shrubs are important to understanding their interactions and the resultant structure of tropical rainforests. To assess this issue, we took advantage of previously collected data on Neotropical tree and shrub stems identified to species and mapped for spatial coordinates in a 50 ha plot, sampled every five years over a 20-year period. These stem data were first placed into four groups, regardless of species, depending on their location in the vertical strata of the rainforest (shrubs, understory trees, mid-sized trees, tall trees) and then used to generate aggregation patterns for each sampling year. We found that shrubs and understory trees clumped at small spatial scales of a few meters for several of the years sampled. Alternatively, mid-sized trees and tall trees did not clump, nor did they show uniform (regular) patterns, during any sampling period. In general, (1) groups found higher in the canopy did not show aggregation on the ground and (2) the spatial patterns of all four groups showed similarity among different sampling years, thereby supporting a "shifting mosaic" view of plant communities over large areas. Spatial analyses such as this one are critical to understanding and predicting tree spacing, tree-tree replacements and the Neotropical forest patterns they produce, such as biodiversity and those needed for sustainability efforts.
Using long ssDNA polynucleotides to amplify STRs loci in degraded DNA samples
Pérez Santángelo, Agustín; Corti Bielsa, Rodrigo M.; Sala, Andrea; Ginart, Santiago; Corach, Daniel
2017-01-01
Obtaining informative short tandem repeat (STR) profiles from degraded DNA samples is a challenging task usually undermined by locus or allele dropouts and peak-height imbalances observed in capillary electrophoresis (CE) electropherograms, especially for those markers with large amplicon sizes. We hereby show that the current STR assays may be greatly improved for the detection of genetic markers in degraded DNA samples by using long single stranded DNA polynucleotides (ssDNA polynucleotides) as surrogates for PCR primers. These long primers allow a closer annealing to the repeat sequences, thereby reducing the length of the template required for the amplification in fragmented DNA samples, while at the same time rendering amplicons of larger sizes suitable for multiplex assays. We also demonstrate that the annealing of long ssDNA polynucleotides does not need to be fully complementary in the 5’ region of the primers, thus allowing for the design of practically any long primer sequence for developing new multiplex assays. Furthermore, genotyping of intact DNA samples could also benefit from utilizing long primers since their close annealing to the target STR sequences may overcome wrong profiling generated by insertions/deletions present between the STR region and the annealing site of the primers. Additionally, long ssDNA polynucleotides might be utilized in multiplex PCR assays for other types of degraded or fragmented DNA, e.g. circulating, cell-free DNA (ccfDNA). PMID:29099837
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elder, J.C.; Littlefield, L.G.; Tillery, M.I.
1978-06-01
A preliminary design of a prototype particulate stack sampler (PPSS) has been prepared, and development of several components is under way. The objective of this Environmental Protection Agency (EPA)-sponsored program is to develop and demonstrate a prototype sampler with capabilities similar to EPA Method 5 apparatus but without some of the more troublesome aspects. Features of the new design include higher sampling flow; display (on demand) of all variables and periodic calculation of percent isokinetic, sample volume, and stack velocity; automatic control of probe and filter heaters; stainless steel surfaces in contact with the sample stream; single-point particle size separation in the probe nozzle; null-probe capability in the nozzle; and lower weight in the components of the sampling train. Design considerations will limit use of the PPSS to stack gas temperatures under approximately 300 °C, which will exclude sampling some high-temperature stacks such as incinerators. Although need for filter weighing has not been eliminated in the new design, introduction of a variable-slit virtual impactor nozzle may eliminate the need for mass analysis of particles washed from the probe. Component development has shown some promise for continuous humidity measurement by an in-line wet-bulb, dry-bulb psychrometer.
Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud
2017-01-01
Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical Abstract: Two possible analytical strategies for the sizing and quantification of nanoparticles: Asymmetric Flow Field-Flow Fractionation with multiple detectors (allows the determination of true size and a mass-based particle size distribution), and Single Particle Inductively Coupled Plasma Mass Spectrometry (allows the determination of a spherical equivalent diameter of the particle and a number-based particle size distribution).
NASA Astrophysics Data System (ADS)
Chakraborty, Abhishek; Ervens, Barbara; Gupta, Tarun; Tripathi, Sachchida N.
2016-04-01
Size-resolved fog water samples were collected in two consecutive winters at Kanpur, a heavily polluted urban area of India. Samples were analyzed by an aerosol mass spectrometer after drying and directly in other instruments. Residues of fine fog droplets (diameter: 4-16 µm) are found to be more enriched with oxidized (oxygen to carbon ratio, O/C = 0.88) and low volatility organics than residues of coarse (diameter > 22 µm) and medium size (diameter: 16-22 µm) droplets with O/C of 0.68 and 0.74, respectively. These O/C ratios are much higher than those observed for background ambient organic aerosols, indicating efficient oxidation in fog water. Accompanying box model simulations reveal that longer residence times, together with high aqueous OH concentrations in fine droplets, can explain these trends. High aqueous OH concentrations in smaller droplets are caused by their highest surface-volume ratio and high Fe and Cu concentrations, allowing more uptake of gas phase OH and enhanced Fenton reaction rates, respectively. Although some volatile organic species may have escaped during droplet evaporation, these findings indicate that aqueous processing of dissolved organics varies with droplet size. Therefore, large (regional, global)-scale models need to consider the variable reaction rates, together with metal-catalyzed radical formation throughout droplet populations for accurately predicting aqueous secondary organic aerosol formation.
Experimental evidence for stochastic switching of supercooled phases in NdNiO3 nanostructures
NASA Astrophysics Data System (ADS)
Kumar, Devendra; Rajeev, K. P.; Alonso, J. A.
2018-03-01
A first-order phase transition is a dynamic phenomenon. In a multi-domain system, the presence of multiple domains of coexisting phases averages out the dynamical effects, making it nearly impossible to predict the exact nature of phase transition dynamics. Here, we report the metal-insulator transition in samples of sub-micrometer size NdNiO3 where the effect of averaging is minimized by restricting the number of domains under study. We observe the presence of supercooled metallic phases with supercooling of 40 K or more. The transformation from the supercooled metallic to the insulating state is a stochastic process that happens at different temperatures and times in different experimental runs. The experimental results are understood without incorporating material specific properties, suggesting that the behavior is of universal nature. The size of the sample needed to observe individual switching of supercooled domains, the degree of supercooling, and the time-temperature window of switching are expected to depend on the parameters such as quenched disorder, strain, and magnetic field.
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
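A minimal sketch of the higher criticism thresholding step mentioned above follows, using the standard Donoho-Jin form of the statistic; the PCC-based sample size optimization itself is not reproduced, and the toy p-values are assumed for illustration.

```python
import numpy as np

def higher_criticism_threshold(pvalues, alpha0=0.10):
    """Return the p-value cutoff chosen by higher criticism over the smallest alpha0 fraction."""
    p_sorted = np.sort(np.asarray(pvalues, float))
    n = p_sorted.size
    k = np.arange(1, n + 1)
    hc = np.sqrt(n) * (k / n - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted) + 1e-12)
    k_max = max(int(alpha0 * n), 1)
    k_star = np.argmax(hc[:k_max]) + 1        # 1-based index maximizing HC in the allowed range
    return p_sorted[k_star - 1]               # features with p below this value are retained

# Toy feature p-values: a few strong signals among mostly null features (assumed data)
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 1e-3, 20), rng.uniform(0, 1, 980)])
print(higher_criticism_threshold(p))
```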
Waks, Zeev; Weissbrod, Omer; Carmeli, Boaz; Norel, Raquel; Utro, Filippo; Goldschmidt, Yaara
2016-12-23
Compiling a comprehensive list of cancer driver genes is imperative for oncology diagnostics and drug development. While driver genes are typically discovered by analysis of tumor genomes, infrequently mutated driver genes often evade detection due to limited sample sizes. Here, we address sample size limitations by integrating tumor genomics data with a wide spectrum of gene-specific properties to search for rare drivers, functionally classify them, and detect features characteristic of driver genes. We show that our approach, CAnceR geNe similarity-based Annotator and Finder (CARNAF), enables detection of potentially novel drivers that eluded over a dozen pan-cancer/multi-tumor type studies. In particular, feature analysis reveals a highly concentrated pool of known and putative tumor suppressors among the <1% of genes that encode very large, chromatin-regulating proteins. Thus, our study highlights the need for deeper characterization of very large, epigenetic regulators in the context of cancer causality.
Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys
Sauer, J.R.; Dolton, D.D.; Droege, S.
1994-01-01
The mourning dove (Zenaida macroura) Call-Count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-Count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-Count Survey. Although the Call-Count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.
Köllner, Martin G.; Schultheiss, Oliver C.
2014-01-01
The correlation between implicit and explicit motive measures and potential moderators of this relationship were examined meta-analytically, using Hunter and Schmidt's (2004) approach. Studies from a comprehensive search in PsycINFO, data sets of our research group, a literature list compiled by an expert, and the results of a request for gray literature were examined for relevance and coded. Analyses were based on 49 papers, 56 independent samples, 6151 subjects, and 167 correlations. The correlations (ρ) between implicit and explicit measures were 0.130 (CI: 0.077–0.183) for the overall relationship, 0.116 (CI: 0.050–0.182) for affiliation, 0.139 (CI: 0.080–0.198) for achievement, and 0.038 (CI: −0.055–0.131) for power. Participant age did not moderate the size of these relationships. However, a greater proportion of males in the samples and an earlier publication year were associated with larger effect sizes. PMID:25152741
Bogdanova, Yelena; Yee, Megan K; Ho, Vivian T; Cicerone, Keith D
Comprehensive review of the use of computerized treatment as a rehabilitation tool for attention and executive function in adults (aged 18 years or older) who suffered an acquired brain injury. Systematic review of empirical research. Two reviewers independently assessed articles using the methodological quality criteria of Cicerone et al. Data extracted included sample size, diagnosis, intervention information, treatment schedule, assessment methods, and outcome measures. A literature review (PubMed, EMBASE, Ovid, Cochrane, PsychINFO, CINAHL) generated a total of 4931 publications. Twenty-eight studies using computerized cognitive interventions targeting attention and executive functions were included in this review. In 23 studies, significant improvements in attention and executive function subsequent to training were reported; in the remaining 5, promising trends were observed. Preliminary evidence suggests improvements in cognitive function following computerized rehabilitation for acquired brain injury populations including traumatic brain injury and stroke. Further studies are needed to address methodological issues (eg, small sample size, inadequate control groups) and to inform development of guidelines and standardized protocols.
Emond, Mary J; Louie, Tin; Emerson, Julia; Zhao, Wei; Mathias, Rasika A; Knowles, Michael R; Wright, Fred A; Rieder, Mark J; Tabor, Holly K; Nickerson, Deborah A; Barnes, Kathleen C; Gibson, Ronald L; Bamshad, Michael J
2012-07-08
Exome sequencing has become a powerful and effective strategy for the discovery of genes underlying Mendelian disorders. However, use of exome sequencing to identify variants associated with complex traits has been more challenging, partly because the sample sizes needed for adequate power may be very large. One strategy to increase efficiency is to sequence individuals who are at both ends of a phenotype distribution (those with extreme phenotypes). Because the frequencies of alleles that contribute to the trait are enriched in one or both phenotype extremes, a modest sample size can potentially be used to identify novel candidate genes and/or alleles. As part of the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP), we used an extreme phenotype study design to discover that variants in DCTN4, encoding a dynactin protein, are associated with time to first P. aeruginosa airway infection, chronic P. aeruginosa infection and mucoid P. aeruginosa in individuals with cystic fibrosis.
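To illustrate why sampling phenotype extremes enriches trait-associated alleles, the following hypothetical simulation (not the ESP analysis; the allele frequency, effect size, and tail fraction are assumed) compares a causal variant's frequency in the phenotype tails with its frequency in a random sample of the same total size.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100_000        # simulated population size (arbitrary)
maf = 0.02         # minor allele frequency of the causal variant (assumed)
beta = 0.5         # effect of each minor allele on the trait, in SD units (assumed)

geno = rng.binomial(2, maf, N)                      # genotypes coded 0/1/2
trait = beta * geno + rng.normal(0, 1, N)           # trait = genetic effect + noise

n_tail = 500                                        # individuals sequenced per extreme
order = np.argsort(trait)
lower, upper = order[:n_tail], order[-n_tail:]
random_sample = rng.choice(N, size=2 * n_tail, replace=False)   # same cost, random design

def freq(idx):
    """Allele frequency of the causal variant in the selected individuals."""
    return geno[idx].sum() / (2 * len(idx))

print(f"population allele frequency : {freq(np.arange(N)):.4f}")
print(f"random sample (n={2*n_tail})   : {freq(random_sample):.4f}")
print(f"lower phenotype tail        : {freq(lower):.4f}")
print(f"upper phenotype tail        : {freq(upper):.4f}")
```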
Brain Stimulation in Alzheimer's Disease.
Chang, Chun-Hung; Lane, Hsien-Yuan; Lin, Chieh-Hsin
2018-01-01
Brain stimulation techniques can modulate cognitive functions in many neuropsychiatric diseases. Pilot studies have shown promising effects of brain stimulation on Alzheimer's disease (AD). Brain stimulation techniques can be categorized into non-invasive brain stimulation (NIBS) and invasive brain stimulation (IBS). IBS includes deep brain stimulation (DBS) and invasive vagus nerve stimulation (VNS), whereas NIBS includes transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), electroconvulsive treatment (ECT), magnetic seizure therapy (MST), cranial electrostimulation (CES), and non-invasive VNS. We reviewed the cutting-edge research on these brain stimulation techniques and discussed their therapeutic effects on AD. Both IBS and NIBS may have the potential to be developed as novel treatments for AD; however, mixed findings may result from differences in study designs, patient selection, populations, or sample sizes. Therefore, the efficacy of NIBS and IBS in AD remains uncertain and needs to be further investigated. Moreover, more standardized study designs with larger sample sizes and longitudinal follow-up are warranted to establish a structural guide for future studies and clinical application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature increased the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ increased the textural properties of ZIF-8 samples.
Reducing the number of reconstructions needed for estimating channelized observer performance
NASA Astrophysics Data System (ADS)
Pineda, Angel R.; Miedema, Hope; Brenner, Melissa; Altaf, Sana
2018-03-01
A challenge for task-based optimization is the time required for each reconstructed image in applications where reconstructions are time consuming. Our goal is to reduce the number of reconstructions needed to estimate the area under the receiver operating characteristic curve (AUC) of the infinitely-trained optimal channelized linear observer. We explore the use of classifiers which either do not invert the channel covariance matrix or do feature selection. We also study the assumption that multiple low contrast signals in the same image of a non-linear reconstruction do not significantly change the estimate of the AUC. We compared the AUC of several classifiers (Hotelling, logistic regression, logistic regression using Firth bias reduction and the least absolute shrinkage and selection operator (LASSO)) with a small number of observations both for normal simulated data and images from a total variation reconstruction in magnetic resonance imaging (MRI). We used 10 Laguerre-Gauss channels and the Mann-Whitney estimator for AUC. For this data, our results show that at small sample sizes feature selection using the LASSO technique can decrease bias of the AUC estimation with increased variance and that for large sample sizes the difference between these classifiers is small. We also compared the use of multiple signals in a single reconstructed image to reduce the number of reconstructions in a total variation reconstruction for accelerated imaging in MRI. We found that AUC estimation using multiple low contrast signals in the same image resulted in similar AUC estimates as doing a single reconstruction per signal leading to a 13x reduction in the number of reconstructions needed.
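The Mann-Whitney estimator of AUC used in this abstract can be written in a few lines. The sketch below assumes Gaussian channel outputs and a Hotelling-style template estimated from training data; these are illustrative stand-ins, not the paper's exact channels, reconstruction, or sample sizes.

```python
import numpy as np

rng = np.random.default_rng(2)

def mann_whitney_auc(pos, neg):
    """Nonparametric (Mann-Whitney) AUC estimate from two score samples, ties counted 1/2."""
    pos = np.asarray(pos)[:, None]
    neg = np.asarray(neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# illustrative data: 10 channel outputs per image (e.g. Laguerre-Gauss channels)
n_ch, n_train, n_test = 10, 100, 100
signal = 0.3 * rng.normal(size=n_ch)                 # assumed mean channel response to the signal
cov = np.eye(n_ch)                                   # assumed channel covariance
absent_tr = rng.multivariate_normal(np.zeros(n_ch), cov, n_train)
present_tr = rng.multivariate_normal(signal, cov, n_train)

# Hotelling-type template estimated from the training observations
s_hat = present_tr.mean(0) - absent_tr.mean(0)
cov_hat = 0.5 * (np.cov(absent_tr.T) + np.cov(present_tr.T))
w = np.linalg.solve(cov_hat, s_hat)

# apply the observer to independent test observations and estimate AUC
absent_te = rng.multivariate_normal(np.zeros(n_ch), cov, n_test)
present_te = rng.multivariate_normal(signal, cov, n_test)
print(f"Mann-Whitney AUC estimate: {mann_whitney_auc(present_te @ w, absent_te @ w):.3f}")
```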
NASA Astrophysics Data System (ADS)
Dietze, M. C.; Davidson, C. D.; Desai, A. R.; Feng, X.; Kelly, R.; Kooper, R.; LeBauer, D. S.; Mantooth, J.; McHenry, K.; Serbin, S. P.; Wang, D.
2012-12-01
Ecosystem models are designed to synthesize our current understanding of how ecosystems function and to predict responses to novel conditions, such as climate change. Reducing uncertainties in such models can thus improve both basic scientific understanding and our predictive capacity, but rarely have the models themselves been employed in the design of field campaigns. In the first part of this paper we provide a synthesis of uncertainty analyses conducted using the Predictive Ecosystem Analyzer (PEcAn) ecoinformatics workflow on the Ecosystem Demography model v2 (ED2). This work spans a number of projects synthesizing trait databases and using Bayesian data assimilation techniques to incorporate field data across temperate forests, grasslands, agriculture, short rotation forestry, boreal forests, and tundra. We report on a number of data needs that span a diverse array of biomes, such as the need for better constraint on growth respiration. We also identify other data needs that are biome specific, such as reproductive allocation in tundra, leaf dark respiration in forestry and early-successional trees, and root allocation and turnover in mid- and late-successional trees. Future data collection needs to balance the unequal distribution of past measurements across biomes (temperate biased) and processes (aboveground biased) with the sensitivities of different processes. In the second part we present the development of a power analysis and sampling optimization module for the PEcAn system. This module uses the results of variance decomposition analyses to estimate the further reduction in model predictive uncertainty for different sample sizes of different variables. By assigning a cost to each measurement type, we apply basic economic theory to optimize the reduction in model uncertainty for any total expenditure, or to determine the cost required to reduce uncertainty to a given threshold. Using this system we find that sampling switches among multiple measurement types but favors those with no prior measurements due to the need to integrate over prior uncertainty in within- and among-site variability. When starting from scratch in a new system, the optimal design favors initial measurements of SLA due to high sensitivity and low cost. The value of many data types, such as photosynthetic response curves, depends strongly on whether one includes initial equipment costs or just per-sample costs. Similarly, sampling at previously measured locations is favored when infrastructure costs are high; otherwise, across-site sampling is favored over intensive sampling except when within-site variability strongly dominates.
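The cost-aware sampling optimization described in the second part can be caricatured with a greedy allocation. The sketch below is not the PEcAn module: the variable names, variance contributions, per-sample costs, budget, and the assumed 1/(1 + n) shrinkage of each variance contribution are all hypothetical.

```python
# Greedy cost-aware sampling allocation (illustrative; all numbers are assumptions).
variables = {
    # name: (variance contribution with no new data, cost per sample)
    "SLA":                (0.40, 5.0),
    "growth_respiration": (0.30, 20.0),
    "root_turnover":      (0.20, 50.0),
    "dark_respiration":   (0.10, 15.0),
}
budget = 500.0

def contribution(prior_var, n):
    # assumed shrinkage of a parameter's variance contribution with n new samples
    return prior_var / (1.0 + n)

alloc = {v: 0 for v in variables}
spent = 0.0
while True:
    best, best_gain_per_cost = None, 0.0
    for v, (var0, cost) in variables.items():
        if spent + cost > budget:
            continue                                   # cannot afford this measurement
        gain = contribution(var0, alloc[v]) - contribution(var0, alloc[v] + 1)
        if gain / cost > best_gain_per_cost:
            best, best_gain_per_cost = v, gain / cost
    if best is None:
        break                                          # budget exhausted
    alloc[best] += 1
    spent += variables[best][1]

total_var = sum(contribution(var0, alloc[v]) for v, (var0, _) in variables.items())
print("allocation:", alloc, " cost:", spent, f" remaining predictive variance: {total_var:.3f}")
```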
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
NASA Astrophysics Data System (ADS)
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
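The images-versus-points trade-off can be explored with a small Monte Carlo experiment. The sketch below is illustrative rather than the paper's simulation: it assumes beta-distributed between-image variation in true cover and binomial point scoring, and compares designs with equal total scoring effort.

```python
import numpy as np

rng = np.random.default_rng(3)

def cover_precision(n_images, n_points, mean_cover=0.10, between_image_sd=0.08, reps=5000):
    """Monte Carlo mean and SE of estimated percent cover for a given subsampling design."""
    m, s2 = mean_cover, between_image_sd ** 2
    k = m * (1 - m) / s2 - 1                       # method-of-moments beta parameters
    a, b = m * k, (1 - m) * k
    est = np.empty(reps)
    for r in range(reps):
        true_cover = rng.beta(a, b, n_images)      # one true cover value per image
        hits = rng.binomial(n_points, true_cover)  # scored points falling on target biota
        est[r] = hits.sum() / (n_images * n_points)
    return est.mean(), est.std(ddof=1)

# equal total effort (2000 scored points), different allocations
for n_img, n_pts in [(20, 100), (50, 40), (100, 20), (200, 10)]:
    m, se = cover_precision(n_img, n_pts)
    print(f"{n_img:4d} images x {n_pts:3d} points  mean = {m:.3f}  SE = {se:.4f}")
```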
Feedback Augmented Sub-Ranging (FASR) Quantizer
NASA Technical Reports Server (NTRS)
Guilligan, Gerard
2012-01-01
This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage giving a higher closed-loop bandwidth compared to the prior art, and (3) a reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
Stehman, S.V.; Wickham, J.D.; Wade, T.G.; Smith, J.H.
2008-01-01
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land-cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. A multi-support approach is needed because these objectives require spatial units of different sizes for reference data collection and analysis. Determining a sampling design that meets the full suite of desirable objectives for the NLCD 2001 accuracy assessment requires reconciling potentially conflicting design features that arise from targeting the different objectives. Multi-stage cluster sampling provides the general structure to achieve a multi-support assessment, and the flexibility to target different objectives at different stages of the design. We describe the implementation of two-stage cluster sampling for the initial phase of the NLCD 2001 assessment, and identify gaps in existing knowledge where research is needed to allow full implementation of a multi-objective, multi-support assessment.
Laboratory evaluation of the Sequoia Scientific LISST-ABS acoustic backscatter sediment sensor
Snazelle, Teri T.
2017-12-18
Sequoia Scientific’s LISST-ABS is an acoustic backscatter sensor designed to measure suspended-sediment concentration at a point source. Three LISST-ABS were evaluated at the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF). Serial numbers 6010, 6039, and 6058 were assessed for accuracy in solutions with varying particle-size distributions and for the effect of temperature on sensor accuracy. Certified sediment samples composed of different ranges of particle size were purchased from Powder Technology Inc. These sediment samples were 30–80-micron (µm) Arizona Test Dust; less than 22-µm ISO 12103-1, A1 Ultrafine Test Dust; and 149-µm MIL-STD 810E Silica Dust. The sensor was able to accurately measure suspended-sediment concentration when calibrated with sediment of the same particle-size distribution as the measured sediment. Overall testing demonstrated that sensors calibrated with finer-sized sediments overdetect sediment concentrations with coarser-sized sediments, and sensors calibrated with coarser-sized sediments do not detect increases in sediment concentrations from small and fine sediments. These test results are not unexpected for an acoustic-backscatter device and stress the need for using accurate site-specific particle-size distributions during sensor calibration. When calibrated for ultrafine dust with a less than 22-µm particle size (silt) and with the Arizona Test Dust with a 30–80-µm range, the data from sensor 6039 were biased high when fractions of the coarser (149-µm) Silica Dust were added. Data from sensor 6058 showed similar results with an elevated response to coarser material when calibrated with a finer particle-size distribution and a lack of detection when subjected to finer particle-size sediment. Sensor 6010 was also tested for the effect of dissimilar particle size during the calibration and showed little effect. Subsequent testing revealed problems with this sensor, including an inadequate temperature compensation, making these data questionable. The sensor was replaced by Sequoia Scientific with serial number 6039. Results from the extended temperature testing showed proper temperature compensation for sensor 6039, and results from the dissimilar calibration/testing particle-size distribution closely corroborated the results from sensor 6058.
Laboratory characterization of shale pores
NASA Astrophysics Data System (ADS)
Nur Listiyowati, Lina
2018-02-01
To estimate the potential of a shale gas reservoir, one needs to understand the characteristics of its pore structures. Characterization of shale gas reservoir microstructure is still a challenge due to the ultra-fine-grained micro-fabric and micro-level heterogeneity of these sedimentary rocks. The sample used in any analysis is only a small portion of the reservoir, and each measurement technique gives a different result, which raises the question of which methods are suitable for characterizing shale pores. The goal of this paper is to summarize microstructure analysis tools for shale rock that give results close to the real pore system. Pore structures can be analyzed by two kinds of methods: indirect measurement (MIP, He, NMR, LTNA) and direct observation (SEM, TEM, X-ray CT). Shale rocks have a high heterogeneity; thus, multiscale quantification techniques are needed to understand their pore structures. To describe the complex pore system of shale, several measurement techniques are needed to characterize the surface area and pore size distribution (LTNA, MIP), the shape, size and distribution of pores (FIB-SEM, TEM, X-ray CT), and the total porosity (He pycnometer, NMR). The choice of techniques and methods should take into account the purpose of the analysis as well as the available time and budget.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
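The "basic" two-stage scheme described above can be prototyped in a few lines. The sketch below is a simplified stand-in, not the published schemes: the full sample size, the pass/fail prevalence threshold, and the early-stopping margin are assumed values, and lameness scores are treated as independent Bernoulli draws.

```python
import numpy as np

rng = np.random.default_rng(4)

def two_stage(herd_prev, full_n=60, fail_threshold=0.25, margin=0.10, reps=20_000):
    """Compare a basic two-stage sequential scheme with a fixed-size scheme (all numbers assumed)."""
    half = full_n // 2
    n_used = np.empty(reps)
    fail_seq = np.empty(reps, bool)
    fail_fix = np.empty(reps, bool)
    for r in range(reps):
        stage1 = rng.random(half) < herd_prev                 # lameness scores, stage 1
        p1 = stage1.mean()
        if abs(p1 - fail_threshold) > margin:                 # stop early if far from the threshold
            n_used[r], fail_seq[r] = half, p1 > fail_threshold
        else:                                                 # otherwise sample the remaining animals
            stage2 = rng.random(full_n - half) < herd_prev
            p = (stage1.sum() + stage2.sum()) / full_n
            n_used[r], fail_seq[r] = full_n, p > fail_threshold
        fail_fix[r] = (rng.random(full_n) < herd_prev).mean() > fail_threshold
    return n_used.mean(), fail_seq.mean(), fail_fix.mean()

for prev in (0.10, 0.22, 0.28, 0.40):
    n_bar, p_seq, p_fix = two_stage(prev)
    print(f"true prevalence {prev:.2f}: mean n = {n_bar:.1f}, "
          f"P(fail) sequential = {p_seq:.3f}, fixed = {p_fix:.3f}")
```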
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
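The Monte Carlo subsampling used to relate sample size to potential error in mean sap flux can be sketched as follows. The tree-level Fd values are synthetic (lognormal with an assumed coefficient of variation), so the numbers are illustrative only; with the measured 58-tree data set the same resampling logic would apply.

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical stand: 58 trees with lognormally distributed sap flux densities (assumed CV ~ 0.4)
n_trees = 58
fd_all = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n_trees)   # arbitrary units
true_mean = fd_all.mean()

def potential_error(n, reps=20_000):
    """95th percentile of the absolute relative error of mean Fd for sample size n."""
    errs = np.empty(reps)
    for r in range(reps):
        sub = rng.choice(fd_all, size=n, replace=False)
        errs[r] = abs(sub.mean() - true_mean) / true_mean
    return np.percentile(errs, 95)

for n in (5, 10, 15, 20, 30, 40):
    print(f"n = {n:2d}  95% potential relative error in JS = {100 * potential_error(n):.1f}%")
```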
Effectiveness of massage therapy for shoulder pain: a systematic review and meta-analysis.
Yeun, Young-Ran
2017-05-01
[Purpose] This study performed an effect-size analysis of massage therapy for shoulder pain. [Subjects and Methods] The database search was conducted using PubMed, CINAHL, Embase, PsycINFO, RISS, NDSL, NANET, DBpia, and KoreaMed. The meta-analysis was based on 15 studies, covering a total of 635 participants, and used a random effects model. [Results] The effect size estimate showed that massage therapy had a significant effect on reducing shoulder pain for short-term efficacy (SMD: -1.08, 95% CI: -1.51 to -0.65) and for long-term efficacy (SMD: -0.47, 95% CI: -0.71 to -0.23). [Conclusion] The findings from this review suggest that massage therapy is effective at improving shoulder pain. However, further research is needed, especially a randomized controlled trial design or a large sample size, to provide evidence-based recommendations.
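The pooled SMDs reported here come from a random effects model. As one common choice (an assumption, since the abstract does not name the estimator), the sketch below implements DerSimonian-Laird pooling on made-up study effect sizes.

```python
import numpy as np

def dersimonian_laird(d, var):
    """Random-effects pooled effect (DerSimonian-Laird) from study SMDs and their variances."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)         # between-study variance estimate
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# made-up SMDs and variances for illustration only (not the review's data)
smd = [-1.4, -0.9, -0.6, -1.2, -0.8]
v   = [0.10, 0.08, 0.12, 0.15, 0.09]
est, lo, hi = dersimonian_laird(smd, v)
print(f"pooled SMD = {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```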
Mindfulness Meditation for Substance Use Disorders: A Systematic Review
Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan
2009-01-01
Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664
NASA Astrophysics Data System (ADS)
Noor, N. A. W. Mohd; Hassan, H.; Hashim, M. F.; Hasini, H.; Munisamy, K. M.
2017-04-01
This paper presents an investigation of the effects of primary airflow on coal fineness in coal-fired boilers. In a coal-fired power plant, coal is pulverized in a pulverizer and then transferred to the boiler for combustion. Coal needs to be ground to its desired size to obtain maximum combustion efficiency. A coarse coal particle size may lead to many performance problems such as the formation of clinker. In this study, the effects of primary airflow on coal particle size and coal flow distribution were investigated using isokinetic coal sampling and computational fluid dynamic (CFD) modelling. Four different primary airflows were tested and their effects on the resulting coal fineness were recorded. Results show that the optimum coal fineness distribution is obtained at the design primary airflow. Any reduction or increase of the air flow rate results in an undesirable coal fineness distribution.
Siamian, Hasan; Yaminfirooz, Moosa; Dehghan, Zahra; Shahrabi, Afsaneh
2013-01-01
This study seeks to determine the expertise, use, and satisfaction of faculty members of Babol University of Medical Sciences with the online information services provided by the university. The study is a descriptive and analytical survey; information was gathered through a questionnaire, and the sample, based on the Krejcie and Morgan table for sample size determination, was selected through stratified sampling proportionate to the size of the departments, totalling 155 members, of whom 113 responded to the mailed questionnaire. The results show that, among the various data sources such as books, journals, and the Internet, faculty members have easier and more convenient access to the Internet than to other resources; nevertheless, half of the faculty members' information needs (57 respondents, 50.4 percent) are met by printed books. The databases available to the university and used by faculty members are PubMed (76.1%), ScienceDirect (53.1%), and Iranmedex (46.9%). Only 17% of faculty members were fully satisfied with the Internet information services, and more than half of the respondents (58.4%) cited the low speed of the Internet service as the major reason for their dissatisfaction with the provided services. Using the Internet to provide the needed information, with an index of 46%, is a significant issue. Although the Internet was more convenient for acquiring information and access to printed books was hard and limited, faculty members reported that most of their information needs are still met by printed books. The study also showed that the sample's familiarity with and use of the information databases are very low, and only a few respondents are fully satisfied with the provided Internet information services, the foremost reason for this dissatisfaction being the low-speed Internet service at the university.
Uemoto, Yoshinobu; Sasaki, Shinji; Kojima, Takatoshi; Sugimoto, Yoshikazu; Watanabe, Toshio
2015-11-19
Genetic variance that is not captured by single nucleotide polymorphisms (SNPs) is due to imperfect linkage disequilibrium (LD) between SNPs and quantitative trait loci (QTLs), and the extent of LD between SNPs and QTLs depends on different minor allele frequencies (MAF) between them. To evaluate the impact of MAF of QTLs on genomic evaluation, we performed a simulation study using real cattle genotype data. In total, 1368 Japanese Black cattle and 592,034 SNPs (Illumina BovineHD BeadChip) were used. We simulated phenotypes using real genotypes under different scenarios, varying the MAF categories, QTL heritability, number of QTLs, and distribution of QTL effect. After generating true breeding values and phenotypes, QTL heritability was estimated and the prediction accuracy of genomic estimated breeding value (GEBV) was assessed under different SNP densities, prediction models, and population size by a reference-test validation design. The extent of LD between SNPs and QTLs in this population was higher in the QTLs with high MAF than in those with low MAF. The effect of MAF of QTLs depended on the genetic architecture, evaluation strategy, and population size in genomic evaluation. Regarding genetic architecture, genomic evaluation was affected by the MAF of QTLs combined with the QTL heritability and the distribution of QTL effect. The number of QTLs did not affect genomic evaluation when it was more than 50. Regarding evaluation strategy, we showed that different SNP densities and prediction models affect the heritability estimation and genomic prediction and that this depends on the MAF of QTLs. In addition, accurate QTL heritability and GEBV were obtained using denser SNP information and a prediction model that accounted for SNPs with low and high MAFs. Regarding population size, a large sample size is needed to increase the accuracy of GEBV. The MAF of QTLs had an impact on heritability estimation and prediction accuracy. Most genetic variance can be captured using denser SNPs and a prediction model that accounts for MAF, but a large sample size is needed to increase the accuracy of GEBV under all QTL MAF categories.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
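The power calculation behind such a calculator can be sketched by simulating serological data from a reverse catalytic model with a change in seroconversion rate (SCR) and applying a likelihood ratio test against a stable-SCR fit. The seroreversion rate, change point, age range, SCR values, and the chi-squared reference distribution below are assumptions for illustration; the published calculator's logistic power-curve approximation is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(6)
RHO = 0.01          # assumed seroreversion rate (per year)
TAU = 10.0          # assumed change point: years before the survey

def seroprev(age, lam1, lam2, rho=RHO, tau=TAU):
    """Reverse catalytic model with SCR lam1 before the change point and lam2 after it."""
    age = np.asarray(age, float)
    eq2 = lam2 / (lam2 + rho)
    p_young = eq2 * (1 - np.exp(-(lam2 + rho) * age))               # born after the change
    eq1 = lam1 / (lam1 + rho)
    p_at_change = eq1 * (1 - np.exp(-(lam1 + rho) * (age - tau)))   # serostatus at the change point
    p_old = eq2 + (p_at_change - eq2) * np.exp(-(lam2 + rho) * tau)
    return np.where(age <= tau, p_young, p_old)

def negloglik(params, age, sero, model):
    lam = np.exp(params)                        # optimise on the log scale to keep SCR positive
    p = seroprev(age, lam[0], lam[0]) if model == "stable" else seroprev(age, lam[0], lam[1])
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(sero * np.log(p) + (1 - sero) * np.log(1 - p))

def power(n, lam1=0.10, lam2=0.05, reps=100, alpha=0.05):
    """Simulation-based power of the LRT for a reduction in SCR at a known change point."""
    rejections = 0
    for _ in range(reps):
        age = rng.uniform(1, 60, n)
        sero = rng.random(n) < seroprev(age, lam1, lam2)
        ll0 = minimize(negloglik, [np.log(0.08)], args=(age, sero, "stable"),
                       method="Nelder-Mead").fun
        ll1 = minimize(negloglik, [np.log(0.08), np.log(0.08)], args=(age, sero, "change"),
                       method="Nelder-Mead").fun
        if 2 * (ll0 - ll1) > chi2.ppf(1 - alpha, df=1):
            rejections += 1
    return rejections / reps

for n in (250, 500, 1000):
    print(f"n = {n:5d}  estimated power = {power(n):.2f}")
```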
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
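The subsampling experiment described here can be mimicked with synthetic data. The sketch below is not the Alligator mississippiensis data set: it simulates a log-log allometric relationship with an assumed slope, noise level, and size range, then records how often subsamples of different sizes detect a departure from isometry (slope = 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# hypothetical ontogenetic series: log(y) = a + b*log(x) + noise, with mild positive allometry
TRUE_B, NOISE_SD, N_FULL = 1.15, 0.08, 200
logx = np.log(rng.uniform(5, 60, N_FULL))             # e.g. skull lengths (arbitrary units)
logy = 0.2 + TRUE_B * logx + rng.normal(0, NOISE_SD, N_FULL)

def detection_rate(n, reps=2000, alpha=0.05):
    """How often a subsample of size n rejects isometry (slope = 1) in a log-log regression."""
    hits = 0
    for _ in range(reps):
        idx = rng.choice(N_FULL, size=n, replace=False)
        res = stats.linregress(logx[idx], logy[idx])
        t = (res.slope - 1.0) / res.stderr            # test H0: slope = 1
        p = 2 * stats.t.sf(abs(t), df=n - 2)
        hits += p < alpha
    return hits / reps

for n in (8, 12, 20, 40, 80):
    print(f"n = {n:3d}  allometry detected in {100 * detection_rate(n):.0f}% of subsamples")
```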
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids these issues experienced with the aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascending rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that an always unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixel) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L⁻¹, this required about 90- to 4-s sampling times to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of measured particles of around 50 µm and about 80 % of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude with decreasing temperature in the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, directly determined from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC did vary only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and did not show a clear temperature trend.
These measurements are part of an ongoing study.
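Fitting candidate size-distribution functions of the kind used here is straightforward with standard tools. The sketch below fits gamma and log-normal distributions to synthetic particle diameters (the campaign data are not reproduced; the sample is drawn from an assumed gamma distribution with a median near 50 µm) and compares the fits by log-likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# synthetic ice-particle maximum dimensions in micrometres (illustrative only)
diam = rng.gamma(shape=3.0, scale=18.0, size=400)

# fit both candidate size-distribution functions with the location fixed at zero
g_shape, _, g_scale = stats.gamma.fit(diam, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(diam, floc=0)

ll_gamma = stats.gamma.logpdf(diam, g_shape, 0, g_scale).sum()
ll_lognorm = stats.lognorm.logpdf(diam, ln_shape, 0, ln_scale).sum()

print(f"gamma fit:     shape = {g_shape:.2f}, scale = {g_scale:.1f} um, logL = {ll_gamma:.1f}")
print(f"lognormal fit: sigma = {ln_shape:.2f}, median = {ln_scale:.1f} um, logL = {ll_lognorm:.1f}")
print(f"mode of fitted gamma = {(g_shape - 1) * g_scale:.1f} um")
```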
An elutriation apparatus for assessing settleability of combined sewer overflows (CSOs).
Marsalek, J; Krishnappan, B G; Exall, K; Rochfort, Q; Stephens, R P
2006-01-01
An elutriation apparatus was proposed for testing the settleability of combined sewer overflows (CSOs) and applied to 12 CSO samples. In this apparatus, solids settling is measured under dynamic conditions created by flow through a series of settling chambers of varying diameters and upward flow velocities. Such a procedure reproduces turbulent settling in CSO tanks better than conventional settling columns do, and facilitates testing coagulant additions under dynamic conditions. Among the limitations, one could name the relatively large size of the apparatus and samples (60 L), and the inadequate handling of floatables. Settleability results obtained for the elutriation apparatus and a conventional settling column indicate large inter-event variation in CSO settleability. Under such circumstances, settling tanks need to be designed for "average" conditions and, within some limits, the differences in test results produced by various settleability testing apparatuses and procedures may be acceptable. Further development of the elutriation apparatus is under way, focusing on reducing flow velocities in the tubing connecting settling chambers and reducing the number of settling chambers employed. The first measure would reduce the risk of floc breakage in the connecting tubing and the second would reduce the required sample size.
Trochoidal X-ray Vector Radiography: Directional dark-field without grating stepping
NASA Astrophysics Data System (ADS)
Sharma, Y.; Bachche, S.; Kageyama, M.; Kuribayashi, M.; Pfeiffer, F.; Lasser, T.; Momose, A.
2018-03-01
X-ray Vector Radiography (XVR) is an imaging technique that reveals the orientations of sub-pixel sized structures within a sample. Several dark-field radiographs are acquired by rotating the sample around the beam propagation direction and stepping one of the gratings to several positions for every pose of the sample in an X-ray grating interferometry setup. In this letter, we present a method of performing XVR of a continuously moving sample without the need of any grating motion. We reconstruct the orientations within a sample by analyzing the change in the background moire fringes caused by the sample moving and simultaneously rotating in plane (trochoidal trajectory) across the detector field-of-view. Avoiding the motion of gratings provides significant advantages in terms of stability and repeatability, while the continuous motion of the sample makes this kind of system adaptable for industrial applications such as the scanning of samples on a conveyor belt. Being the first step in the direction of utilizing advanced sample trajectories to replace grating motion, this work also lays the foundations for a full three dimensional reconstruction of scattering function without grating motion.
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This applies especially to low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
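For the common case of comparing two means, the principles summarized above reduce to the normal-approximation formula n per group = 2(z(1-α/2) + z(1-β))^2 σ^2 / δ^2. The sketch below implements it; the example numbers are illustrative.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)      # two-sided Type 1 error
    z_beta = norm.ppf(power)               # power = 1 - beta
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# example: detect a difference of 5 units with SD 12, alpha = 0.05, power = 80%
print(n_per_group(delta=5, sd=12))         # -> 91 per group (before any correction)
```

Increasing the variance or shrinking the detectable difference drives n upward, matching the qualitative statements in the abstract.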
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived are discussed.
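One common formulation of sample size for cost-effectiveness analysis (an assumption here, not necessarily the exact formulae reviewed in the article) works on the net monetary benefit scale, NMB = lambda*E - C, and then applies the usual two-group comparison of means, with the variance of NMB expanded in terms of the cost and effect variances and their correlation.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm_cea(wtp, d_effect, d_cost, sd_effect, sd_cost, rho,
                  alpha=0.05, power=0.80):
    """Illustrative per-arm sample size on the net monetary benefit (NMB) scale."""
    d_nmb = wtp * d_effect - d_cost                     # incremental net monetary benefit
    var_nmb = (wtp * sd_effect) ** 2 + sd_cost ** 2 - 2 * wtp * rho * sd_effect * sd_cost
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z ** 2 * var_nmb / d_nmb ** 2)

# illustrative inputs: QALY gain 0.05, extra cost 1,000, willingness to pay 50,000 per QALY,
# SDs of 0.2 QALYs and 4,000 in costs, cost-effect correlation 0.1
print(n_per_arm_cea(wtp=50_000, d_effect=0.05, d_cost=1_000,
                    sd_effect=0.2, sd_cost=4_000, rho=0.1))
```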
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size, based on known information and statistical knowledge, is of great significance. This article introduces methods of sample size estimation for difference tests under a design of one factor with two levels, covering both quantitative and qualitative data, and their realization with the estimation formulas and with the POWER procedure of SAS software. In addition, this article presents worked examples, which can guide researchers in implementing the repetition principle during the research design phase.
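For the qualitative-data case (two independent proportions under the one-factor, two-level design), a normal-approximation counterpart of the SAS POWER computation can be written directly; the sketch below uses the standard pooled/unpooled formula with illustrative response rates.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group_props(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2                               # pooled proportion under H0
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# example: detect an improvement from a 60% to a 75% response rate
print(n_per_group_props(0.60, 0.75))
```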
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is controversial in public discussion. From a biometrical point of view, an optimal sample size should therefore be sought for these projects. Statistical sample size calculation is usually the appropriate methodology when planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Ultrasonic characterization of single drops of liquids
Sinha, D.N.
1998-04-14
Ultrasonic characterization of single drops of liquids is disclosed. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities. 5 figs.
Sampling strategies and biodiversity of influenza A subtypes in wild birds
Olson, Sarah H.; Parmley, Jane; Soos, Catherine; Gilbert, Martin; Latore-Margalef, Neus; Hall, Jeffrey S.; Hansbro, Phillip M.; Leighton, Frank; Munster, Vincent; Joly, Damien
2014-01-01
Wild aquatic birds are recognized as the natural reservoir of avian influenza A viruses (AIV), but across high and low pathogenic AIV strains, scientists have yet to rigorously identify most competent hosts for the various subtypes. We examined 11,870 GenBank records to provide a baseline inventory and insight into patterns of global AIV subtype diversity and richness. Further, we conducted an extensive literature review and communicated directly with scientists to accumulate data from 50 non-overlapping studies and over 250,000 birds to assess the status of historic sampling effort. We then built virus subtype sample-based accumulation curves to better estimate sample size targets that capture a specific percentage of virus subtype richness at seven sampling locations. Our study identifies a sampling methodology that will detect an estimated 75% of circulating virus subtypes from a targeted bird population and outlines future surveillance and research priorities that are needed to explore the influence of host and virus biodiversity on emergence and transmission.
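The subtype sample-based accumulation curves described here can be computed by repeatedly permuting the sample order and counting newly detected subtypes. The detection matrix below is synthetic (an assumed geometric decline in subtype prevalence), so the printed sample size target is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# synthetic detection data: 300 sampled birds x 30 possible subtypes, a few common
# subtypes and a long tail of rare ones (illustrative, not GenBank data)
n_birds, n_subtypes = 300, 30
prevalence = 0.5 ** np.arange(1, n_subtypes + 1)
prevalence = 0.3 * prevalence / prevalence[0]            # most common subtype at 30% prevalence
detections = rng.random((n_birds, n_subtypes)) < prevalence

def accumulation_curve(det, permutations=200):
    """Mean number of subtypes detected vs number of birds sampled (sample-based rarefaction)."""
    n = det.shape[0]
    curves = np.zeros((permutations, n))
    for p in range(permutations):
        order = rng.permutation(n)
        seen = np.cumsum(det[order], axis=0) > 0          # subtype seen in the first k samples?
        curves[p] = seen.sum(axis=1)
    return curves.mean(axis=0)

curve = accumulation_curve(detections)
total = detections.any(axis=0).sum()
n_needed = int(np.argmax(curve >= 0.75 * total)) + 1
print(f"subtypes present in the data set: {total}")
print(f"samples needed to detect ~75% of them (on average): {n_needed}")
```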
NASA Astrophysics Data System (ADS)
Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.
2017-06-01
In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
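The "convex hull" REV diagnostic lends itself to a compact sketch. The snippet below is an illustrative mock-up rather than the authors' workflow: the subsample_properties function stands in for porosity and permeability computed on real sub-volumes (here synthetic values whose scatter shrinks with sub-sample size), and scipy's ConvexHull tracks how the hull area of (porosity, log permeability) points decays as the sub-sample edge length grows.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def subsample_properties(size, n_subsamples=20):
    """Synthetic (porosity, log10 permeability) pairs for cubic sub-volumes of a
    given edge length; the scatter is made to shrink as the sub-volume grows."""
    spread = 1.0 / np.sqrt(size)
    porosity = 0.20 + spread * rng.standard_normal(n_subsamples) * 0.05
    log_k = 2.0 + spread * rng.standard_normal(n_subsamples) * 0.5   # log10 mD
    return np.column_stack([porosity, log_k])

for size in (50, 100, 200, 400, 800):        # sub-sample edge length in voxels
    pts = subsample_properties(size)
    area = ConvexHull(pts).volume            # the 'volume' of a 2-D hull is its area
    print(f"edge = {size:4d} voxels  hull area = {area:.4f}")
```

A small hull area signals that both parameters have converged, which is the criterion the authors use to pick the REV and an efficient voxel size.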
Marizzoni, Moira; Ferrari, Clarissa; Jovicich, Jorge; Albani, Diego; Babiloni, Claudio; Cavaliere, Libera; Didic, Mira; Forloni, Gianluigi; Galluzzi, Samantha; Hoffmann, Karl-Titus; Molinuevo, José Luis; Nobili, Flavio; Parnetti, Lucilla; Payoux, Pierre; Ribaldi, Federica; Rossini, Paolo Maria; Schönknecht, Peter; Soricelli, Andrea; Hensch, Tilman; Tsolaki, Magda; Visser, Pieter Jelle; Wiltfang, Jens; Richardson, Jill C; Bordet, Régis; Blin, Olivier; Frisoni, Giovanni B
2018-06-09
Early Alzheimer's disease (AD) detection using cerebrospinal fluid (CSF) biomarkers has been recommended as an enrichment strategy for trials involving mild cognitive impairment (MCI) patients. The aim was to model a prodromal AD trial in order to identify MRI structural biomarkers that improve subject selection and can be used as surrogate outcomes of disease progression. APOE ɛ4-specific CSF Aβ42/P-tau cut-offs were used to identify MCI with prodromal AD (Aβ42/P-tau positive) in the WP5-PharmaCog (E-ADNI) cohort. Linear mixed models were fitted 1) with baseline structural biomarker, time, and biomarker×time interaction as factors to predict longitudinal changes in ADAS-cog13, 2) with Aβ42/P-tau status, time, and Aβ42/P-tau status×time interaction as factors to explain the longitudinal changes in MRI measures, and 3) to compute sample size estimates for a trial implemented with the selected biomarkers. Only baseline lateral ventricle volume was able to identify a subgroup of prodromal AD patients who declined faster (interaction, p = 0.003). Lateral ventricle volume and medial temporal lobe measures were the biomarkers most sensitive to disease progression (interaction, p≤0.042). Enrichment through ventricular volume reduced the sample size that a clinical trial would require by 13% to 76%, depending on the structural outcome variable. The biomarker requiring the lowest sample size was the hippocampal subfield GC-ML-DG (granule cells of the molecular layer of the dentate gyrus) (n = 82 per arm to demonstrate a 20% atrophy reduction). MRI structural biomarkers can enrich prodromal AD trials with fast progressors and significantly decrease group sizes in clinical trials of disease-modifying drugs.
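A sample-size figure such as "n = 82 per arm to demonstrate a 20% atrophy reduction" typically comes from a standard two-sample calculation on the annualized change of the outcome. The sketch below shows only the generic formula; the mean change and standard deviation are invented placeholders, not values from the study.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
mean_change = -3.0   # assumed annual GC-ML-DG volume change (%) in the placebo arm
sd_change = 4.0      # assumed SD of that annual change
delta = 0.20 * abs(mean_change)   # treatment slows atrophy by 20%

# n per arm for a two-sided two-sample comparison of mean change
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z * sd_change / delta) ** 2
print(round(n_per_arm), "patients per arm (before inflating for dropout)")
```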
Batista, Cristiane B; Carvalho, Márcia L de; Vasconcelos, Ana Glória G
To analyze the factors associated with neonatal mortality related to health services accessibility and use. This was a case-control study of live births in 2008 in small- and medium-sized municipalities in the North, Northeast, and Vale do Jequitinhonha regions, Brazil. A probabilistic sample stratified by region, population size, and information adequacy was generated for the choice of municipalities. Of these, all municipalities with 20,000 inhabitants or fewer were included in the study (36 municipalities), whereas the remainder were selected with probability proportional to population size, totaling 20 cities with 20,001-50,000 inhabitants and 19 municipalities with 50,001-200,000 inhabitants. All neonatal deaths among live births in these cities were included as cases. Controls were randomly sampled at four times the number of cases. The sample size comprised 412 cases and 1772 controls. Hierarchical multiple logistic regression was used for data analysis. The risk factors for neonatal death were socioeconomic class D and E (OR=1.28), history of child death (OR=1.74), high-risk pregnancy (OR=4.03), peregrination in the antepartum period (OR=1.46), lack of prenatal care (OR=2.81), absence of a professional to monitor labor (OR=3.34), excessive waiting time for delivery (OR=1.97), borderline preterm birth (OR=4.09) and malformation (OR=13.66). These results suggest multiple causes of neonatal mortality, as well as the need to improve access to good-quality maternal-child health care services in the study areas. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali
2013-01-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
NASA Astrophysics Data System (ADS)
Smits, K. M.; Sakaki, T.; Limsuwat, A.; Illangasekare, T. H.
2009-05-01
It is widely recognized that liquid water, water vapor and temperature movement in the subsurface near the land/atmosphere interface are strongly coupled, influencing many agricultural, biological and engineering applications such as irrigation practices, the assessment of contaminant transport and the detection of buried landmines. In these systems, a clear understanding of how variations in water content, soil drainage/wetting history, porosity and grain size affect the soil's thermal behavior is needed; however, consideration of all of these factors is rare, as very few experimental data showing the effects of these variations are available. In this study, the effect of soil moisture, drainage/wetting history, and porosity on the thermal conductivity of sandy soils with different grain sizes was investigated. For this experimental investigation, several recent sensor-based technologies were combined in a Tempe cell modified to have a network of sampling ports, allowing continuous monitoring of water saturation, capillary pressure, temperature, and soil thermal properties. The water table was established at the mid elevation of the cell and then lowered slowly. The initially saturated soil sample was subjected to slow drainage, wetting, and secondary drainage cycles. After liquid water drainage ceased, evaporation was induced at the surface to remove soil moisture from the sample and obtain thermal conductivity data below the residual saturation. For the test soils studied, thermal conductivity increased with increasing moisture content, soil density and grain size, while thermal conductivity values were similar between the drying and wetting cycles. Thermal properties measured in this study were then compared with independent estimates made using empirical models from the literature. These soils will be used in a proposed set of experiments in intermediate-scale test tanks to obtain data to validate methods and modeling tools used for landmine detection.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
Doig, Lorne E; Carr, Meghan K; Meissner, Anna G N; Jardine, Tim D; Jones, Paul D; Bharadwaj, Lalita; Lindenschmidt, Karl-Erich
2017-11-01
Across the circumpolar world, intensive anthropogenic activities in the southern reaches of many large, northward-flowing rivers can cause sediment contamination in the downstream depositional environment. The influence of ice cover on concentrations of inorganic contaminants in bed sediment (i.e., sediment quality) is unknown in these rivers, where winter is the dominant season. A geomorphic response unit approach was used to select hydraulically diverse sampling sites across a northern test-case system, the Slave River and delta (Northwest Territories, Canada). Surface sediment samples (top 1 cm) were collected from 6 predefined geomorphic response units (12 sites) to assess the relationships between bed sediment physicochemistry (particle size distribution and total organic carbon content) and trace element content (mercury and 18 other trace elements) during open-water conditions. A subset of sites was resampled under-ice to assess the influence of season on these relationships and on total trace element content. Concentrations of the majority of trace elements were strongly correlated with percent fines and proxies for grain size (aluminum and iron), with similar trace element grain size/grain size proxy relationships between seasons. However, finer materials were deposited under ice with associated increases in sediment total organic carbon content and the concentrations of most trace elements investigated. The geomorphic response unit approach was effective at identifying diverse hydrological environments for sampling prior to field operations. Our data demonstrate the need for under-ice sampling to confirm year-round consistency in trace element-geochemical relationships in fluvial systems and to define the upper extremes of these relationships. Whether contaminated or not, under-ice bed sediment can represent a "worst-case" scenario in terms of trace element concentrations and exposure for sediment-associated organisms in northern fluvial systems. Environ Toxicol Chem 2017;36:2916-2924. © 2017 SETAC.
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, the most commonly used scale is the modified Rankin Score (mRS), for which analysis across a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, the gain in information is counter-balanced by a decrease in reliability. The resulting errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
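The core computation described above, pushing an outcome distribution through an inter-rater "noise" (confusion) matrix and comparing the resulting error for the full ordinal scale versus a dichotomized cut-point, can be sketched in a few lines. All numbers below (the mRS distribution and the uniform 2% confusion rate) are hypothetical stand-ins for the published distributions the authors used.

```python
import numpy as np

# Hypothetical trial-arm distribution over mRS 0-6
p_mrs = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

# Hypothetical inter-rater confusion matrix: row = true grade, column = assigned grade
noise = np.full((7, 7), 0.02)
np.fill_diagonal(noise, 0.0)
np.fill_diagonal(noise, 1.0 - noise.sum(axis=1))   # rows sum to 1

# Full-scale error: probability the assigned grade differs from the true grade
full_scale_error = np.sum(p_mrs * (1.0 - np.diag(noise)))

# Error after dichotomizing at mRS 0-1 vs 2-6: only crossings of the cut count
cut = 2
cross = noise[:cut, cut:].sum(axis=1).tolist() + noise[cut:, :cut].sum(axis=1).tolist()
dichot_error = np.sum(p_mrs * np.array(cross))

print(f"full-scale error {full_scale_error:.1%}, dichotomized error {dichot_error:.1%}")
```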
Performance monitoring in hip fracture surgery--how big a database do we really need?
Edwards, G A D; Metcalfe, A J; Johansen, A; O'Doherty, D
2010-04-01
Systems for collecting information about patient care are increasingly common in orthopaedic practice. Databases can allow various comparisons to be made over time. Significant decisions regarding service delivery and clinical practice may be made based on their results. We set out to determine the number of cases needed for comparison of 30-day mortality, inpatient wound infection rates and mean hospital length of stay, with a power of 80% for the demonstration of an effect at a significance level of p<0.05. We analysed 2 years of prospectively collected data on 1050 hip fracture patients admitted to a city teaching hospital. Detection of a 10% difference in 30-day mortality would require 14,065 patients in each arm of any comparison, demonstration of a 50% difference would require 643 patients in each arm; for wound infections, demonstration of a 10% difference in incidence would require 23,921 patients in each arm and 1127 patients for demonstration of a 50% difference; for length of stay, a difference of 10% would require 1479 patients and 6660 patients for a 50% difference. This study demonstrates the importance of considering the population sizes before comparisons are made on the basis of basic hip fracture outcome data. Our data also help illustrate the impact of sample size considerations when interpreting the results of performance monitoring. Many researchers will be familiar with the fact that rare outcomes such as inpatient mortality or wound infection require large sample sizes before differences can be reliably demonstrated between populations. This study gives actual figures that researchers could use when planning studies. Statistically meaningful analyses will only be possible with major multi-centre collaborations, as will be possible if hospital Trusts participate in the National Hip Fracture Database. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
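Figures like those above come from standard power calculations for comparing two proportions. The sketch below reproduces that type of calculation with statsmodels; the baseline 30-day mortality rate is an assumption, so the output illustrates the method rather than reproducing the paper's exact values.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.09                      # assumed 30-day mortality in the comparison group
for rel_diff in (0.10, 0.50):        # 10% and 50% relative differences
    p2 = baseline * (1 - rel_diff)
    es = proportion_effectsize(baseline, p2)       # Cohen's h
    n = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.80,
                                     alternative='two-sided')
    print(f"{rel_diff:.0%} relative difference: ~{n:.0f} patients per arm")
```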
Using presence of sign to measure habitats used by Roosevelt elk
Weckerly, Floyd W.; Ricca, Mark A.
2000-01-01
Radiotelemetry and pellet-group surveys are methods used commonly to measure habitat use by large ungulates. However, telemetry can be expensive and analysis of data collected from pellet-group surveys is restricted to rank analysis. We explored the feasibility of recording the presence of Roosevelt elk (Cervus elaphus roosevelti) sign to identify habitats used by elk. We surveyed stations (1-ha circular plots) about 0.72 km apart for the presence of 0- to 4-day-old elk sign (tracks and feces) from October to April 1994-1997 at 2 sites in northwestern California. Our objectives were to: 1) measure errors in detecting and classifying elk presence at stations from sign, 2) determine autocorrelation of elk sign at stations to assess what is an independent data point, 3) examine the effect of 2 station sizes on the rate of sign detections, and 4) determine sample sizes needed to detect habitat use. We detected elk sign 96.6% of the time (n=68) when elk were observed at stations within 0-4 days. Elk sign was misclassified only 3 times (n=70). No autocorrelations in sign detections across time or space were detected because observed data were similar to sign generated randomly at stations. The proportion of 1-ha (0.12) and 2-ha stations (0.13) with sign was similar. Sample sizes >400 were needed to have power >0.8 to detect relationships among habitat variables and frequency of sign at stations. Recording the presence of sign in stations appears to be a reliable and feasible technique to measure habitats used by elk.
An overview of the characterization of occupational exposure to nanoaerosols in workplaces
NASA Astrophysics Data System (ADS)
Castellano, Paola; Ferrante, Riccardo; Curini, Roberta; Canepari, Silvia
2009-05-01
Currently, there is a lack of standardized sampling and metric methods that can be applied to measure the level of exposure to nanosized aerosols. Therefore, any attempt to characterize exposure to nanoparticles (NP) in a workplace must involve a multifaceted approach characterized by different sampling and analytical techniques to measure all relevant characteristics of NP exposure. Furthermore, as NP aerosols are always complex mixtures of multiple origins, sampling and analytical methods need to be improved to selectively evaluate the apportionment from specific sources to the final nanomaterials. An open question at the world's level is how to relate specific toxic effects of NP with one or more among several different parameters (such as particle size, mass, composition, surface area, number concentration, aggregation or agglomeration state, water solubility and surface chemistry). As the evaluation of occupational exposure to NP in workplaces needs dimensional and chemical characterization, the main problem is the choice of the sampling and dimensional separation techniques. Therefore a convenient approach to allow a satisfactory risk assessment could be the contemporary use of different sampling and measuring techniques for particles with known toxicity in selected workplaces. Despite the lack of specific NP exposure limit values, exposure metrics, appropriate to nanoaerosols, are discussed in the Technical Report ISO/TR 27628:2007 with the aim to enable occupational hygienists to characterize and monitor nanoaerosols in workplaces. Moreover, NIOSH has developed the Document Approaches to Safe Nanotechnology (intended to be an information exchange with NIOSH) in order to address current and future research needs to understanding the potential risks that nanotechnology may have to workers.
Sampling and Data Gathering Strategies for Future USAF Anthropometry
1976-02-01
of USAF body size data. The approach we suggest would be less costly and more responsive to the needs of the USAF than periodic massive surveys...has been that many of these photographs were taken primarily for somatotyping rather than for measurement. Another source of difficulty has been...goals and we have recently accepted responsibility under an AMRL research contract to demonstrate that this is so. Of all the non-standard
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
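The modeling idea, reducing many correlated LiDAR predictors through a singular value decomposition and regularizing the resulting linear model, can be sketched compactly. The snippet below is not the authors' implementation: the data are synthetic, the regularization strength is arbitrary, and a plain ridge-via-SVD solution stands in for their Bayesian formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_plots, n_pred = 50, 80                      # fewer field plots than LiDAR predictors
X = rng.standard_normal((n_plots, n_pred))
# Make half of the predictors nearly collinear with the other half
X[:, 1::2] = X[:, ::2] + 0.1 * rng.standard_normal((n_plots, n_pred // 2))
beta_true = np.zeros(n_pred)
beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n_plots)   # e.g. plot-level stem volume

U, s, Vt = np.linalg.svd(X, full_matrices=False)
lam = 1.0                                      # regularization strength (assumed)
# Ridge solution expressed through the SVD: beta = V diag(s / (s^2 + lam)) U^T y
beta_hat = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))
print("training RMSE:", np.sqrt(np.mean((X @ beta_hat - y) ** 2)))
```

The SVD step damps the small singular values responsible for multicollinearity, which is what lets a model fitted on roughly 50 plots remain usable.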
Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang
2017-01-01
Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural networks (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are applied in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands alone, and the results were significantly better than those obtained using single-band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
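The sensitivity-to-sample-size comparison can be emulated with off-the-shelf regressors. The sketch below uses scikit-learn stand-ins (an MLP for the BPNN, SVR, and an RBF kernel ridge in place of GRNN/RBFN, none of which are the authors' exact models) on synthetic reflectance-to-LAI data, and simply reports test RMSE as the training sample size grows.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 3000
refl = rng.uniform(0.0, 0.6, size=(n, 3))            # red, NIR, SWIR reflectance (synthetic)
lai = 4.0 * refl[:, 1] - 3.0 * refl[:, 0] + refl[:, 2] + 0.2 * rng.standard_normal(n)
X_test, y_test = refl[2000:], lai[2000:]

models = {
    "MLP (BPNN-like)": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "SVR": SVR(),
    "KernelRidge (RBF)": KernelRidge(kernel="rbf", alpha=0.1),
}

for size in (50, 200, 1000):                          # training sample sizes
    Xtr, ytr = refl[:size], lai[:size]
    rmse = {name: mean_squared_error(y_test, m.fit(Xtr, ytr).predict(X_test)) ** 0.5
            for name, m in models.items()}
    print(size, {k: round(v, 3) for k, v in rmse.items()})
```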
Engaging workplace representatives in research: what recruitment strategies work best?
Coole, C; Nouri, F; Narayanasamy, M; Baker, P; Khan, S; Drummond, A
2018-05-23
Workplaces are key stakeholders in work and health but little is known about the methods used to recruit workplace representatives (WRs), including managers, occupational health advisers and colleagues, to externally funded healthcare research studies. To detail the strategies used in recruiting WRs from three areas of the UK to a qualitative study concerning their experience of employees undergoing hip or knee replacement, to compare the strategies and inform recruitment methods for future studies. Six strategies were used to recruit WRs from organizations of different sizes and sectors. Data on numbers approached and responses received were analysed descriptively. Twenty-five WRs were recruited. Recruitment had to be extended outside the main three study areas, and took several months. It proved more difficult to recruit from non-service sectors and small- and medium-sized enterprises. The most successful strategies were approaching organizations that had participated in previous research studies, or known professionally or personally to team members. Recruiting a diverse sample of WRs to healthcare research requires considerable resources and persistence, and a range of strategies. Recruitment is easier where local relationships already exist; the importance of building and maintaining these relationships cannot be underestimated. However, the potential risks of bias and participant fatigue need to be acknowledged and managed. Further studies are needed to explore how WRs can be recruited to health research, and to identify the researcher effort and costs involved in achieving unbiased and representative samples.
Essers, Geurt; van Dulmen, Sandra; van Es, Judy; van Weel, Chris; van der Vleuten, Cees; Kramer, Anneke
2013-12-01
Acquiring adequate communication skills is an essential part of general practice (GP) specialty training. In assessing trainee proficiency, the context in which trainees communicate is usually not taken into account. The present paper aims to explore what context factors can be found in regular GP trainee consultations and how these influence their communication performance. In a randomly selected sample of 44 videotaped, real-life GP trainee consultations, we searched for context factors previously identified in GP consultations and explored how trainee ratings change if context factors are taken into account. Trainee performance was rated twice using the MAAS-Global, first without and then with incorporating context factors. Item score differences were calculated using a paired samples t-test and effect sizes were computed. All previously identified context factors were again observed in GP trainee consultations. In communication assessment scores, we found a significant difference in 5 out of 13 MAAS-Global items, mostly in a positive direction. The effect size was moderate (0.57). GP trainee communication is influenced by contextual factors; they seem to adapt to context in a professional way. GP specialty training needs to focus on a context-specific application of communication skills. Communication raters need to be taught how to incorporate context factors into their assessments. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
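The conical geometry described above is easy to visualize numerically. The sketch below simulates a single-spike covariance model and reports the angle between the leading sample eigenvector and the population spike direction as the dimension grows relative to the product of sample size and spike size; all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def eigvec_angle(d, n, spike):
    u = np.zeros(d)
    u[0] = 1.0                                    # population spike direction
    # X has covariance I + spike * u u^T (isotropic noise plus a rank-one spike)
    X = rng.standard_normal((n, d)) + np.sqrt(spike) * rng.standard_normal((n, 1)) * u
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    cos = min(abs(Vt[0] @ u), 1.0)                # leading sample eigendirection
    return np.degrees(np.arccos(cos))

for d, n, spike in [(100, 200, 50.0), (2000, 20, 50.0), (20000, 20, 50.0)]:
    ratio = d / (n * spike)
    print(f"d={d:6d} n={n:3d} d/(n*spike)={ratio:5.2f}: "
          f"angle to population eigenvector ~ {eigvec_angle(d, n, spike):5.1f} deg")
```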
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
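The simulation idea can be reproduced outside a GIS. The sketch below is a simplified stand-in for the authors' framework: benthic items are placed at random (the clumped case is omitted), circular cores are dropped into the plot, and the bias, precision, and per-core detection probability of the density estimate are tracked as the number of cores changes. Densities and core area are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)
true_density = 500.0         # benthic items per m^2 (assumed)
plot_area = 10.0             # m^2 simulated plot
core_area = 0.002            # m^2 per core sample (about 20 cm^2)
side = np.sqrt(plot_area)
radius = np.sqrt(core_area / np.pi)

def one_survey(n_cores):
    """Counts of items captured by each of n_cores randomly placed circular cores."""
    xy = rng.uniform(0, side, size=(rng.poisson(true_density * plot_area), 2))
    centers = rng.uniform(radius, side - radius, size=(n_cores, 2))
    return np.array([(np.sum((xy - c) ** 2, axis=1) < radius ** 2).sum()
                     for c in centers])

for n_cores in (5, 20, 80):
    surveys = [one_survey(n_cores) for _ in range(200)]
    dens = [s.sum() / (n_cores * core_area) for s in surveys]
    detect = np.mean([np.mean(s > 0) for s in surveys])
    print(f"{n_cores:3d} cores: mean density {np.mean(dens):6.0f}/m^2, "
          f"CV {np.std(dens) / np.mean(dens):.2f}, per-core detection {detect:.2f}")
```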
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
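The contrast between simple random sampling and stratified sampling with optimal allocation can be written out directly from the classical variance formulas (ignoring finite-population corrections). The stratum weights, standard deviations, and means below are invented for illustration and are not the study's values.

```python
import numpy as np

target_se = 0.5                       # target standard error of mean soil moisture (%)
W = np.array([0.3, 0.5, 0.2])         # stratum area weights (assumed)
S = np.array([2.0, 3.5, 5.0])         # within-stratum SDs (assumed)
means = np.array([10.0, 14.0, 20.0])  # stratum means (assumed)

# Simple random sampling: the overall variance includes between-stratum spread
overall_var = np.sum(W * (S**2 + (means - np.sum(W * means))**2))
n_srs = overall_var / target_se**2

# Stratified sampling with Neyman allocation: Var(mean) = (sum W_h S_h)^2 / n
n_strat = (np.sum(W * S))**2 / target_se**2
n_h = n_strat * (W * S) / np.sum(W * S)   # plots allocated to each stratum

print(f"simple random sampling: n ~ {n_srs:.0f}")
print(f"stratified (Neyman):    n ~ {n_strat:.0f}, per stratum {np.round(n_h).astype(int)}")
```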
Yoon, Richard K; Chussid, Steven
2009-01-01
The purpose of this prospective study was to compare the efficacy of Oraqix to benzocaine while placing a rubber dam clamp during sealant placement on children. A sample of 45 patients aged 7 to 12 years who presented for bilateral sealants on permanent first molars participated in this study. A split-mouth design was implemented, with Oraqix applied to one side and 20 percent benzocaine gel applied to the other. After placement of the topical anesthetic and the rubber dam clamp, patients completed a Faces Pain Scale (FPS) to rate the level of discomfort after clamp placement. Twenty-seven subjects (60%) were female and 18 subjects (40%) were male; 15 (33%) were younger than 9 years old and 30 (67%) were at least 9 years old. The overall difference in mean FPS ratings was not statistically significant (P = .27). Regarding gender, there was no statistically significant difference in males (P = .65) or females (P = .26). There was also no difference in mean FPS ratings in the age group younger than 9 years old (P = .77). In the 9-years-and-older age group, however, there was a statistically significant difference (P = .04). Application of Oraqix did not reduce discomfort when compared to benzocaine in this small sample. Oraqix was more effective than benzocaine in the age group 9 and older. A larger sample size is needed to determine its efficacy in children younger than 9 years old.
Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen
2016-01-01
Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies and, therefore, sensitive surrogate markers are needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard. The sensitivity of the various parameters that can be derived from DTI is, however, unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We followed 99 patients with symptomatic SVD (defined as a clinical lacunar syndrome with MRI confirmation of a corresponding infarct, together with confluent white matter hyperintensities) over a 3-year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over the three-year follow-up period we observed a decline in fractional anisotropy and an increase in diffusivity in white matter tissue, and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker for SVD progression, as it had the smallest sample size estimate. This suggests that disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI's potential as a surrogate marker for SVD. PMID:26808982
Bayesian methods for the design and interpretation of clinical trials in very rare diseases
Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul
2014-01-01
This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24957522
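The Beta-Binomial machinery behind such a design fits in a few lines. The sketch below uses hypothetical elicited priors, hypothetical trial data, and an assumed 10% non-inferiority margin purely to illustrate how a posterior probability of non-inferiority would be computed for a tiny trial; it is not the authors' elicitation or model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Elicited Beta(a, b) priors (assumed): control fairly well known, experimental vague
prior_ctrl, prior_exp = (8, 12), (2, 2)

# Hypothetical data from a 30-patient trial with 2:1 allocation to experimental
succ_exp, n_exp = 13, 20
succ_ctrl, n_ctrl = 4, 10

draws = 100_000
post_exp = rng.beta(prior_exp[0] + succ_exp, prior_exp[1] + n_exp - succ_exp, size=draws)
post_ctrl = rng.beta(prior_ctrl[0] + succ_ctrl, prior_ctrl[1] + n_ctrl - succ_ctrl, size=draws)

margin = 0.10   # assumed non-inferiority margin
p_noninferior = np.mean(post_exp > post_ctrl - margin)
print(f"P(experimental success rate not worse than control by >{margin:.0%}): {p_noninferior:.2f}")
```

With samples this small, the same comparison could also be done exactly by summing over all possible outcomes, which is the point the abstract makes about computing all possible posterior distributions.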
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda owing to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
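The bias mechanism (Jensen's inequality acting on the dominant eigenvalue) can be demonstrated with a toy two-stage matrix. Everything below, the vital rates, the matrix structure, and the sample sizes, is an illustrative assumption rather than the paper's plant data.

```python
import numpy as np

rng = np.random.default_rng(7)

surv_juv, surv_adult, fecundity = 0.5, 0.8, 1.2      # "true" vital rates (assumed)

def projection_matrix(s_j, s_a, f):
    return np.array([[0.0, f],
                     [s_j, s_a]])

lam_true = np.max(np.linalg.eigvals(projection_matrix(surv_juv, surv_adult, fecundity)).real)

def estimated_lambda(n):
    """Dominant eigenvalue from vital rates estimated on n individuals per stage."""
    s_j = rng.binomial(n, surv_juv) / n
    s_a = rng.binomial(n, surv_adult) / n
    f = rng.poisson(fecundity * n) / n               # mean offspring per adult
    return np.max(np.linalg.eigvals(projection_matrix(s_j, s_a, f)).real)

for n in (10, 25, 100, 500):
    lams = [estimated_lambda(n) for _ in range(2000)]
    print(f"n={n:4d}: mean lambda-hat {np.mean(lams):.3f} "
          f"(true {lam_true:.3f}), bias {np.mean(lams) - lam_true:+.3f}")
```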
Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.
2017-11-01
The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.
Efficient Sample Tracking With OpenLabFramework
List, Markus; Schmidt, Steffen; Trojnar, Jakub; Thomas, Jochen; Thomassen, Mads; Kruse, Torben A.; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan
2014-01-01
The advance of new technologies in biomedical research has led to a dramatic growth in experimental throughput. Projects therefore steadily grow in size and involve a larger number of researchers. Spreadsheets traditionally used are thus no longer suitable for keeping track of the vast amounts of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable and aids productivity further through support for mobile devices and barcoded labels. PMID:24589879
Baixench, M T; Al-Sheikh, M; Paugam, A
2005-01-01
The study included 37 urine samples that had been artificially inoculated with low levels (10(3) CFU/mL) of various fungal strains. We compared the effects of sample storage for up to 48 hours at room temperature in an evacuated urine tube containing specific additives with storage at +4 degrees C, for the same length of time, in an evacuated urine tube without any additives. There were no differences in results (speed of growth and colony size) between the 2 modes of storage. However, the experiment showed that samples needed careful mixing before seeding to avoid underdetection of the strains. Based on the study results, the BD Vacutainer C&S tubes are suitable for delayed testing in the diagnosis of urinary fungal infection.
Micromachined chemical jet dispenser
Swierkowski, S.P.
1999-03-02
A dispenser is disclosed for chemical fluid samples that need to be precisely ejected in size, location, and time. The dispenser is a micro-electro-mechanical systems (MEMS) device fabricated in a bonded silicon wafer and a substrate, such as glass or silicon, using integrated circuit-like fabrication technology which is amenable to mass production. The dispensing is actuated by ultrasonic transducers that efficiently produce a pressure wave in capillaries that contain the chemicals. The 10-200 µm diameter capillaries can be arranged to focus in one spot or may be arranged in a larger dense linear array (ca. 200 capillaries). The dispenser is analogous to some ink jet print heads for computer printers but the fluid is not heated, thus not damaging certain samples. Major applications are in biological sample handling and in analytical chemical procedures such as environmental sample analysis, medical lab analysis, or molecular biology chemistry experiments. 4 figs.
Micromachined chemical jet dispenser
Swierkowski, Steve P.
1999-03-02
A dispenser for chemical fluid samples that need to be precisely ejected in size, location, and time. The dispenser is a micro-electro-mechanical systems (MEMS) device fabricated in a bonded silicon wafer and a substrate, such as glass or silicon, using integrated circuit-like fabrication technology which is amenable to mass production. The dispensing is actuated by ultrasonic transducers that efficiently produce a pressure wave in capillaries that contain the chemicals. The 10-200 µm diameter capillaries can be arranged to focus in one spot or may be arranged in a larger dense linear array (~200 capillaries). The dispenser is analogous to some ink jet print heads for computer printers but the fluid is not heated, thus not damaging certain samples. Major applications are in biological sample handling and in analytical chemical procedures such as environmental sample analysis, medical lab analysis, or molecular biology chemistry experiments.
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. A consistent quality management system marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
Interim analysis: A rational approach of decision making in clinical trial.
Kumar, Amal; Chakraborty, Bhaswat S
2016-01-01
Interim analysis of especially sizeable trials keeps the decision process free of conflict of interest while considering cost, resources, and the meaningfulness of the project. Whenever necessary, such interim analysis can also call for potential termination or appropriate modification of the sample size or study design, or even an early declaration of success. Given the extraordinary size and complexity of trials today, this rational approach helps to analyze and predict the outcomes of a clinical trial by incorporating what is learned during the course of a study or a clinical development program. Such an approach can also help close the gap between unmet medical needs and the interventions currently being tested by directing resources toward relevant, optimized clinical trials rather than fulfilling only business and profit goals.
Perception of dental appearance using Index of Treatment Need (Aesthetic Component) assessments.
Abdullah, M S B; Rock, W P
2002-09-01
To compare assessments of malocclusion made by an orthodontist with the perceptions of children and their parents. A sample of 5,112 Malaysian schoolchildren was selected by a system of stratified random sampling based on state, ethnicity and gender. Each child was first allocated an IOTN (AC) grade by an orthodontist, after which the child and then the parents also recorded a grade. A smaller sub-sample of 720 children was also asked to identify the three worst AC pictures and to give reasons for their choice. The orthodontist scored 22.8% of the subjects in AC grades 8-10, 'Definite Need for Treatment' whilst only 5.8% of children and 4.8% of parents recorded these grades. If AC grade 6 is taken as the cut off point the proportions needing treatment would be 41.8%, 9.7% and 9.9% respectively. Similar proportions of boys and girls scored their own teeth in the 8-10 range but more girls than boys scored themselves in grades 1-3, 'No Need for Treatment'. Ethnic origin had no effect upon the perception of malocclusion by the children. Crowding, deep bite and tooth size were the three occlusal features that children liked least. The IOTN (AC) index appears robust in its reflection of the perception of malocclusion by children and parents respectively. Assessments were little affected by gender or ethnicity. However the scores of children and parents were much lower than those of an orthodontist trained in the use of IOTN.
Chefs' opinions of restaurant portion sizes.
Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J
2007-08-01
The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatters obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Kellner, Elliott; Hubbart, Jason A
2017-11-15
Given the importance of suspended sediment to biogeochemical functioning of aquatic ecosystems, and the increasing concern of mixed-land-use effects on pollutant loading, there is an urgent need for research that quantitatively characterizes spatiotemporal variation of suspended sediment dynamics in contemporary watersheds. A study was conducted in a representative watershed of the central United States utilizing a nested-scale experimental watershed design, including five gauging sites (n=5) partitioning the catchment into five sub-watersheds. Hydroclimate stations at gauging sites were used to monitor air temperature, precipitation, and stream stage at 30-min intervals during the study (Oct. 2009-Feb. 2014). Streamwater grab samples were collected four times per week, at each site, for the duration of the study (Oct. 2009-Feb. 2014). Water samples were analyzed for suspended sediment using laser particle diffraction. Results showed significant differences (p<0.05) between monitoring sites for total suspended sediment concentration, mean particle size, and silt volume. Total concentration and silt volume showed a decreasing trend from the primarily agricultural upper watershed to the urban mid-watershed, and a subsequent increasing trend to the more suburban lower watershed. Conversely, mean particle size showed an opposite spatial trend. Results are explained by a combination of land use (e.g. urban stormwater dilution) and surficial geology (e.g. supply-controlled spatial variation of particle size). Correlation analyses indicated weak relationships with both hydroclimate and land use, indicating non-linear sediment dynamics. Suspended sediment parameters displayed consistent seasonality during the study, with total concentration decreasing through the growing season and mean particle size inversely tracking air temperature. Likely explanations include vegetation influences and climate-driven weathering cycles. Results reflect unique observations of spatiotemporal variation of suspended sediment particle size class. Such information is crucial for land and water resource managers working to mitigate aquatic ecosystem degradation and improve water resource sustainability in mixed-land-use watersheds globally. Copyright © 2017 Elsevier B.V. All rights reserved.
Hobin, E; Sacco, J; Vanderlee, L; White, C M; Zuo, F; Sheeshka, J; McVey, G; Fodor O'Brien, M; Hammond, D
2015-12-01
Given the proposed changes to nutrition labelling in Canada and the dearth of research examining comprehension and use of nutrition facts tables (NFts) by adolescents and young adults, our objective was to experimentally test the efficacy of modifications to NFts on young Canadians' ability to interpret, compare and mathematically manipulate nutrition information in NFts on prepackaged food. An online survey was conducted among 2010 Canadians aged 16 to 24 years drawn from a consumer sample. Participants were randomized to view two NFts according to one of six experimental conditions, using a between-groups 2 × 3 factorial design: serving size (current NFt vs. standardized serving sizes across similar products) × percent daily value (% DV) (current NFt vs. "low/med/high" descriptors vs. colour coding). The survey included seven performance tasks requiring participants to interpret, compare and mathematically manipulate nutrition information on NFts. Separate modified Poisson regression models were fitted for each of the three outcomes. The ability to compare two similar products was significantly enhanced in NFt conditions that included standardized serving sizes (p ≤ .001 for all). Adding descriptors or colour coding of % DV next to calories and nutrients on NFts significantly improved participants' ability to correctly interpret % DV information (p ≤ .001 for all). Providing both standardized serving sizes and descriptors of % DV had a modest effect on participants' ability to mathematically manipulate nutrition information to calculate the nutrient content of multiple servings of a product (relative ratio = 1.19; 95% confidence interval: 1.04-1.37). Standardizing serving sizes and adding interpretive % DV information on NFts improved young Canadians' comprehension and use of nutrition information. Some caution should be exercised in generalizing these findings to all Canadian youth due to the sampling issues associated with the study population. Further research is needed to replicate this study in a more heterogeneous sample in Canada and across a range of food products and categories.
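The "modified Poisson regression" referred to above is commonly implemented as a Poisson GLM with robust (sandwich) standard errors applied to a binary outcome, so the exponentiated coefficients can be read as relative ratios. The sketch below illustrates that idea on synthetic data, with invented condition names, effect sizes and a main-effects-only formula rather than the study's actual variables and model.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2010
# Synthetic stand-in for the survey: condition assignments and a binary
# "interpreted % DV correctly" outcome; names and effects are invented.
df = pd.DataFrame({
    "serving_cond": rng.integers(0, 2, n),   # 0 = current, 1 = standardized
    "dv_cond": rng.integers(0, 3, n),        # 0 = current, 1 = descriptors, 2 = colour
})
p = 0.5 + 0.05 * df["serving_cond"] + 0.1 * (df["dv_cond"] > 0)
df["correct"] = rng.binomial(1, p)

# Poisson GLM on a binary outcome with robust (HC0) standard errors.
fit = smf.glm("correct ~ C(serving_cond) + C(dv_cond)", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))    # exponentiated coefficients = relative ratios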
Implications of sampling design and sample size for national carbon accounting systems.
Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel
2011-11-08
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Because of the extensive areas covered by forests, this information is generally obtained from sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and two-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives combine in-situ and earth-observation data. For different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error was calculated relative to total survey cost. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency across the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and in the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties associated with the quantification of carbon stocks and to increase the financial benefits of adopting a REDD regime.
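A rough illustration of the cost-efficiency trade-off described above: under simple random sampling the whole budget buys field plots, whereas a regression estimator spends part of the budget on imagery but shrinks the variance by roughly a factor of (1 - rho^2). All cost figures, the coefficient of variation and the correlation below are invented, and the formulas ignore finite-population corrections and the stratified and two-phase designs.

import math

def pct_se_srs(cv, n):
    # Percent standard error of the mean under simple random sampling.
    return 100.0 * cv / math.sqrt(n)

def pct_se_regression(cv, n, rho):
    # Approximate percent SE when a regression estimator exploits an
    # auxiliary remote-sensing variable correlated at rho with field data.
    return 100.0 * cv * math.sqrt(1.0 - rho**2) / math.sqrt(n)

budget = 500_000        # total survey budget (hypothetical units)
cost_plot = 1_000       # cost per in-situ field plot (hypothetical)
cost_imagery = 150_000  # fixed cost of wall-to-wall imagery (hypothetical)
cv, rho = 0.9, 0.8      # population coefficient of variation, correlation

n_srs = budget // cost_plot
n_reg = (budget - cost_imagery) // cost_plot
print(f"SRS:        n={n_srs}, SE% = {pct_se_srs(cv, n_srs):.2f}")
print(f"Regression: n={n_reg}, SE% = {pct_se_regression(cv, n_reg, rho):.2f}")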
76 FR 56141 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
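The regression equations themselves are not reproduced in this annotation, but the underlying question can be illustrated with a small Monte Carlo experiment showing how the spread of the sample squared multiple correlation around its population value narrows as n grows; this sketch is an assumption-laden stand-in (independent standard-normal predictors, equal weights), not Algina and Olejnik's method.

import numpy as np

def r2_interval(n, k, pop_r2, reps=2000, seed=0):
    # Empirical 2.5th/97.5th percentiles of the sample R^2 around a known
    # population R^2, for n observations and k standard-normal predictors.
    rng = np.random.default_rng(seed)
    beta = np.sqrt(pop_r2 / k)        # equal weights yielding pop_r2
    estimates = []
    for _ in range(reps):
        X = rng.standard_normal((n, k))
        y = X.sum(axis=1) * beta + rng.standard_normal(n) * np.sqrt(1 - pop_r2)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        estimates.append(1.0 - resid.var() / y.var())
    return np.percentile(estimates, [2.5, 97.5])

for n in (50, 100, 200, 400):
    lo, hi = r2_interval(n, k=5, pop_r2=0.25)
    print(f"n={n}: sample R^2 spans roughly {lo:.3f} to {hi:.3f}")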
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
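Of the four methods compared, lexicase selection is the easiest to show compactly: to pick a parent, the fitness cases are shuffled and the candidate pool is filtered case by case, keeping only individuals with the best error on each case until one (or a tie set) remains. The sketch below is a generic implementation with an invented error-matrix layout, not the code used in the study.

import random

def lexicase_select(errors):
    # errors[i][j] = error of individual i on fitness case j (lower is better).
    candidates = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)   # break remaining ties at random

# Example: 4 individuals evaluated on 3 fitness cases.
errs = [[0.1, 0.5, 0.2],
        [0.1, 0.4, 0.9],
        [0.3, 0.4, 0.2],
        [0.1, 0.4, 0.2]]
print(lexicase_select(errs))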
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction; however, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron inelastic mean free path plays a significant role in modeling the image formation process, which is essential for simulating electron microscopy images and for model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been measured experimentally for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, that simultaneously determines the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem that exploits the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we also found that nearly all samples are tilted a few degrees relative to the electron beam. Compensating for the intrinsic sample tilt can result in horizontal structures and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve the performance of tomographic reconstruction for a wide range of samples. PMID:26433027
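A schematic of the Beer-Lambert relation the method builds on: image intensity falls off as exp(-t / (λ cos(θ + θ0))), where t is the thickness, λ the inelastic mean free path and θ0 the intrinsic sample tilt. The sketch below fits only the ratio t/λ and the tilt offset to a synthetic tilt series; separating thickness from the mean free path requires the additional constraints exploited by tomoThickness, so this is an illustration of the principle rather than the published algorithm.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, theta_rad, intensity):
    # ratio = thickness / inelastic mean free path; tilt0 = intrinsic tilt.
    i0, ratio, tilt0 = params
    return intensity - i0 * np.exp(-ratio / np.cos(theta_rad + tilt0))

# Synthetic tilt series: mean intensity per image, with t/mfp = 0.5 and an
# intrinsic sample tilt of 3 degrees relative to the beam (values invented).
theta = np.deg2rad(np.arange(-60, 61, 2, dtype=float))
truth = (1.0, 0.5, np.deg2rad(3.0))
intensity = truth[0] * np.exp(-truth[1] / np.cos(theta + truth[2]))

fit = least_squares(residuals, x0=[0.9, 0.3, 0.0], args=(theta, intensity))
i0, ratio, tilt0 = fit.x
print(f"t/mfp = {ratio:.3f}, intrinsic tilt = {np.degrees(tilt0):.2f} deg")
# With an independently known mean free path, thickness = ratio * mfp.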