Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
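A quick way to see the size of the adjustment described above is the standard design effect for variable cluster sizes, DE = 1 + (m_w - 1) x ICC, with m_w the size-weighted mean cluster size; for a mix of singletons and twin pairs this reduces to DE = 1 + pi x ICC, where pi is the proportion of infants who are twins. The sketch below (Python) is a generic illustration of that approximation, not necessarily the exact method implemented in the authors' Excel/Shiny calculator; the effect size, ICC and twin proportion are made-up planning values.

```python
import math
from scipy.stats import norm

def n_independent(delta, sd, alpha=0.05, power=0.8):
    """Per-group n for a two-sample mean comparison (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def inflate_for_twins(n, icc, prop_twins):
    """Apply the design effect 1 + pi*ICC for clusters of size 1 and 2."""
    return math.ceil(n * (1 + prop_twins * icc))

n0 = n_independent(delta=0.5, sd=1.0)                   # ~63 infants per group
print(inflate_for_twins(n0, icc=0.7, prop_twins=0.2))   # ~72 per group after inflation
```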
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
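To make the power bands above concrete, the normal-approximation sample size for a two-arm trial is n per arm ≈ 2(z_{1-α/2} + z_{power})²/d², where d is the standardized mean difference. A minimal sketch (Python; two-sided alpha = 0.05 and 80% power are assumptions, as they are not stated in the abstract):

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.8):
    """Normal-approximation n per arm to detect standardized mean difference d."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / d) ** 2)

print(n_per_arm(0.5))  # 63 per arm, ~126 in total
print(n_per_arm(0.3))  # 175 per arm, ~350 in total
```

Against these figures, the reported average of 153 participants per trial sits well below what is needed to detect a standardized mean difference of 0.3.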
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal or continuous exposure. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review
Kristunas, Caroline; Morris, Tom; Gray, Laura
2017-11-15
Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a one-factor, two-level design, including the sample size estimation formulas and their realization both directly from the formulas and via the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
Determining the Population Size of Pond Phytoplankton.
ERIC Educational Resources Information Center
Hummer, Paul J.
1980-01-01
Discusses methods for determining the population size of pond phytoplankton, including water sampling techniques, laboratory analysis of samples, and additional studies worthy of investigation in class or as individual projects. (CS)
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
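For orientation, the simplest special case of the inflation described above — a parallel cluster randomised trial with a single cross-section — uses the familiar design effect 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation; the paper's formulae for stepped wedge and cohort designs additionally involve the cluster and individual autocorrelations. A sketch of that special case only (Python, with illustrative parameter values):

```python
from math import ceil
from scipy.stats import norm

def n_cluster_trial(delta, sd, cluster_size, icc, alpha=0.05, power=0.8):
    """Per-arm n: individually randomised n inflated by DEFF = 1 + (m-1)*icc."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = 2 * (z * sd / delta) ** 2
    return ceil(n_ind * (1 + (cluster_size - 1) * icc))

print(n_cluster_trial(delta=0.5, sd=1.0, cluster_size=20, icc=0.05))  # 123 vs 63 unclustered
```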
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is generally required for the re-estimation procedure to meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated, with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, based on 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Methodological quality of behavioural weight loss studies: a systematic review
Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.
2018-01-01
This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults, and associations between quality and a statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean number of indicators met = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature on behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775
Aerosol mobility size spectrometer
Wang, Jian; Kulkarni, Pramod
2007-11-20
A device for measuring aerosol size distribution within a sample containing aerosol particles. The device generally includes a spectrometer housing defining an interior chamber and a camera for recording aerosol size streams exiting the chamber. The housing includes an inlet for introducing a flow medium into the chamber in a flow direction, an aerosol injection port adjacent the inlet for introducing a charged aerosol sample into the chamber, a separation section for applying an electric field to the aerosol sample across the flow direction and an outlet opposite the inlet. In the separation section, the aerosol sample becomes entrained in the flow medium and the aerosol particles within the aerosol sample are separated by size into a plurality of aerosol flow streams under the influence of the electric field. The camera is disposed adjacent the housing outlet for optically detecting a relative position of at least one aerosol flow stream exiting the outlet and for optically detecting the number of aerosol particles within the at least one aerosol flow stream.
Rock sampling. [apparatus for controlling particle size
NASA Technical Reports Server (NTRS)
Blum, P. (Inventor)
1971-01-01
An apparatus for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The device includes grinding means for cutting grooves in the rock surface and to provide a grouping of thin, shallow, parallel ridges and cutter means to reduce these ridges to a powder specimen. Collection means is provided for the powder. The invention relates to rock grinding and particularly to the sampling of rock specimens with good size control.
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, referred to as samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected in biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size
Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.
2007-01-01
For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the laboratory for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether the power for the effect size was based on an adequate sample size in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. Searches were limited to trials with an intervention course ≥ 3 months in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklist found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power for the effect size of each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included: 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported with a sample size > 150 per trial in 9% and 12% of the RCTs, respectively. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported with a sample size > 150 in 31% and 36% of trials, respectively. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample size for statistical analysis was distressingly low, and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
Pye, Kenneth; Blott, Simon J
2004-08-11
Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to its nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04–2000 µm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work, laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.
Moustakas, Aristides; Evans, Matthew R
2015-02-28
Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most species of UK hardwood, tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size and including explanatory covariates.
Liu, Yuefeng; Luo, Jingjie; Shin, Yooleemi; Moldovan, Simona; Ersen, Ovidiu; Hébraud, Anne; Schlatter, Guy; Pham-Huu, Cuong; Meny, Christian
2016-01-01
Assemblies of nanoparticles are studied in many research fields, from physics to medicine. However, as it is often difficult to produce mono-dispersed particles, investigating the key parameters enhancing their efficiency is blurred by wide size distributions. Indeed, near-field methods analyse a part of the sample that might not be representative of the full size distribution, and macroscopic methods give average information including all particle sizes. Here, we introduce temperature differential ferromagnetic nuclear resonance spectra that allow sampling of the crystallographic structure, the chemical composition and the chemical order of non-interacting ferromagnetic nanoparticles for specific size ranges within their size distribution. The method is applied to cobalt nanoparticles for catalysis and allows separating the size effect from the crystallographic structure effect on their catalytic activity. It also allows sampling of the chemical composition and chemical order within the size distribution of alloyed nanoparticles and can thus be useful in many research fields. PMID:27156575
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
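The quoted 13% and 70% figures can be reproduced with a normal-approximation argument: at a Bonferroni-style per-test level alpha/m, the required sample size scales as (z_{1-α/(2m)} + z_{power})². A sketch (Python):

```python
from scipy.stats import norm

def n_scale(m, alpha=0.05, power=0.8):
    """Sample size relative to a single test when testing at level alpha/m."""
    z_beta = norm.ppf(power)
    z_one = norm.ppf(1 - alpha / 2) + z_beta
    z_m = norm.ppf(1 - alpha / (2 * m)) + z_beta
    return (z_m / z_one) ** 2

print(n_scale(10))                  # ~1.70: +70% for 10 tests vs 1 test
print(n_scale(1e7) / n_scale(1e6))  # ~1.13: +13% for 10 million vs 1 million tests
```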
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
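The lot-of-400 example can be checked directly. For an acceptance number of zero, the plan needs the smallest n for which a defect-free sample is no more likely than 1 - confidence. A sketch using scipy's hypergeometric distribution (Python); the 1.0% rate on a 400-unit lot is taken to mean D = 4 nonconforming units:

```python
from scipy.stats import hypergeom

def min_sample_size(lot, defectives, confidence):
    """Smallest n with P(zero defectives in sample) <= 1 - confidence."""
    for n in range(1, lot + 1):
        # hypergeom.pmf(k, M, K, N): k successes in a sample of N drawn from a
        # population of M containing K successes
        if hypergeom.pmf(0, lot, defectives, n) <= 1 - confidence:
            return n
    return lot  # full inspection required

print(min_sample_size(lot=400, defectives=4, confidence=0.99))  # 273, as in the text
```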
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; an equal sample for all strata; and allocation proportional to the natural logarithm, cube root, and square root of the stratum population. This study used the fact that, for a preset sample size, the dispersion index of the stratum sampling fractions is correlated with the error of the population estimator, while the dispersion index of the stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order: an equal sample for each stratum; allocation proportional to the logarithm, the cube root, and the square root of the stratum population; and allocation proportional to the stratum population itself. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
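The allocation rules compared above all have the form "sample proportional to g(stratum population)" for different transforms g. A minimal sketch (Python) with hypothetical stratum populations, showing how the transforms interpolate between fully proportional and fully equal allocation:

```python
import math

def allocate(pop_sizes, total_n, g):
    """Distribute total_n across strata proportionally to g(stratum population)."""
    weights = [g(N) for N in pop_sizes]
    total_w = sum(weights)
    return [round(total_n * w / total_w) for w in weights]

pops = [100_000, 10_000, 1_000]   # hypothetical stratum populations
rules = {
    "proportional": lambda N: N,
    "square root":  math.sqrt,
    "cube root":    lambda N: N ** (1 / 3),
    "log":          math.log,
    "equal":        lambda N: 1,
}
for name, g in rules.items():
    print(f"{name:>12}: {allocate(pops, total_n=1200, g=g)}")
# proportional: [1081, 108, 11] ... equal: [400, 400, 400]
```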
Photo series for quantifying natural forest residues: southern Cascades, northern Sierra Nevada
Kenneth S. Blonski; John L. Schramel
1981-01-01
A total of 56 photographs shows different levels of natural fuel loadings for selected size classes in seven forest types of the southern Cascade and northern Sierra-Nevada ranges. Data provided with each photo include size, weight, volumes, residue depths, and percent of ground coverage. Stand information includes sizes, weights, and volumes of the trees sampled for...
Code of Federal Regulations, 2013 CFR
2013-01-01
... between 1 and k. The value of k is dependent upon the estimated size of the universe and the sample size... included in the active universe defined in paragraph (e)(1) of this section) during the annual review... review (i.e., households which are part of the negative universe defined in paragraph (e)(2) of this...
Interpreting and Reporting Effect Sizes in Research Investigations.
ERIC Educational Resources Information Center
Tapia, Martha; Marsh, George E., II
Since 1994, the American Psychological Association (APA) has advocated the inclusion of effect size indices in reporting research to elucidate the statistical significance of studies based on sample size. In 2001, the fifth edition of the APA "Publication Manual" stressed the importance of including an index of effect size to clarify…
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider the advantages and disadvantages of including biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to the bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in the bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
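The size-biased sampling at the heart of this model is easy to demonstrate by simulation: if detection probability increases with cluster size, the mean size of detected clusters exceeds the population mean. A minimal sketch (Python) with an assumed logistic size-detection curve, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = 1 + rng.poisson(2.0, size=100_000)   # population cluster sizes, mean 3
p_detect = 1 / (1 + np.exp(-(sizes - 3)))    # detection rises with cluster size
detected = sizes[rng.random(sizes.size) < p_detect]

print(sizes.mean())     # ~3.0: true mean cluster size in the population
print(detected.mean())  # ~3.8: detected clusters are systematically larger
```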
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.
Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J
2018-07-01
This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation, using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
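Finding (3) — over-estimation of effect size in low-powered studies when only significant results are read — can be reproduced with a short simulation (Python), assuming a true standardized effect of 0.2 and 30 patients per group:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_d, n, sims = 0.2, 30, 2000
significant = []
for _ in range(sims):
    a = rng.normal(true_d, 1, n)   # group with a small true effect
    b = rng.normal(0.0, 1, n)      # comparison group
    if ttest_ind(a, b).pvalue < 0.05:
        significant.append(a.mean() - b.mean())

print(len(significant) / sims)   # ~0.12: the design is badly underpowered
print(np.mean(significant))      # ~0.6: significant estimates are ~3x the true 0.2
```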
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
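For the first method listed — the two-sample t test — the calculation can be carried out with standard open-source tooling rather than by hand. A minimal sketch (Python, using statsmodels; the effect size of 0.5 is an arbitrary planning value):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# n per group to detect a standardized effect of 0.5 at two-sided alpha = 0.05
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n))  # ~64 per group

# the same machinery inverts: achieved power for a fixed n per group
print(analysis.power(effect_size=0.5, nobs1=64, alpha=0.05, ratio=1.0))  # ~0.80
```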
The large sample size fallacy.
Lantz, Björn
2013-06-01
Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
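The fallacy is easy to make concrete: with a large enough n, a practically trivial effect produces an extreme p-value. A sketch (Python), assuming a standardized mean difference of 0.03 and 50,000 subjects per group:

```python
from math import sqrt
from scipy.stats import norm

d, n = 0.03, 50_000   # trivial effect, very large sample
z = d / sqrt(2 / n)   # two-sample z statistic for the standardized difference
p = 2 * norm.sf(z)
print(z, p)  # z ~ 4.74, p ~ 2e-6: extreme statistical significance, trivial effect
```

Reporting the effect size d = 0.03 alongside the p-value makes the lack of practical significance immediately visible.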
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
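The width-probability ("power of the confidence interval") calculation itself is beyond this note, but the simpler expected-width criterion it builds on can be sketched as follows (our code; sigma and the target half-width are illustrative guesses):

    import math
    from scipy.stats import t

    def n_for_halfwidth(sigma, halfwidth, conf=0.95):
        """Smallest n whose expected two-sided t-based CI half-width <= target."""
        n = 2
        while t.ppf((1 + conf) / 2, df=n - 1) * sigma / math.sqrt(n) > halfwidth:
            n += 1
        return n

    print(n_for_halfwidth(sigma=10, halfwidth=2))  # 99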
NASA Technical Reports Server (NTRS)
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
Laboratory theory and methods for sediment analysis
Guy, Harold P.
1969-01-01
The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube, depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help ensure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
NASA Astrophysics Data System (ADS)
Atapour, Hadi; Mortazavi, Ali
2018-04-01
The effects of textural characteristics, especially grain size, on the index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Index rock properties, including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs), were determined. Experimental results showed that grain size has significant effects on the index properties of weakly solidified sandstones. The porosity of the samples is inversely related to grain size and decreases linearly as grain size increases. In contrast, dry bulk density is directly related to grain size, increasing with increasing median grain size. Furthermore, the point load strength index and SHV of the samples increased as grain size increased. These observations are indirectly related to the porosity decrease as a function of median grain size.
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Finally, we apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
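A toy version of the stabilization idea (our construction; the catch data and the Beta(2, 1) prior are made up) treats each removal pass as a binomial draw from the animals remaining and penalizes sampling rates near zero so the population-size estimate stays finite:

    import numpy as np
    from scipy.stats import binom, beta

    catches = [52, 31, 17]                   # hypothetical removal catches
    removed = np.cumsum([0] + catches[:-1])  # animals removed before each pass

    # grid search for the joint posterior mode of (N, q)
    Ns = np.arange(sum(catches), 1000)
    qs = np.linspace(0.01, 0.99, 99)
    Ngrid, Qgrid = np.meshgrid(Ns, qs, indexing="ij")
    log_post = beta.logpdf(Qgrid, 2, 1)      # prior penalizing small rates q
    for c, r in zip(catches, removed):
        log_post += binom.logpmf(c, Ngrid - r, Qgrid)
    i, j = np.unravel_index(log_post.argmax(), log_post.shape)
    print(f"MAP estimate: N = {Ns[i]}, q = {qs[j]:.2f}")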
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations.The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
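The central computation can be sketched in a few lines (our Monte Carlo approximation, not the authors' code): per-cluster counts follow a beta-binomial whose parameters are implied by the target coverage p and intracluster correlation rho, and a decision rule's misclassification risks are estimated by simulation. The cluster counts, threshold, and rho below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def risk_below(p, rho, m=10, k=5, threshold=35, reps=100_000):
        """P(total successes < threshold) when true coverage is p."""
        a = p * (1 - rho) / rho        # beta parameters implied by
        b = (1 - p) * (1 - rho) / rho  # mean p and ICC rho
        cluster_p = rng.beta(a, b, size=(reps, m))
        totals = rng.binomial(k, cluster_p).sum(axis=1)
        return (totals < threshold).mean()

    print(risk_below(p=0.90, rho=0.1))      # risk of failing a 90%-coverage program
    print(1 - risk_below(p=0.70, rho=0.1))  # risk of passing a 70%-coverage program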
Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang
2017-11-15
In this study, single-particle mass spectral signatures of particles emitted by coal-burning and biomass-burning boilers were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of the BBB (biomass-burning boiler) and CBB (coal-burning boiler) samples differ: BBB peaks at a smaller size and CBB at a larger size. Mass spectral signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. The BBB sample mostly consists of OC- and EC-containing particles, with a small fraction of K-rich particles, in the size range of 0.2-0.5μm. In 0.5-1.0μm, the BBB sample consists of EC-, OC-, K-rich and Al_Silicate-containing particles; the CBB sample consists of EC- and ECOC-containing particles, while Al_Silicate-containing particles (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) make up higher fractions as size increases. The similarity of single-particle mass spectral signatures between the two samples was assessed using dot products; the results indicated that some single-particle mass spectra from the two samples in the same size range are similar, which poses a challenge for future source apportionment using single-particle aerosol mass spectrometry. Results of this study provide physicochemical information on important sources contributing to particle pollution and will support source apportionment activities. Copyright © 2017. Published by Elsevier B.V.
Van Berkel, Gary J.
2015-10-06
A system and method for analyzing a chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining a chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, such that the spatial resolution is limited by the size of the pins used to obtain tissue samples rather than by the size of the sampling device used to solubilize the samples coupled to the pins.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. Accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
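The design is easy to mimic on synthetic data (a simplified stand-in for the HCP rs-fMRI features; all settings below are illustrative): sub-sample the cohort at increasing sizes and record cross-validated prediction accuracy for two of the algorithms.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import cross_val_score

    # synthetic high-dimensional features standing in for connectivity measures
    X, y = make_regression(n_samples=700, n_features=1000, n_informative=50,
                           noise=10.0, random_state=0)

    rng = np.random.default_rng(0)
    for n in (20, 60, 120, 300, 700):
        idx = rng.choice(700, size=n, replace=False)
        for name, model in (("ridge", Ridge(alpha=1.0)),
                            ("lasso", Lasso(alpha=1.0, max_iter=10_000))):
            r2 = cross_val_score(model, X[idx], y[idx], cv=5, scoring="r2").mean()
            print(f"n = {n:3d}  {name}: mean CV R^2 = {r2:.2f}")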
Inertial impaction air sampling device
Dewhurst, Katharine H.
1990-01-01
An inertial impactor for use in an air sampling device that collects respirable-size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry.
NASA Astrophysics Data System (ADS)
Jeyakumar, S.
2016-06-01
The dependence of the turnover frequency on the linear size is presented for a sample of Giga-hertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with the linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to estimate nearly the entire population (≥90%) of resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
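For randomly dispersed resistance, detection sample sizes follow from a standard probability argument (a textbook formula, not this paper's simulation): the smallest n for which at least one resistant individual appears with probability 0.95 is n = ln(1 - 0.95)/ln(1 - f), where f is the resistance frequency.

    import math

    def detection_n(freq, prob=0.95):
        """Smallest n detecting >= 1 resistant individual with given probability."""
        return math.ceil(math.log(1 - prob) / math.log(1 - freq))

    for f in (0.01, 0.10, 0.20):
        print(f"frequency {f:.0%}: n = {detection_n(f)}")
    # frequency 1% gives n = 299, close to the 300 reported above; the larger
    # sizes quoted for 10% and 20% reflect the study's more conservative simulations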
NASA Astrophysics Data System (ADS)
Goldbery, R.; Tehori, O.
SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions, and cumulative log/probability curves. The program also has a module for processing of grain-size frequency data from sieved samples. A further feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
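momi itself handles arbitrary multi-population histories, but the quantity it computes reduces to a classical closed form in the simplest case (shown here as the textbook special case, not the package's algorithm): for a single constant-size population, the expected SFS is E[xi_i] = theta/i for derived-allele counts i = 1, ..., n-1 (Fu 1995).

    import numpy as np

    def expected_sfs(n, theta=1.0):
        """Expected SFS for one constant-size population of sample size n."""
        i = np.arange(1, n)  # derived allele counts 1..n-1
        return theta / i

    sfs = expected_sfs(n=10)
    print(sfs / sfs.sum())  # normalized spectrum: singletons make up ~35%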
Sample size requirements for indirect association studies of gene-environment interactions (G x E).
Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny
2008-04-01
Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
Donegan, Thomas M.
2018-01-01
Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidian summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as species and, if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
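A sketch of this scoring pipeline as we read it (simplified: a Welch test stands in for the significance gateway, and the paper's t-distribution correction of effect sizes is omitted); the trait data below are invented:

    import numpy as np
    from scipy import stats

    def gated_effect(x, y, alpha=0.05):
        """Unpooled effect size, zeroed when the Welch test is non-significant."""
        t_stat, p = stats.ttest_ind(x, y, equal_var=False)
        if p >= alpha:
            return 0.0
        sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)  # unpooled scale
        return abs(x.mean() - y.mean()) / sd

    def divergence(pop_a, pop_b):
        """Euclidean summation of non-zeroed effect size scores."""
        return np.sqrt(sum(gated_effect(a, b) ** 2 for a, b in zip(pop_a, pop_b)))

    rng = np.random.default_rng(3)
    wing_a, wing_b = rng.normal(50, 3, 30), rng.normal(54, 3, 30)  # hypothetical
    song_a, song_b = rng.normal(2.0, 0.4, 30), rng.normal(2.1, 0.4, 30)
    print(divergence([wing_a, song_a], [wing_b, song_b]))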
Paul F. Hessburg; Bradley G. Smith; Craig A. Miller; Scott D. Kreiter; R. Brion Salter
1999-01-01
In the interior Columbia River basin midscale ecological assessment, including portions of the Klamath and Great Basins, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and...
NASA Astrophysics Data System (ADS)
Davis, Cabell S.; Wiebe, Peter H.
1985-01-01
Macrozooplankton size structure and taxonomic composition in warm-core ring 82B were examined from a time series (March, April, June) of ring center MOCNESS (1 m) samples. Size distributions of 15 major taxonomic groups were determined from length measurements digitized from silhouette photographs of the samples. Silhouette digitization allows rapid quantification of zooplankton size structure and taxonomic composition. Length/weight regressions, determined for each taxon, were used to partition the biomass (displacement volumes) of each sample among the major taxonomic groups. Zooplankton taxonomic composition and size structure varied with depth and appeared to coincide with the hydrographic structure of the ring. In March and April, within the thermostad region of the ring, smaller herbivorous/omnivorous zooplankton, including copepods, crustacean larvae, and euphausiids, were dominant, whereas below this region, larger carnivores, such as medusae, ctenophores, fish, and decapods, dominated. Copepods were generally dominant in most samples above 500 m. Total macrozooplankton abundance and biomass increased between March and April, primarily because of increases in herbivorous taxa, including copepods, crustacean larvae, and larvaceans. A marked increase in total macrozooplankton abundance and biomass between April and June was characterized by an equally dramatic shift from smaller herbivores (1.0-3.0 mm) in April to large herbivores (5.0-6.0 mm) and carnivores (>15 mm) in June. Species identifications made directly from the samples suggest that changes in trophic structure resulted from seeding-type immigration and subsequent in situ population growth of Slope Water zooplankton species.
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
Development and Validation of the Caring Loneliness Scale.
Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija
2016-12-01
The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items, with Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences, but, as with all instruments, further validation is needed.
Sampling and data handling methods for inhalable particulate sampling. Final report Nov 78-Dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I
2004-10-01
To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The main outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
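The aggregation step works roughly as below (a generic sample-weighted mean correlation with an approximate standard error, not the authors' full artifact-correction model); the r and n values are invented:

    import numpy as np

    r = np.array([0.05, 0.09, 0.03, 0.12])  # per-study correlations (hypothetical)
    n = np.array([1200, 450, 3000, 800])    # per-study sample sizes (hypothetical)

    r_bar = np.sum(n * r) / np.sum(n)           # sample-weighted mean r
    se = (1 - r_bar ** 2) / np.sqrt(np.sum(n))  # rough large-sample standard error
    print(f"mean r = {r_bar:.3f}, "
          f"95% CI = ({r_bar - 1.96 * se:.3f}, {r_bar + 1.96 * se:.3f})")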
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
A number of commentaries have suggested that large studies are more reliable than smaller studies, and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. PMID:25043853
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
2016-03-09
Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of the myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and to levels of biochemical markers in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day differed by 1 day between treatment and control arms, the sample size would need to be increased by 54% (77 vs 50) to avoid scan day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced by 46% to 65% without compromising statistical power when MSI by CMR is used as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
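The gain is visible in a back-of-envelope calculation (illustrative summary statistics, not the trial data): per-arm sample size for detecting a 25% reduction scales with (SD/effect)^2, so an outcome with less relative variability needs far fewer patients.

    import math
    from scipy.stats import norm

    def n_per_arm(mean, sd, reduction=0.25, alpha=0.05, power=0.90):
        """Per-arm n for a two-sided two-sample comparison of means."""
        delta = reduction * mean
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sd / delta) ** 2)

    print(n_per_arm(mean=50, sd=18))  # MSI-like outcome: ~44 per arm
    print(n_per_arm(mean=20, sd=10))  # MI-size-like outcome: ~85 per arm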
Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population
Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.
2001-01-01
For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3-year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, the relative performances of different methods can vary with population size, sample size, and the degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performance over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.
The Relation of Empathy and Defending in Bullying: A Meta-Analytic Investigation
ERIC Educational Resources Information Center
Nickerson, Amanda B.; Aloe, Ariel M.; Werth, Jilynn M.
2015-01-01
This meta-analysis synthesized results about the association between empathy and defending in bullying. A total of 20 studies were included in the analysis: 22 effect sizes from 6 studies that separated findings by the defender's gender, and 31 effect sizes from 18 studies that provided effects for the total sample. The weighted…
Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.
2004-01-01
This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
Nutrition labeling and value size pricing at fast-food restaurants: a consumer perspective.
O'Dougherty, Maureen; Harnack, Lisa J; French, Simone A; Story, Mary; Oakes, J Michael; Jeffery, Robert W
2006-01-01
This pilot study examined nutrition-related attitudes that may affect food choices at fast-food restaurants, including consumer attitudes toward nutrition labeling of fast foods and elimination of value size pricing. A convenience sample of 79 fast-food restaurant patrons aged 16 and above (78.5% white, 55% female, mean age 41.2 [17.1]) selected meals from fast-food restaurant menus that varied as to whether nutrition information was provided and value pricing included and completed a survey and interview on nutrition-related attitudes. Only 57.9% of participants rated nutrition as important when buying fast food. Almost two thirds (62%) supported a law requiring nutrition labeling on restaurant menus. One third (34%) supported a law requiring restaurants to offer lower prices on smaller instead of bigger-sized portions. This convenience sample of fast-food patrons supported nutrition labels on menus. More research is needed with larger samples on whether point-of-purchase nutrition labeling at fast-food restaurants raises perceived importance of nutrition when eating out.
Size-biased distributions in the generalized beta distribution family, with applications to forestry
Mark J. Ducey; Jeffrey H. Gove
2015-01-01
Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
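The posterior calculation at the heart of such a design can be sketched directly (our illustration of a two-component mixture; the prior parameters and trial data are invented): mixture weights are updated through beta-binomial marginal likelihoods, and each component posterior remains a Beta.

    from scipy.stats import beta, betabinom

    def prob_exceeds(x, n, target, priors, weights):
        """P(response rate > target | x successes in n), mixture-of-Betas prior."""
        marg = [w * betabinom.pmf(x, n, a, b)
                for w, (a, b) in zip(weights, priors)]
        post_w = [m / sum(marg) for m in marg]  # updated mixture weights
        return sum(w * beta.sf(target, a + x, b + n - x)
                   for w, (a, b) in zip(post_w, priors))

    priors = [(2, 8), (8, 2)]  # pessimistic and optimistic clinicians
    weights = [0.5, 0.5]
    print(prob_exceeds(x=12, n=30, target=0.30, priors=priors, weights=weights))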
Capital Budgeting Decisions with Post-Audit Information
1990-06-08
estimates that were used during project selection. In similar fashion, this research introduces the equivalent sample size concept that permits the... equivalent sample size is extended to include the user's prior beliefs. 4. For a management tool, the concepts for Cash Flow Control Charts are...
A single test for rejecting the null hypothesis in subgroups and in the overall sample.
Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra
2017-01-01
In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
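The authors' exact statistic is not reproduced here, but the idea resembles a weighted Stouffer combination (a generic sketch with invented p-values and weights): subgroup p-values are converted to z-scores and combined with pre-specified weights favoring the subgroups expected to show larger effects.

    import numpy as np
    from scipy.stats import norm

    def weighted_z(pvals, weights):
        """One-sided combined p-value from weighted z-scores."""
        z = norm.isf(np.asarray(pvals, dtype=float))  # p -> z (one-sided)
        w = np.asarray(weights, dtype=float)
        return norm.sf(np.sum(w * z) / np.sqrt(np.sum(w ** 2)))

    # the moderate-disease subgroup, expected to benefit most, gets weight 3
    print(weighted_z(pvals=[0.04, 0.20, 0.30], weights=[3, 2, 1]))  # ~0.023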
Statistical considerations for agroforestry studies
James A. Baldwin
1993-01-01
Statistical topics related to agroforestry studies are discussed, including study objectives, populations of interest, sampling schemes, sample sizes, estimation vs. hypothesis testing, and P-values. In addition, a relatively new and much improved histogram display is described.
McGarvey, Daniel J.; Falke, Jeffrey A.; Li, Hiram W.; Li, Judith; Hauer, F. Richard; Lamberti, G.A.
2017-01-01
Methods to sample fishes in stream ecosystems and to analyze the raw data, focusing primarily on assemblage-level (all fish species combined) analyses, are presented in this chapter. We begin with guidance on sample site selection, permitting for fish collection, and information-gathering steps to be completed prior to conducting fieldwork. Basic sampling methods (visual surveying, electrofishing, and seining) are presented with specific instructions for estimating population sizes via visual, capture-recapture, and depletion surveys, in addition to new guidance on environmental DNA (eDNA) methods. Steps to process fish specimens in the field including the use of anesthesia and preservation of whole specimens or tissue samples (for genetic or stable isotope analysis) are also presented. Data analysis methods include characterization of size-structure within populations, estimation of species richness and diversity, and application of fish functional traits. We conclude with three advanced topics in assemblage-level analysis: multidimensional scaling (MDS), ecological networks, and loop analysis.
NASA Astrophysics Data System (ADS)
Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.
2015-11-01
Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this in turn is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high-volume cascade impactor. The PM samples were extracted with methanol, dried, and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and PM0.2 samples had the highest inflammatory potency among the size ranges, whereas almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential across many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation; this was reflected in differing toxicity of the PM samples. Some of the day-to-night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health-related toxicological properties of PM. The varying toxicological responses evoked by the PM samples underscore the importance of examining various particle sizes; in particular, the considerable toxicological activity detected in the PM0.2 size range suggests contributions from combustion sources, new particle formation and atmospheric processes.
Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R
2013-03-01
Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (Ne) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (Ne). To evaluate the bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known Ne and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating Ne and outperformed some methods currently in widespread use. The approach adopted here may be a starting point for adjusting other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.
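The contrast driving the paper can be shown with a toy calculation (a simplified stand-in; the paper's fixed-inverse-variance weighting refines the plain harmonic mean used here): with missing genotypes, the arithmetic mean of per-locus sample sizes overstates the information available relative to the harmonic mean.

    import numpy as np

    n_loci = np.array([50, 48, 45, 30, 12])  # individuals genotyped per locus
    print("arithmetic mean:", n_loci.mean())                       # 37.0
    print("harmonic mean:  ", len(n_loci) / np.sum(1.0 / n_loci))  # ~27.8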
Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.
Hillis, Stephen L; Schartz, Kevin M
2015-02-01
The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with common reader-performance measures that can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step, intuitive interface that walks the user through entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.
Swimsuit issues: promoting positive body image in young women's magazines.
Boyd, Elizabeth Reid; Moncrieff-Boyd, Jessica
2011-08-01
This preliminary study reviews the promotion of healthy body image to young Australian women, following the 2009 introduction of the voluntary Industry Code of Conduct on Body Image. The Code includes using diverse-sized models in magazines. A qualitative content analysis of the 2010 annual 'swimsuit issues' was conducted on 10 Australian young women's magazines. Pictorial and/or textual editorial evidence of promoting diverse body shapes and sizes was regarded as indicative of the magazines' upholding aspects of the voluntary Code of Conduct for Body Image. Diverse-sized models were incorporated in four of the seven magazines with swimsuit features sampled. Body size differentials were presented as part of the swimsuit features in three of the magazines sampled. Tips for diverse body type enhancement were included in four of the magazines. All magazines met at least one criterion. One magazine displayed evidence of all three criteria. Preliminary examination suggests that more than half of young women's magazines are upholding elements of the voluntary Code of Conduct for Body Image, through representation of diverse-sized women in their swimsuit issues.
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate conclusions or seriously impair the validity of an experiment. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding and computing statistical power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
Particle Size Distribution of Heavy Metals and Magnetic Susceptibility in an Industrial Site.
Ayoubi, Shamsollah; Soltani, Zeynab; Khademi, Hossein
2018-05-01
This study was conducted to explore the relationships between magnetic susceptibility and the concentrations of selected soil heavy metals in various particle sizes at an industrial site in central Iran. Soils were partitioned into five fractions (< 28, 28-75, 75-150, 150-300, and 300-2000 µm). Heavy metal concentrations (Zn, Pb, Fe, Cu, Ni and Mn) and magnetic susceptibility were determined in bulk soil samples and in all fractions of 60 soil samples collected from a depth of 0-5 cm. The studied heavy metals, except for Pb and Fe, displayed a substantial enrichment in the < 28 µm fraction; these two elements appeared to be independent of the selected size fractions. Magnetic minerals were especially associated with the medium size fractions (28-75, 75-150 and 150-300 µm). The highest correlations between magnetic susceptibility and heavy metals were found for the < 28 µm fraction, followed by the 150-300 µm fraction, both of which are susceptible to wind erosion in an arid environment.
A thermal emission spectral library of rock-forming minerals
NASA Astrophysics Data System (ADS)
Christensen, Philip R.; Bandfield, Joshua L.; Hamilton, Victoria E.; Howard, Douglas A.; Lane, Melissa D.; Piatek, Jennifer L.; Ruff, Steven W.; Stefanov, William L.
2000-04-01
A library of thermal infrared spectra of silicate, carbonate, sulfate, phosphate, halide, and oxide minerals has been prepared for comparison to spectra obtained from planetary and Earth-orbiting spacecraft, airborne instruments, and laboratory measurements. The emphasis in developing this library has been to obtain pure samples of specific minerals. All samples were hand processed and analyzed for composition and purity. The majority are 710-1000 μm particle size fractions, chosen to minimize particle size effects. Spectral acquisition follows a method described previously, and emissivity is determined to within 2% in most cases. Each mineral spectrum is accompanied by descriptive information in database form including compositional information, sample quality, and a comments field to describe special circumstances and unique conditions. More than 150 samples were selected to include the common rock-forming minerals with an emphasis on igneous and sedimentary minerals. This library is available in digital form and will be expanded as new, well-characterized samples are acquired.
Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.
Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J
2013-06-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test, particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
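To make the simulation idea concrete, here is a minimal Python sketch of the "power of the updated meta-analysis" approach. All effect sizes and variances are invented, and a simple fixed-effect model on log diagnostic odds ratios stands in for the bivariate model the paper actually considers; this is an illustration of the general strategy, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical existing evidence base: observed log(DOR) values and
# within-study variances from five small diagnostic accuracy studies.
yi = np.array([0.8, 0.5, 1.1, 0.6, 0.9])
vi = np.array([0.30, 0.25, 0.40, 0.35, 0.28])

true_effect = 0.7   # assumed true log(DOR) used to simulate new studies
n_sims = 2000

def power_of_updated_meta(n_new):
    """Approximate power of the pooled fixed-effect estimate after adding
    one simulated new study with n_new subjects per group."""
    v_new = 4.0 / n_new   # crude variance model for a log OR (assumption)
    hits = 0
    for _ in range(n_sims):
        y_new = rng.normal(true_effect, np.sqrt(v_new))
        y, v = np.append(yi, y_new), np.append(vi, v_new)
        w = 1.0 / v
        pooled = np.sum(w * y) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        hits += abs(pooled / se) > 1.96
    return hits / n_sims

# Plotting these values against n would give the decision curve described above.
for n in (50, 100, 200, 400):
    print(f"n per group = {n:4d}: power of updated meta-analysis ~ {power_of_updated_meta(n):.2f}")
```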
Wu, Zhichao; Medeiros, Felipe A
2018-03-20
Visual field testing is an important endpoint in glaucoma clinical trials, and the testing paradigm used can have a significant impact on the sample size requirements. To investigate this, the study included 353 eyes of 247 glaucoma patients seen over a 3-year period to extract real-world visual field rates of change and variability estimates to provide sample size estimates from computer simulations. The clinical trial scenario assumed that a new treatment was added to one of two groups that were both under routine clinical care, with various treatment effects examined. Three different visual field testing paradigms were evaluated: a) evenly spaced testing, b) the United Kingdom Glaucoma Treatment Study (UKGTS) follow-up scheme, which adds clustered tests at the beginning and end of follow-up in addition to evenly spaced testing, and c) a clustered testing paradigm, with clusters of tests at the beginning and end of the trial period and two intermediary visits. The sample size requirements were reduced by 17-19% and 39-40% using the UKGTS and clustered testing paradigms, respectively, when compared to the evenly spaced approach. These findings highlight how the clustered testing paradigm can substantially reduce sample size requirements and improve the feasibility of future glaucoma clinical trials.
A simple autocorrelation algorithm for determining grain size from digital images of sediment
Rubin, D.M.
2004-01-01
Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
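As a rough illustration of the autocorrelation idea (not Rubin's published algorithm; the synthetic image, kernel width, and half-correlation threshold are all assumptions), the sketch below computes a normalized autocorrelation curve for a grayscale image and reads off a correlation length as a grain-size proxy: coarser texture stays correlated over larger pixel offsets.

```python
import numpy as np

def autocorrelation_curve(image, max_offset=30):
    """Mean normalized autocorrelation of a grayscale image as a function
    of horizontal pixel offset."""
    img = image.astype(float)
    img -= img.mean()
    var = (img ** 2).mean()
    curve = []
    for k in range(1, max_offset + 1):
        # Correlate each pixel with the one k pixels to its right.
        curve.append((img[:, :-k] * img[:, k:]).mean() / var)
    return np.array(curve)

# Synthetic "sediment" image: white noise blurred to a known texture scale.
rng = np.random.default_rng(0)
noise = rng.normal(size=(256, 256))
kernel = np.ones(7) / 7.0
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)

curve = autocorrelation_curve(smooth)
# A simple size proxy: offset at which correlation first drops below 0.5.
size_px = 1 + int(np.argmax(curve < 0.5))
print("half-correlation offset (grain-size proxy, px):", size_px)
```

In practice the curve would be calibrated against sieved samples of known size before being used in the field.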
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design included four 300-m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip-harvest data for two different clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
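The standardized uncertainty target quoted above (mean estimated to within ±10% with 95% confidence) maps onto the usual relative-precision sample size formula, n ≈ (z·CV/0.10)². The sketch below applies it with hypothetical coefficients of variation; it is the textbook formula, not NEON's planning code.

```python
import math

def n_for_relative_precision(cv, rel_error=0.10, z=1.96):
    """Sample size so the 95% CI half-width is ~rel_error of the mean,
    given a coefficient of variation cv (standard formula)."""
    return math.ceil((z * cv / rel_error) ** 2)

# Hypothetical CVs for clip-harvest biomass and LAI at a grassland site:
for label, cv in [("clip harvest", 0.60), ("LAI", 0.35)]:
    print(label, "->", n_for_relative_precision(cv), "samples")
```

The comparison in the abstract amounts to asking whether the cheaper LAI measurements have a low enough CV that their larger feasible sample size buys more precision per unit effort than clip harvesting.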
NASA Astrophysics Data System (ADS)
Alexander, Jennifer Mary
Atmospheric mineral dust has a large impact on the earth's radiation balance and climate. The radiative effects of mineral dust depend on factors including particle size, shape, and composition, which can all be extremely complex. Mineral dust particles are typically irregular in shape and can include sharp edges, voids, and fine scale surface roughness. Particle shape can also depend on the type of mineral and can vary as a function of particle size. In addition, atmospheric mineral dust is a complex mixture of different minerals as well as other, possibly organic, components that have been mixed in while these particles are suspended in the atmosphere. Aerosol optical properties are investigated in this work, including studies of the effect of particle size, shape, and composition on the infrared (IR) extinction and visible scattering properties, in order to achieve more accurate modeling methods. Studies of particle shape effects on dust optical properties for single component mineral samples of silicate clay and diatomaceous earth are carried out here first. Experimental measurements are modeled using T-matrix theory in a uniform spheroid approximation. Previous efforts to simulate the measured optical properties of silicate clay, using models that assumed particle shape was independent of particle size, have achieved only limited success. However, a model which accounts for a correlation between particle size and shape for the silicate clays offers a large improvement over earlier modeling approaches. Diatomaceous earth is also studied as an example of a single component mineral dust aerosol with extreme particle shapes. A particle shape distribution, determined by fitting the experimental IR extinction data, was used as a basis for modeling the visible light scattering properties. While the visible simulations show only modestly good agreement with the scattering data, the fits are generally better than those obtained using more commonly invoked particle shape distributions. The next goal of this work is to investigate whether modeling methods developed in the studies of single mineral components can be generalized to predict the optical properties of more authentic aerosol samples, which are complex mixtures of different minerals. Samples of Saharan sand, Iowa loess, and Arizona road dust are used here as test cases. T-matrix based simulations of the authentic samples, using measured particle size distributions, empirical mineralogies, and a priori particle shape models for each mineral component, are directly compared with the measured IR extinction spectra and visible scattering profiles. This modeling approach offers a significant improvement over more commonly applied models that ignore variations in particle shape with size or mineralogy and include only a moderate range of shape parameters. Mineral dust samples processed with organic acids and humic material are also studied in order to explore how the optical properties of dust can change after being aged in the atmosphere. Processed samples include quartz mixed with humic material, and calcite reacted with acetic and oxalic acid. Clear differences in the light scattering properties are observed for all three processed mineral dust samples when compared to the unprocessed mineral dust or organic salt products. These interactions result in both internal and external mixtures depending on the sample. In addition, the presence of these organic materials can alter the mineral dust particle shape.
Overall, however, these results demonstrate the need to account for the effects of atmospheric aging of mineral dust on aerosol optical properties. Particle shape can also affect the aerodynamic properties of mineral dust aerosol. In order to account for these effects, the dynamic shape factor is used to give a measure of particle asphericity. Dynamic shape factors of quartz are measured by mass and mobility selecting particles and measuring their vacuum aerodynamic diameter. From this, dynamic shape factors in both the transition and vacuum regime can be derived. The measured dynamic shape factors of quartz agree quite well with the spheroidal shape distributions derived through studies of the optical properties.
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.; Raymond, R. Jr.; Rawson, G.
1988-01-01
The MISTY PICTURE surface burst was detonated at the White Sands Missile Range in May of 1987. The Los Alamos National Laboratory dust characterization program was expanded to help correlate and interrelate aspects of the overall MISTY PICTURE dust and ejecta characterization program. Pre-shot sampling of the test bed included composite samples from 15 to 75 m distance from Surface Ground Zero (SGZ) representing depths down to 2.5 m, interval samples from 15 to 25 m from SGZ representing depths down to 3 m, and samples of surface material (top 0.5 cm) out to distances of 190 m from SGZ. Sweep-up samples were collected in GREG/SNOB gages located within the DPR. All samples were dry-sieved between 8.0 mm and 0.045 mm (16 size fractions); selected samples were analyzed for fines by a centrifugal settling technique. The size distributions were analyzed using spectral decomposition based upon a sequential fragmentation model. Results suggest that the same particle size subpopulations are present in the ejecta, fallout, and sweep-up samples as are present in the pre-shot test bed. The particle size distribution in post-shot environments apparently can be modelled taking into account heterogeneities in the pre-shot test bed and the dominant wind direction during and following the shot. 13 refs., 12 figs., 2 tabs.
Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup
NASA Astrophysics Data System (ADS)
Braun, A.; Neukum, C.; Azzam, R.
2011-12-01
The accelerating production and application of engineered nanoparticles is causing concern about their release and fate in the environment. For assessing the risk posed to drinking water resources, it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study, an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocity. The analysis of nanoparticles poses several challenges, such as detection and characterization, and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing the concentration and size distribution of the sample. The advantages of this technique are its ability to analyze complex environmental samples, such as the effluent of column experiments including soil components, and its gentle sample treatment. To optimize sample preparation and gain a first idea of the aggregation behavior in soil solutions, sedimentation experiments were used to investigate the effect of ionic strength, sample concentration and addition of a surfactant on particle or aggregate size and temporal dispersion stability. In general, the lower the particle concentration, the more stable the samples. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability increased with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates between 100-200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
Till, J.L.; Jackson, M.J.; Rosenbaum, J.G.; Solheid, P.
2011-01-01
The Tiva Canyon Tuff contains dispersed nanoscale Fe-Ti-oxide grains with a narrow magnetic grain size distribution, making it an ideal material in which to identify and study grain-size-sensitive magnetic behavior in rocks. A detailed magnetic characterization was performed on samples from the basal 5 m of the tuff. The magnetic materials in this basal section consist primarily of (low-impurity) magnetite in the form of elongated submicron grains exsolved from volcanic glass. Magnetic properties studied include bulk magnetic susceptibility, frequency-dependent and temperature-dependent magnetic susceptibility, anhysteretic remanence acquisition, and hysteresis properties. The combined data constitute a distinct magnetic signature at each stratigraphic level in the section corresponding to different grain size distributions. The inferred magnetic domain state changes progressively upward from superparamagnetic grains near the base to particles with pseudo-single-domain or metastable single-domain characteristics near the top of the sampled section. Direct observations of magnetic grain size confirm that distinct transitions in room temperature magnetic susceptibility and remanence probably denote the limits of stable single-domain behavior in the section. These results provide a unique example of grain-size-dependent magnetic properties in noninteracting particle assemblages over three decades of grain size, including close approximations of ideal Stoner-Wohlfarth assemblages, and may be considered a useful reference for future rock magnetic studies involving grain-size-sensitive properties.
NASA Astrophysics Data System (ADS)
Gholizadeh, Ahmad
2018-04-01
In the present work, the influence of different sintering atmospheres and temperatures on the physical properties of Cu0.5Zn0.5Fe2O4 nanoparticles, including the redistribution of Zn2+ and Fe3+ ions, the oxidation of Fe atoms in the lattice, crystallite sizes, IR bands, saturation magnetization and magnetic core sizes, has been investigated. Fitting of the XRD patterns using the Fullprof program, together with FT-IR measurements, shows the formation of a cubic structure with no impurity phase in any of the samples. The unit cell parameter of the samples sintered in air and inert atmospheres tends to decrease with sintering temperature, but that of the samples sintered under a carbon monoxide atmosphere increases. The magnetization curves versus applied magnetic field indicate different behaviour for the samples sintered at 700 °C with respect to the samples sintered at 300 °C. Also, the saturation magnetization increases with sintering temperature and reaches a maximum of 61.68 emu/g in the sample sintered under a reducing atmosphere at 600 °C. The magnetic particle size distributions of the samples have been calculated by fitting the M-H curves with the size-distributed Langevin function. The results obtained from the XRD and FTIR measurements suggest that the magnetic core size has the dominant effect on the variation of the saturation magnetization of the samples.
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this kind of study is important for health workforce planners to know if they want to apply the method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Because of the form of the CI equations, precision continued to increase beyond that point, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly with the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon the circumstances, such as the target group and the budget available.
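A minimal simulation in the spirit of the analysis described above, with all parameters (slot structure, workload distribution) invented for illustration: it shows how the interval for mean weekly hours narrows as GPs are added for a fixed probe frequency, capturing both between-GP and within-GP (measurement) fluctuation.

```python
import numpy as np

rng = np.random.default_rng(1)

SLOTS_PER_WEEK = 56     # e.g. 7 days x 8 daytime 3-h slots (assumption)
HOURS_PER_SLOT = 3.0

def ci_halfwidth(n_gps, probes, true_mean=0.30, sd_between=0.08, reps=3000):
    """Simulated 95% half-width (h) of mean weekly working hours when each
    of n_gps answers `probes` random 'working now?' SMS probes."""
    estimates = []
    for _ in range(reps):
        p = np.clip(rng.normal(true_mean, sd_between, n_gps), 0, 1)  # GP workloads
        frac = rng.binomial(probes, p) / probes                      # probe means
        estimates.append(frac.mean() * SLOTS_PER_WEEK * HOURS_PER_SLOT)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return (hi - lo) / 2

for n in (10, 50, 100, 300):
    print(f"{n:4d} GPs, 1 probe per 3-h slot: +/- {ci_halfwidth(n, SLOTS_PER_WEEK):.1f} h")
```

Increasing `probes` (e.g. one probe per hour rather than per 3-h slot) shrinks the within-GP term, which is the trade-off between panel size and probe frequency the paper quantifies.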
New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084
NASA Technical Reports Server (NTRS)
McKay, D.S.; Cooper, B.L.; Riofrio, L.M.
2009-01-01
We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators including our own group performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution which would be revealed by other methods that produce many smaller size bins.
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
Embedding clinical interventions into observational studies
Newman, Anne B.; Avilés-Santa, M. Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm. James; Krucoff, Mitchell; Kuller, Lewis H.; Lewis, Cora E.; Robinson, Jennifer G.; Taylor, Herman; Treviño, Roberto P.; Weintraub, William
2017-01-01
Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. PMID:26611435
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size; the specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) Samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased; the minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness. (3) Samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
Graham, Simon; O'Connor, Catherine C; Morgan, Stephen; Chamberlain, Catherine; Hocking, Jane
2017-06-01
Background Aboriginal and Torres Strait Islander people (Aboriginal) are Australia's first peoples. Between 2006 and 2015, HIV notifications increased among Aboriginal people; however, among non-Aboriginal people, notifications remained relatively stable. This systematic review and meta-analysis aims to examine the prevalence of HIV among Aboriginal people overall and by subgroup. In November 2015, a search of PubMed and Web of Science, grey literature and abstracts from conferences was conducted. A study was included if it reported the number of Aboriginal people tested and those who tested positive for HIV. The following variables were extracted: gender; Aboriginal status; population group (men who have sex with men, people who inject drugs, adults, youth in detention and pregnant females) and geographical location. An assessment of between-study heterogeneity (I2 test) and within-study bias (selection, measurement and sample size) was also conducted. Seven studies were included; all were cross-sectional study designs. The overall sample size was 3772 and the prevalence of HIV was 0.1% (I2 = 38.3%, P = 0.136). Five studies included convenience samples of people attending Australian Needle and Syringe Program Centres, clinics, hospitals and a youth detention centre, increasing the potential for selection bias. Four studies had a small sample size, decreasing the ability to report pooled estimates. The prevalence of HIV among Aboriginal people in Australia is low. Community-based programs that include both prevention messages for those at risk of infection and culturally appropriate clinical management and support for Aboriginal people living with HIV are needed to prevent HIV increasing among Aboriginal people.
Mächtle, W
1999-01-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of their utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
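As a pointer to how Stokes' equation converts a measured sedimentation velocity into a particle diameter, here is a small sketch with invented rotor and particle parameters; the real methods additionally apply the Mie turbidity correction for concentration, which is omitted here.

```python
import math

def stokes_diameter(v, rho_p, rho_f, eta, accel):
    """Particle diameter (m) from settling velocity via Stokes' law:
    v = d^2 * (rho_p - rho_f) * accel / (18 * eta), solved for d."""
    return math.sqrt(18.0 * eta * v / ((rho_p - rho_f) * accel))

# Hypothetical latex particle sedimenting in water in a centrifugal field:
omega = 2 * math.pi * 40000 / 60   # 40,000 rpm in rad/s
r = 0.065                          # radial position in the cell, m
accel = omega ** 2 * r             # centrifugal acceleration, m/s^2
v = 1.0e-6                         # measured boundary velocity, m/s
d = stokes_diameter(v, rho_p=1050.0, rho_f=998.0, eta=1.0e-3, accel=accel)
print(f"estimated diameter: {d * 1e9:.0f} nm")
```

Running the same calculation in media of different densities (varying `rho_f`) is what lets the first method separate the density and size contributions.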
Xu, Huacheng; Guo, Laodong
2017-06-15
Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to the chemical composition and size distribution. To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size fractionated through a series of micro- and ultra-filtrations with different membranes having different pore-sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. Abundance of dissolved organic carbon, total carbohydrates, chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, but with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size-dependent DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacture-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both sample types and membranes used. Our results here provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sample size and power considerations in network meta-analysis
2012-01-01
Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
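One simple form of the effective-sample-size idea for an indirect comparison of treatments A and B through a common comparator C is sketched below. The specific formula is a commonly cited heuristic for this setting and should be treated as illustrative rather than as the exact estimator developed in the article.

```python
def effective_sample_size(n_ac, n_bc):
    """Heuristic effective sample size of an indirect comparison A vs B
    formed from the A-vs-C and B-vs-C evidence (illustrative formula)."""
    return n_ac * n_bc / (n_ac + n_bc)

# Hypothetical: 2000 patients inform A vs C, 500 inform B vs C.
print(f"effective n for A vs B: {effective_sample_size(2000, 500):.0f}")
# -> 400: the weaker side of the loop dominates the indirect evidence,
# so indirect comparisons can be far less powerful than they appear.
```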
Tooth Size Variation Related to Age in Amboseli Baboons
Galbany, Jordi; Dotras, Laia; Alberts, Susan C.; Pérez-Pérez, Alejandro
2011-01-01
We measured the molar size from a single population of wild baboons from Amboseli (Kenya), both females (n = 57) and males (n = 50). All the females were of known age; the males represented a mix of known-age individuals (n = 31) and individuals with ages estimated to within 2 years (n = 19). The results showed a significant reduction in the mesiodistal length of teeth in both sexes as a function of age. Overall patterns of age-related change in tooth size did not change whether we included or excluded the individuals of estimated age, but patterns of statistical significance changed as a result of changed sample sizes. Our results demonstrate that tooth length is directly related to age due to interproximal wearing caused by M2 and M3 compression loads. Dental studies in primates, including both fossil and extant species, are mostly based on specimens obtained from osteological collections of varying origins, for which the age at death of each individual in the sample is not known. Researchers should take into account the phenomenon of interproximal attrition leading to reduced tooth size when measuring tooth length for odontometric purposes. PMID:21325862
An Integrated Tool for System Analysis of Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.
2012-01-01
The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.
Standardized mean differences cause funnel plot distortion in publication bias assessments.
Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E
2017-09-08
Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
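The mechanism behind the distortion can be reproduced in a few lines: the usual standard error of an SMD is itself a function of the observed effect, so effect and SE are correlated even when every study is published. The following sketch (assumed group sizes and true effect) illustrates this; it is a simplified simulation, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulated_smd_and_se(n_per_group, true_d, n_studies=2000):
    """Simulate two-group studies, returning observed SMDs and their SEs.
    The SE formula contains d itself, which couples effect size and SE and
    can mimic funnel-plot asymmetry without any publication bias."""
    n = n_per_group
    d_obs = np.empty(n_studies)
    for i in range(n_studies):
        a = rng.normal(true_d, 1.0, n)   # treatment group
        b = rng.normal(0.0, 1.0, n)      # control group
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        d_obs[i] = (a.mean() - b.mean()) / sp
    # SE(d)^2 = (n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)), with n1 = n2 = n:
    se = np.sqrt(2.0 / n + d_obs ** 2 / (4.0 * n))
    return d_obs, se

d_obs, se = simulated_smd_and_se(n_per_group=10, true_d=1.0)
print("correlation between SMD and its SE:", round(np.corrcoef(d_obs, se)[0, 1], 2))
# A clearly nonzero correlation here fakes asymmetry despite no bias;
# it weakens as n_per_group grows or true_d shrinks, matching the paper.
```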
A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region
Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun
2015-01-01
Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in supercooled liquid region with different deformation variables including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690
2013-01-01
Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. The ratio of odds ratios (ROR) was calculated for each meta-analysis and then RORs were combined using a meta-analytic approach. ROR<1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs <1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Sehmel, George A.
1979-01-01
An isokinetic air sampler includes a filter, a holder for the filter, an air pump for drawing air through the filter at a fixed, predetermined rate, an inlet assembly for the sampler having an inlet opening therein of a size such that isokinetic air sampling is obtained at a particular wind speed, a closure for the inlet opening and means for simultaneously opening the closure and turning on the air pump when the wind speed is such that isokinetic air sampling is obtained. A system incorporating a plurality of such samplers provided with air pumps set to draw air through the filter at the same fixed, predetermined rate and having different inlet opening sizes for use at different wind speeds is included within the ambit of the present invention as is a method of sampling air to measure airborne concentrations of particulate pollutants as a function of wind speed.
NASA Astrophysics Data System (ADS)
Turner, Jefferson T.; Doucette, Gregory J.; Keafer, Bruce A.; Anderson, Donald M.
2005-09-01
During spring blooms of the toxic dinoflagellate Alexandrium fundyense in Casco Bay, Maine in 1998, we investigated vectorial intoxication of various zooplankton size fractions with PSP toxins, including zooplankton community composition from quantitative zooplankton samples (>102 μm), as well as zooplankton composition in relation to toxin levels in various size fractions (20-64, 64-100, 100-200, 200-500, >500 μm). Zooplankton abundance in 102 μm mesh samples was low (most values < 10,000 animals m-3) from early April through early May, but increased to maxima in mid-June (cruise mean = 121,500 animals m-3). Quantitative zooplankton samples (>102 μm) were dominated by copepod nauplii, and Oithona similis copepodites and adults at most locations except for those furthest inshore. At these inshore locations, Acartia hudsonica copepodites and adults were usually dominant. Larger copepods such as Calanus finmarchicus, Centropages typicus, and Pseudocalanus spp. were found primarily offshore, and at much lower abundances than O. similis. Rotifers, mainly present from late April to late May, were most abundant inshore. The marine cladoceran Evadne nordmani was sporadically abundant, particularly in mid-June. Microplankton in 20-64 μm size fractions was generally dominated by A. fundyense, non-toxic dinoflagellates, and tintinnids. Microplankton in 64-100 μm size fractions was generally dominated by larger non-toxic dinoflagellates, tintinnids, aloricate ciliates, and copepod nauplii, and in early May, rotifers. Some samples (23%) in the 64-100 μm size fractions contained abundant cells of A. fundyense, presumably due to sieve clogging, but most did not contain A. fundyense cells. This suggests that PSP toxin levels in those samples were due to vectorial intoxication of microzooplankters such as heterotrophic dinoflagellates, tintinnids, aloricate ciliates, rotifers, and copepod nauplii via feeding on A. fundyense cells. Dominant taxa in zooplankton fractions varied in size. Samples in the 100-200 μm size fraction were overwhelmingly dominated in most cases by copepod nauplii and small copepodites of O. similis, and during late May, rotifers. Samples in the 200-500 μm size fraction contained fewer copepod nauplii, but progressively more copepodites and adults of O. similis, particularly at offshore locations. At the most inshore stations, copepodites and adults of A. hudsonica were the usual dominants. There were few copepod nauplii or O. similis in the >500 μm size fraction, which was usually dominated by copepodites and adults of C. finmarchicus, C. typicus, and Pseudocalanus spp. at offshore locations, and A. hudsonica inshore. Most of the higher PSP toxin concentrations were found in the larger zooplankton size fractions that were dominated by larger copepods such as C. finmarchicus and C. typicus. In contrast to our earlier findings, elevated toxin levels were also measured in numerous samples from smaller zooplankton size fractions, dominated by heterotrophic dinoflagellates, tintinnids and aloricate ciliates, rotifers, copepod nauplii, and smaller copepods such as O. similis and, at the most inshore locations, A. hudsonica. Thus, our data suggest that ingested PSP toxins are widespread throughout the zooplankton grazing community, and that potential vectors for intoxication of zooplankton assemblages include heterotrophic dinoflagellates, rotifers, protozoans, copepod nauplii, and small copepods.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as good as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
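The adjustment strategy can be made concrete. The abstract does not give the exact adjustment function, but a common choice rescales the fit statistic by the ratio of nominal sample sizes, since maximum-likelihood chi-square fit statistics grow roughly in proportion to N - 1. A minimal sketch in Python, under that assumption:

```python
# Sketch: rescaling a model-fit chi-square to a smaller nominal sample size,
# assuming the statistic scales with (N - 1), as in ML fit functions.
def adjusted_chi_square(chi2_orig: float, n_orig: int, n_new: int) -> float:
    """Rescale a chi-square obtained at sample size n_orig to size n_new."""
    return chi2_orig * (n_new - 1) / (n_orig - 1)

# Example: a chi-square of 840 at N = 21,000 rescaled to N = 5,000 (~200).
print(adjusted_chi_square(840.0, 21_000, 5_000))
```

Rescaling only shifts the statistic; an actual random subsample also changes sampling error, which is consistent with the divergence between the two strategies at small sizes reported above.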
An Analysis of Methods Used to Examine Gender Differences in Computer-Related Behavior.
ERIC Educational Resources Information Center
Kay, Robin
1992-01-01
Review of research investigating gender differences in computer-related behavior examines statistical and methodological flaws. Issues addressed include sample selection, sample size, scale development, scale quality, the use of univariate and multivariate analyses, regression analysis, construct definition, construct testing, and the…
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
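As a point of reference for the Mann-Whitney-based approach the authors favor, a minimal sketch follows: the empirical AUC computed as the normalized Mann-Whitney statistic, with a Wald-type CI from the Hanley-McNeil (1982) variance approximation. This is one classical interval of the kind such comparisons include, not necessarily the specific method recommended in the paper.

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """Empirical AUC as the normalized Mann-Whitney statistic (ties count 1/2)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def auc_ci_hanley_mcneil(pos, neg, z=1.96):
    """Wald-type CI using the Hanley-McNeil (1982) variance approximation."""
    a, n1, n0 = auc_mann_whitney(pos, neg), len(pos), len(neg)
    q1, q2 = a / (2 - a), 2 * a**2 / (1 + a)
    se = np.sqrt((a*(1-a) + (n1-1)*(q1 - a**2) + (n0-1)*(q2 - a**2)) / (n0*n1))
    return a, max(0.0, a - z*se), min(1.0, a + z*se)

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, 15)   # small diseased sample
neg = rng.normal(0.0, 1.0, 15)   # small healthy sample
print(auc_ci_hanley_mcneil(pos, neg))
```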
Method and system for laser-based formation of micro-shapes in surfaces of optical elements
Bass, Isaac Louis; Guss, Gabriel Mark
2013-03-05
A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.
Elahi, Fanny M; Marx, Gabe; Cobigo, Yann; Staffaroni, Adam M; Kornak, John; Tosun, Duygu; Boxer, Adam L; Kramer, Joel H; Miller, Bruce L; Rosen, Howard J
2017-01-01
Degradation of white matter microstructure has been demonstrated in frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD). In preparation for clinical trials, ongoing studies are investigating the utility of longitudinal brain imaging for quantification of disease progression. To date only one study has examined sample size calculations based on longitudinal changes in white matter integrity in FTLD. To quantify longitudinal changes in white matter microstructural integrity in the three canonical subtypes of frontotemporal dementia (FTD) and AD using diffusion tensor imaging (DTI). 60 patients with clinical diagnoses of FTD, including 27 with behavioral variant frontotemporal dementia (bvFTD), 14 with non-fluent variant primary progressive aphasia (nfvPPA), and 19 with semantic variant PPA (svPPA), as well as 19 patients with AD and 69 healthy controls were studied. We used a voxel-wise approach to calculate annual rate of change in fractional anisotropy (FA) and mean diffusivity (MD) in each group using two time points approximately one year apart. Mean rates of change in FA and MD in 48 atlas-based regions-of-interest, as well as global measures of cognitive function were used to calculate sample sizes for clinical trials (80% power, alpha of 5%). All FTD groups showed statistically significant baseline and longitudinal white matter degeneration, with predominant involvement of frontal tracts in the bvFTD group, frontal and temporal tracts in the PPA groups and posterior tracts in the AD group. Longitudinal change in MD yielded a larger number of regions with sample sizes below 100 participants per therapeutic arm in comparison with FA. SvPPA had the smallest sample size based on change in MD in the fornix (n = 41 participants per study arm to detect a 40% effect of drug), and nfvPPA and AD had their smallest sample sizes based on rate of change in MD within the left superior longitudinal fasciculus (n = 49 for nfvPPA, and n = 23 for AD). BvFTD generally showed the largest sample size estimates (minimum n = 140 based on MD in the corpus callosum). The corpus callosum appeared to be the best region for a potential study that would include all FTD subtypes. Change in global measure of functional status (CDR box score) yielded the smallest sample size for bvFTD (n = 71), but clinical measures were inferior to white matter change for the other groups. All three of the canonical subtypes of FTD are associated with significant change in white matter integrity over one year. These changes are consistent enough that drug effects in future clinical trials could be detected with relatively small numbers of participants. While there are some differences in regions of change across groups, the genu of the corpus callosum is a region that could be used to track progression in studies that include all subtypes.
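The sample-size arithmetic behind figures like n = 41 per arm can be sketched with the standard normal-approximation formula for comparing two means, where the detectable difference is a proportional slowing of the annual rate of change. The rate and SD below are hypothetical placeholders, not values from the study:

```python
from scipy.stats import norm

def n_per_arm(annual_change, sd_change, drug_effect=0.40, alpha=0.05, power=0.80):
    """Per-arm n to detect a proportional slowing of annual change,
    via the standard two-sample normal-approximation formula."""
    delta = drug_effect * abs(annual_change)       # treatment-placebo difference
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_change / delta) ** 2

# Hypothetical: MD worsening of 0.020 units/yr, SD 0.010 across participants.
print(round(n_per_arm(0.020, 0.010)))  # ~25 per arm
```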
Embedding clinical interventions into observational studies.
Newman, Anne B; Avilés-Santa, M Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm James; Krucoff, Mitchell; Kuller, Lewis H; Lewis, Cora E; Robinson, Jennifer G; Taylor, Herman; Treviño, Roberto P; Weintraub, William
2016-01-01
Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. Copyright © 2015. Published by Elsevier Inc.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
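A rough sketch of a Wald-type non-inferiority test on the rate-ratio scale is below. It uses the simple sample-variance estimator, one of the comparators in the paper's simulations, rather than the recommended restricted maximum-likelihood variance, which requires iterative estimation:

```python
import numpy as np

def wald_noninferiority(x_exp, x_ref, margin=1.3):
    """Wald-type z for H0: rate_E / rate_R >= margin (counts are adverse
    events, so smaller is better); reject H0, i.e. claim non-inferiority,
    for sufficiently negative z. Sample-variance (delta-method) SE."""
    x_exp, x_ref = np.asarray(x_exp, float), np.asarray(x_ref, float)
    log_rr = np.log(x_exp.mean() / x_ref.mean())
    se = np.sqrt(x_exp.var(ddof=1) / (len(x_exp) * x_exp.mean() ** 2)
                 + x_ref.var(ddof=1) / (len(x_ref) * x_ref.mean() ** 2))
    return (log_rr - np.log(margin)) / se

rng = np.random.default_rng(7)
exp_arm = rng.negative_binomial(n=2, p=0.5, size=200)  # mean 2, overdispersed
ref_arm = rng.negative_binomial(n=2, p=0.5, size=200)  # mean 2
print(wald_noninferiority(exp_arm, ref_arm))           # clearly negative here
```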
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
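A toy illustration of the ENBS trade-off for one study: the expected value of sample information grows with n but saturates, while costs grow linearly, so ENBS has an interior maximum. The conjugate-normal model and all numbers below are illustrative assumptions, not the zanamivir example:

```python
import numpy as np
from scipy.stats import norm

def evsi_per_person(n, mu0, sd0, sd_obs):
    """EVSI for adopt (prior mean net benefit mu0) vs reject (0), with a
    normal prior and n observations of a normal endpoint."""
    post_var = 1.0 / (1.0 / sd0**2 + n / sd_obs**2)
    s = np.sqrt(sd0**2 - post_var)        # preposterior sd of posterior mean
    gain = mu0 * norm.cdf(mu0 / s) + s * norm.pdf(mu0 / s)
    return gain - max(mu0, 0.0)

def enbs(n, pop=100_000, cost_fixed=5e5, cost_per_subject=2e3,
         mu0=50.0, sd0=200.0, sd_obs=4000.0):
    """Population EVSI minus trial cost; all parameters are illustrative."""
    return pop * evsi_per_person(n, mu0, sd0, sd_obs) \
        - (cost_fixed + cost_per_subject * n)

ns = np.arange(50, 5001, 50)
best = max(ns, key=enbs)
print(best, round(enbs(best)))            # interior optimum over the grid
```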
NASA Astrophysics Data System (ADS)
Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco
2010-03-01
Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy’s regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
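The Chao2 and jackknife estimators mentioned above are simple functions of how many species are recorded in exactly one or two sampling units. A minimal sketch, using one common bias-corrected form of Chao2:

```python
import numpy as np

def chao2_jackknife(incidence):
    """Richness estimates from a sites x species presence/absence matrix:
    observed richness, Chao2 (a bias-corrected form), first-order jackknife."""
    incidence = np.asarray(incidence, bool)
    m = incidence.shape[0]                         # number of sampling units
    counts = incidence.sum(axis=0)                 # sites occupied per species
    s_obs = int((counts > 0).sum())
    q1, q2 = int((counts == 1).sum()), int((counts == 2).sum())
    chao2 = s_obs + q1 * (q1 - 1) / (2 * (q2 + 1))
    jack1 = s_obs + q1 * (m - 1) / m
    return s_obs, chao2, jack1

# Toy incidence matrix: 4 sites x 6 species.
sites = [[1, 1, 0, 0, 1, 0],
         [1, 0, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0],
         [1, 0, 0, 1, 0, 0]]
print(chao2_jackknife(sites))   # (5, 6.5, 7.25)
```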
Overview of the Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Dillman, Robert; Corliss, James
2008-01-01
NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.
NASA Astrophysics Data System (ADS)
Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.
2012-12-01
Early studies pioneering the method for catchment-wide erosion rates by measuring 10Be in alluvial sediment were taken at river mouths and used the sand-sized grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125 mm - 0.710 mm fraction or the 0.125 mm - 4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates). We therefore hypothesize that the coarser grain sizes we included are preferentially sampling a smaller upstream area, and not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions: 0.125 mm - 0.710 mm, 0.710 mm - 4 mm, and >4 mm and measured 10Be concentrations in each fraction. Although there is some variation in the grain size fraction that yields the highest erosion rate, generally, the coarser grain size fractions have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710 mm - 4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretations of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand-sized fraction, resulting in erosion rates that are more pertinent to that sample point than to the entire catchment.
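The subcatchment calculation attributed to Granger et al. [1996] amounts to a mass balance on sediment flux between nested samples, assuming flux scales as erosion rate times contributing area. A minimal sketch with invented numbers (rates in mm/kyr, areas in km²) shows how inconsistent nested samples produce the negative rates described above:

```python
# Unmixing a subcatchment erosion rate from nested river samples, assuming
# sediment flux ~ erosion rate * area. Numbers are illustrative only.
def subcatchment_rate(e_down, a_down, e_up, a_up):
    """Erosion rate of the area between an upstream sample (e_up over a_up)
    and a downstream sample (e_down, integrating the full area a_down)."""
    return (e_down * a_down - e_up * a_up) / (a_down - a_up)

# Consistent nested samples yield a sensible reach-averaged rate...
print(subcatchment_rate(e_down=120, a_down=300, e_up=60, a_up=200))  # 240.0
# ...but if the downstream sample under-represents the upstream signal (for
# example, coarse grains sourced locally), the unmixing turns negative.
print(subcatchment_rate(e_down=35, a_down=300, e_up=60, a_up=200))   # -15.0
```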
Photographic techniques for characterizing streambed particle sizes
Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.
2003-01-01
We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel-cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.
Jahandar Lashaki, Masoud; Atkinson, John D; Hashisho, Zaher; Phillips, John H; Anderson, James E; Nichols, Mark
2016-11-05
The objective of this study is to determine the contribution of surface oxygen groups to irreversible adsorption (aka heel formation) during cyclic adsorption/regeneration of organic vapors commonly found in industrial systems, including vehicle-painting operations. For this purpose, three chemically modified activated carbon samples, including two oxygen-deficient (hydrogen-treated and heat-treated) and one oxygen-rich (nitric acid-treated) sample, were prepared. The samples were tested for 5 adsorption/regeneration cycles using a mixture of nine organic compounds. Cumulative heel by mass balance was 14% and 20% higher for the oxygen-functionalized and hydrogen-treated samples, respectively, relative to the heat-treated sample. Thermal analysis results showed heel formation due to physisorption for the oxygen-deficient samples, and weakened physisorption combined with chemisorption for the oxygen-rich sample. Chemisorption was attributed to consumption of surface oxygen groups by adsorbed species, resulting in formation of high boiling point oxidation byproducts or bonding between the adsorbates and the surface groups. Pore size distributions indicated that different pore sizes contributed to heel formation: narrow micropores (<7 Å) in the oxygen-deficient samples and midsize micropores (7-12 Å) in the oxygen-rich sample. The results from this study help explain the heel formation mechanism and how it relates to chemically tailored adsorbent materials. Copyright © 2016 Elsevier B.V. All rights reserved.
Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng
2012-09-01
This study investigates the autocorrelation bandwidths of dual-window (DW) optical coherence tomography (OCT) k-space scattering profile of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric defined as the ratio of the 10% to 90% autocorrelation bandwidths is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and revealed a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer size in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.
Kumar, S.; Simonson, S.E.; Stohlgren, T.J.
2009-01-01
We investigated butterfly responses to plot-level characteristics (plant species richness, vegetation height, and range in NDVI [normalized difference vegetation index]) and spatial heterogeneity in topography and landscape patterns (composition and configuration) at multiple spatial scales. Stratified random sampling was used to collect data on butterfly species richness from seventy-six 20 × 50 m plots. The plant species richness and average vegetation height data were collected from 76 modified-Whittaker plots overlaid on the 76 butterfly plots. Spatial heterogeneity around sample plots was quantified by measuring topographic variables and landscape metrics at eight spatial extents (radii of 300, 600, ..., 2,400 m). The number of butterfly species recorded was strongly positively correlated with plant species richness, proportion of shrubland, and mean patch size of shrubland. Patterns in butterfly species richness were negatively correlated with other variables including mean patch size, average vegetation height, elevation, and range in NDVI. The best predictive model, selected using Akaike's Information Criterion corrected for small sample size (AICc), explained 62% of the variation in butterfly species richness at the 2,100 m spatial extent. Average vegetation height and mean patch size were among the best predictors of butterfly species richness. The models that included plot-level information and topographic variables explained relatively less variation in butterfly species richness, and were improved significantly after including landscape metrics. Our results suggest that spatial heterogeneity greatly influences patterns in butterfly species richness, and that it should be explicitly considered in conservation and management actions. © 2008 Springer Science+Business Media B.V.
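For reference, the AICc used for model selection here adds a small-sample penalty to AIC: AICc = AIC + 2k(k+1)/(n-k-1). A self-contained sketch comparing two toy regression models (simulated data, not the butterfly dataset):

```python
import numpy as np

def gaussian_loglik(resid):
    """Gaussian log-likelihood at the ML variance estimate."""
    n, s2 = len(resid), np.mean(resid**2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

def aicc(log_lik, k, n):
    """AIC corrected for small samples: AIC + 2k(k+1)/(n-k-1)."""
    return -2 * log_lik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(0)
n = 76                                      # e.g., 76 plots
x1, x2 = rng.normal(size=(2, n))
y = 2.0 + 1.5 * x1 + rng.normal(size=n)     # only x1 matters here

for name, X in [("x1 only", np.column_stack([np.ones(n), x1])),
                ("x1 + x2", np.column_stack([np.ones(n), x1, x2]))]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    k = X.shape[1] + 1                      # coefficients + error variance
    print(name, round(aicc(gaussian_loglik(y - X @ beta), k, n), 2))
```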
Structure and properties of clinical coralline implants measured via 3D imaging and analysis.
Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis
2006-05-01
The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 μm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
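A minimal sketch of one of these criteria, the average length criterion, for the simplest case of a single perfect test with a conjugate beta prior (the paper's setting, with combinations of imperfect tests, widens intervals and adds per-test costs). It searches for the smallest n whose expected 95% credible-interval length falls below a target precision; the prior parameters are illustrative:

```python
import numpy as np
from scipy.stats import beta, betabinom

def average_length(n, a=2, b=18, cred=0.95):
    """Expected central posterior-interval length, averaging over the
    beta-binomial preposterior predictive distribution of the data."""
    ks = np.arange(n + 1)
    w = betabinom.pmf(ks, n, a, b)
    lo = beta.ppf((1 - cred) / 2, a + ks, b + n - ks)
    hi = beta.ppf(1 - (1 - cred) / 2, a + ks, b + n - ks)
    return float(np.sum(w * (hi - lo)))

n = 50
while average_length(n) > 0.10:   # target precision: length <= 0.10
    n += 10
print(n, round(average_length(n), 3))
```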
A strategy for characterized aerosol-sampling transport efficiency.
NASA Astrophysics Data System (ADS)
Schwarz, J. P.
2017-12-01
A fundamental concern when sampling aerosol in the laboratory or in situ, on the ground or (especially) from aircraft, is characterizing transport losses due to particles contacting the walls of tubing used for transport. Depending on the size range of the aerosol, different mechanisms dominate these losses: diffusion for the ultra-fine, and inertial and gravitational settling losses for the coarse mode. In the coarse mode, losses become intractable very quickly with increasing particle size above 5 µm diameter. Here we present these issues, together with a concept approach to reducing aerosol losses via strategic dilution through porous tubing, including results of laboratory testing of a prototype. We infer the potential value of this approach to atmospheric aerosol sampling.
Study of Cost of Distance Education Institutes with Different Size Classes in India.
ERIC Educational Resources Information Center
Datt, Ruddar
A study of the cost of distance education institutes in India with different size classes involved nine institutions. The sample included 47 percent of total enrollment in distance education institutions in India. The study was restricted to recurring costs and examined the shares of different components of costs and the sources of funding. It…
Seasonal and sexual differences in American marten diet in northeastern Oregon.
E.L. Bull
2002-01-01
Information on the diet of the American marten (Martes americana) is vital to understanding habitat requirements of populations of this species. The frequency of occurrence of prey items found in 1014 scat samples associated with 31 radiocollared American martens in northeastern Oregon included: 62.7% vole-sized prey, 28.2% squirrel-sized prey, 22....
Thanh Noi, Phan; Kappas, Martin
2017-12-22
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets.
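The comparison design is straightforward to reproduce in outline: train each classifier on increasing numbers of samples per class and score overall accuracy on a held-out set. The sketch below uses synthetic data in place of Sentinel-2 pixels and scikit-learn defaults rather than tuned hyperparameters:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for pixel features; six classes mirror the land-cover types.
X, y = make_classification(n_samples=12000, n_features=10, n_informative=8,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

models = {"RF": RandomForestClassifier(random_state=0),
          "kNN": KNeighborsClassifier(),
          "SVM": SVC(kernel="rbf")}

for n_per_class in (50, 250, 750):          # training-sample sizes per class
    idx = np.hstack([np.where(y_tr == c)[0][:n_per_class] for c in range(6)])
    for name, model in models.items():
        acc = model.fit(X_tr[idx], y_tr[idx]).score(X_te, y_te)
        print(f"{n_per_class:>4}/class {name}: OA = {acc:.3f}")
```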
System for precise position registration
Sundelin, Ronald M.; Wang, Tong
2005-11-22
An apparatus for enabling accurate retention of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 μm.
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
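The power arithmetic here follows from standard two-sample t-test calculations. As a sketch, the smallest SMD detectable with 80% power at the reported median sample sizes (equal groups and two-sided alpha = 0.05 assumed):

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
# Leaving effect_size unspecified makes solve_power solve for it.
for total_n in (25, 30, 128):
    d = solver.solve_power(nobs1=total_n // 2, power=0.80, alpha=0.05,
                           ratio=1.0, alternative="two-sided")
    print(f"n_total={total_n}: detectable SMD ~ {d:.2f}")
# Roughly: n=25 -> SMD ~ 1.2; n=30 -> ~ 1.1; only ~128 total reaches SMD 0.5.
```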
Hallén, Jonas; Jensen, Jesper K; Buser, Peter; Jaffe, Allan S; Atar, Dan
2011-03-01
Presence of microvascular obstruction (MVO) following primary percutaneous coronary intervention (pPCI) for ST-elevation myocardial infarction (STEMI) confers higher risk of left-ventricular remodelling and dysfunction. Measurement of cardiac troponin I (cTnI) after STEMI reflects the extent of myocardial destruction. We aimed to explore whether cTnI values were associated with presence of MVO independently of infarct size in STEMI patients receiving pPCI. 175 patients with STEMI were included. cTnI was sampled at 24 and 48 h. MVO and infarct size were determined by delayed enhancement with cardiac magnetic resonance at five to seven days after the index event. The presence of MVO following STEMI was associated with larger infarct size and higher values of cTnI at 24 and 48 h. For any given infarct size or cTnI value, there was a greater risk of MVO development in non-anterior infarctions. cTnI was strongly associated with MVO in both anterior and non-anterior infarctions (P < 0.01) after adjustment for covariates (including infarct size), and was reasonably effective in predicting MVO in individual patients (area-under-the-curve ≥0.81). Presence of MVO is reflected in levels of cTnI sampled at an early time-point following STEMI and this association persists after adjustment for infarct size.
Corrosion resistance of nanostructured titanium.
Garbacz, H; Pisarek, M; Kurzydłowski, K J
2007-11-01
The present work reports results of studies of corrosion resistance of pure nano-Ti-Grade 2 after hydrostatic extrusion. The grain size of the examined samples was below 90 nm. Surface analytical techniques, including AES combined with Ar(+) ion sputtering, were used to investigate the chemical composition and thicknesses of the oxides formed on nano-Ti. It has been found that the grain size of the titanium substrate did not influence the thickness of oxide formed on the titanium. The thickness of the oxide observed on the titanium samples before and after hydrostatic extrusion was about 6 nm. Tests carried out in a NaCl solution revealed a slightly lower corrosion resistance of nano-Ti in comparison with the titanium with micrometric grain size.
NASA Astrophysics Data System (ADS)
Chan, Y. C.; Vowles, P. D.; McTainsh, G. H.; Simpson, R. W.; Cohen, D. D.; Bailey, G. M.; McOrist, G. D.
This paper describes a method for the simultaneous collection of size-fractionated aerosol samples on several collection substrates, including glass-fibre filter, carbon tape and silver tape, with a commercially available high-volume cascade impactor. This permitted various chemical analysis procedures, including ion beam analysis (IBA), instrumental neutron activation analysis (INAA), carbon analysis and scanning electron microscopy (SEM), to be carried out on the samples.
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.
The impact of multiple endpoint dependency on Q and I(2) in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I(2) in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I(2) values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
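The GLS approach can be sketched compactly: stack the endpoint-level effect sizes and weight by the inverse of a block-diagonal variance-covariance matrix whose off-diagonal terms carry the between-outcomes correlation. The common-effect version, with illustrative inputs:

```python
import numpy as np
from scipy.linalg import block_diag

def gls_common_effect(effects, variances, rho):
    """Common-effect GLS estimate and SE when each study contributes two
    correlated endpoints; rho is the between-outcomes correlation."""
    y = np.concatenate(effects)
    blocks = [np.array([[v1, rho * np.sqrt(v1 * v2)],
                        [rho * np.sqrt(v1 * v2), v2]])
              for v1, v2 in variances]
    W = np.linalg.inv(block_diag(*blocks))      # inverse var-cov weights
    X = np.ones((len(y), 1))
    var_hat = np.linalg.inv(X.T @ W @ X)
    beta = float(var_hat @ X.T @ W @ y)
    return beta, float(np.sqrt(var_hat))

# Three toy studies, two endpoints each, between-outcomes correlation 0.7.
effects = [(0.42, 0.35), (0.10, 0.22), (0.55, 0.48)]
variances = [(0.04, 0.05), (0.02, 0.02), (0.06, 0.05)]
print(gls_common_effect(effects, variances, rho=0.7))
```

Setting rho = 0 recovers the univariate inverse-variance analysis whose inflated Type I error rates the simulation documents.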
Simon, Nancy S.; Ingle, Sarah N.
2011-01-01
This study of phosphorus (P) cycling in eutrophic Upper Klamath Lake (UKL), Oregon, was conducted by the U.S. Geological Survey in cooperation with the U.S. Bureau of Reclamation. Lakebed sediments from the upper 30 centimeters (cm) of cores collected from 26 sites were characterized. Cores were sampled at 0.5, 1.5, 2.5, 3.5, 4.5, 10, 15, 20, 25, and 30 cm. Prior to freezing, water content and sediment pH were determined. After being freeze-dried, all samples were separated into greater than 63-micron (μm) particle-size (coarse) and less than 63-μm particle-size (fine) fractions. In the surface samples (0.5 to 4.5 cm below the sediment water interface), approximately three-fourths of the particles were larger than 63 μm. The ratios of the coarse particle-size fraction (>63 μm) and the fine particle-size fraction (<63 μm) were approximately equal in samples at depths greater than 10 cm below the sediment water interface. Chemical analyses included both size fractions of freeze-dried samples. Chemical analyses included determination of total concentrations of aluminum (Al), calcium (Ca), carbon (C), iron (Fe), poorly crystalline Fe, nitrogen (N), P, and titanium (Ti). Total Fe concentrations were the largest in sediment from the northern portion of UKL, Howard Bay, and the southern portion of the lake. Concentrations of total Al, Ca, and Ti were largest in sediment from the northern, central, and southernmost portions of the lake and in sediment from Howard Bay. Concentrations of total C and N were largest in sediment from the embayments and in sediment from the northern arm and southern portion of the lake in the general region of Buck Island. Concentrations of total C were larger in the greater than 63-μm particle-size fraction than in the less than 63-μm particle-size fraction. Sediments were sequentially extracted to determine concentrations of inorganic forms of P, including loosely sorbed P, P associated with poorly crystalline Fe oxides, and P associated with mineral phases. The difference between the concentration of total P and sum of the concentrations of inorganic forms of P is referred to as residual P. Residual P was the largest fraction of P in all of the sediment samples. In UKL, the correlation between concentrations of total P and total Fe in sediment is poor (R2<0.1). The correlation between the concentrations of total P and P associated with poorly crystalline Fe oxides is good (R2=0.43) in surface sediment (0.5-4.5 cm below the sediment water interface) but poor (R2<0.1) in sediments at depths between 10 cm and 30 cm. Phosphorus associated with poorly crystalline Fe oxides is considered bioavailable because it is released when sediment conditions change from oxidizing to reducing, which causes dissolution of Fe oxides.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing
2017-01-01
Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11-15.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged in genomic applications as well, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows one to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
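The paper's estimator is Bayesian nonparametric and has its own closed form; as a simple frequentist reference point, the classical Good-Turing estimate of the probability that the next observation is a previously unseen species is the proportion of singletons in the sample:

```python
from collections import Counter

def good_turing_new_species(sample):
    """Good-Turing estimate of the discovery probability for the next draw:
    m1 / n, where m1 is the number of species observed exactly once."""
    counts = Counter(sample)
    m1 = sum(1 for c in counts.values() if c == 1)   # singletons
    return m1 / len(sample)

reads = ["geneA"] * 5 + ["geneB"] * 2 + ["geneC", "geneD", "geneE"]
print(good_turing_new_species(reads))   # 3 singletons / 10 reads = 0.3
```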
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
Containers and systems for the measurement of radioactive gases and related methods
Mann, Nicholas R; Watrous, Matthew G; Oertel, Christopher P; McGrath, Christopher A
2017-06-20
Containers for a fluid sample containing a radionuclide for measurement of radiation from the radionuclide include an outer shell having one or more ports between an interior and an exterior of the outer shell, and an inner shell secured to the outer shell. The inner shell includes a detector receptacle sized for at least partial insertion into the outer shell. The inner shell and outer shell together at least partially define a fluid sample space. The outer shell and inner shell are configured for maintaining an operating pressure within the fluid sample space of at least about 1000 psi. Systems for measuring radioactivity in a fluid include such a container and a radiation detector received at least partially within the detector receptacle. Methods of measuring radioactivity in a fluid sample include maintaining a fluid sample within a Marinelli-type container at a pressure of at least about 1000 psi.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also offers two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
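The variogram idea at the heart of VARS can be illustrated in a few lines. The sketch below is a toy, not VARS-TOOL's API or its sampling strategy: it estimates a directional variogram of a model response on the unit hypercube and integrates it over scales, in the spirit of IVARS. The model and all names are invented for illustration.

    import numpy as np

    def directional_variogram(f, n_dims, dim, lags, n_base=512, seed=0):
        # Toy estimate of gamma_d(h) = 0.5 * E[(f(x + h*e_d) - f(x))^2]
        # for parameter `dim`, using random base points on [0, 1]^n_dims.
        # (Clipping at the boundary slightly biases large lags; fine for a toy.)
        rng = np.random.default_rng(seed)
        X = rng.random((n_base, n_dims))
        out = []
        for h in lags:
            Xh = X.copy()
            Xh[:, dim] = np.clip(Xh[:, dim] + h, 0.0, 1.0)
            out.append(0.5 * np.mean((f(Xh) - f(X)) ** 2))
        return np.array(out)

    def ivars(gamma, lags, H):
        # Integrated variogram up to scale H, an IVARS-style summary metric.
        mask = lags <= H
        return np.trapz(gamma[mask], lags[mask])

    # Example model: x0 carries the large-scale effect, x1 a small wiggle.
    f = lambda X: X[:, 0] + 0.2 * np.sin(6 * np.pi * X[:, 1])
    lags = np.linspace(0.01, 0.5, 25)
    for d in (0, 1):
        g = directional_variogram(f, 2, d, lags)
        print("parameter", d, "IVARS(0.3) =", round(ivars(g, lags, 0.3), 4))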
Metric variation and sexual dimorphism in the dentition of Ouranopithecus macedoniensis.
Schrein, Caitlin M
2006-04-01
The fossil sample attributed to the late Miocene hominoid taxon Ouranopithecus macedoniensis is characterized by a high degree of dental metric variation. As a result, some researchers support a multiple-species taxonomy for this sample. Other researchers do not think that the sample variation is too great to be accommodated within one species. This study examines variation and sexual dimorphism in mandibular canine and postcanine dental metrics of an Ouranopithecus sample. Bootstrapping (resampling with replacement) of extant hominoid dental metric data is performed to test the hypothesis that the coefficients of variation (CV) and the indices of sexual dimorphism (ISD) of the fossil sample are not significantly different from those of modern great apes. Variation and sexual dimorphism in Ouranopithecus M(1) dimensions were statistically different from those of all extant ape samples; however, most of the dental metrics of Ouranopithecus were neither more variable nor more sexually dimorphic than those of Gorilla and Pongo. Similarly high levels of mandibular molar variation are known to characterize other fossil hominoid species. The Ouranopithecus specimens are morphologically homogeneous and it is probable that all but one specimen included in this study are from a single population. It is unlikely that the sample includes specimens of two sympatric large-bodied hominoid species. For these reasons, a single-species hypothesis is not rejected for the Ouranopithecus macedoniensis material. Correlations between mandibular first molar tooth size dimorphism and body size dimorphism indicate that O. macedoniensis and other extinct hominoids were more sexually size dimorphic than any living great apes, which suggests that social behaviors and life history profiles of these species may have been different from those of living species.
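The bootstrapping approach described here reduces to a short resampling loop; the sketch below uses invented stand-in measurements and a hypothetical function name, not the study's data. It asks how often an extant-ape sample, resampled at the fossil sample's size, is at least as variable as the fossil sample.

    import numpy as np

    def bootstrap_cv_test(extant_metric, fossil_cv, fossil_n, n_boot=10000, seed=0):
        # Null distribution of coefficients of variation (CV) built by
        # resampling the extant sample with replacement at the fossil n,
        # then a one-tailed comparison against the fossil CV.
        rng = np.random.default_rng(seed)
        boots = rng.choice(extant_metric, size=(n_boot, fossil_n), replace=True)
        cvs = 100.0 * boots.std(axis=1, ddof=1) / boots.mean(axis=1)
        return np.mean(cvs >= fossil_cv)

    # Invented stand-in data: M1 breadths (mm) for an extant dimorphic ape.
    extant = np.random.default_rng(1).normal(14.0, 1.2, size=60)
    print(bootstrap_cv_test(extant, fossil_cv=12.5, fossil_n=12))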
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
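Yuen's test itself is simple to implement. The sketch below (simulated data, standard textbook formulas rather than the authors' cost-minimizing code) shows the trimmed means, winsorized variances and Welch-type degrees of freedom that such sample size formulas are built on.

    import numpy as np
    from scipy import stats

    def yuen_test(x, y, trim=0.2):
        # Yuen's two-group test on trimmed means with winsorized variances,
        # robust to heavy tails and unequal variances.
        def parts(a):
            a = np.sort(np.asarray(a, dtype=float))
            n = len(a)
            g = int(np.floor(trim * n))
            h = n - 2 * g                          # effective size after trimming
            tmean = a[g:n - g].mean()
            w = a.copy()
            w[:g], w[n - g:] = a[g], a[n - g - 1]  # winsorize the tails
            d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
            return tmean, d, h
        m1, d1, h1 = parts(x)
        m2, d2, h2 = parts(y)
        t = (m1 - m2) / np.sqrt(d1 + d2)
        df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
        return t, 2 * stats.t.sf(abs(t), df)

    rng = np.random.default_rng(2)
    print(yuen_test(rng.standard_t(3, 40) + 0.5, rng.standard_t(3, 60)))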
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous works used sample sizes that included all the outliers lying beyond the standard deviation factors from the peak histograms. This paper describes the statistical performance of the RGB model with and without these outliers removed. Plaque lesions are compared with other types of psoriasis. The statistical tests are compared with respect to three sample sizes: the original 90 samples, a first reduction removing outliers beyond two standard deviations (2SD), and a second reduction removing outliers beyond one standard deviation (1SD). Quantification of data images through the normal/direct and differential variants of the conventional reflectance method is considered. Performance is assessed by observing error plots with 95% confidence intervals and the findings of the inference T-tests applied. The statistical test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque lesions from the other psoriasis groups, consistent with the error-plot findings, with an improvement in p-value greater than 0.5.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Karumendu, L U; Ven, R van de; Kerr, M J; Lanza, M; Hopkins, D L
2009-08-01
The impact of homogenization speed on Particle Size (PS) results was examined using samples from the M. longissimus thoracis et lumborum (LL) of 40 lambs. One gram duplicate samples from meat aged for 1 and 5 days were homogenized at five different speeds: 11,000, 13,000, 16,000, 19,000 and 22,000 rpm. In addition, LL samples from 30 different lamb carcases, also aged for 1 and 5 days, were used to compare PS and myofibrillar fragmentation index (MFI) values. In this case, 1 g duplicate samples (n=30) were homogenized at 16,000 rpm and the other half (0.5 g samples) at 11,000 rpm (n=30). The homogenates were then subjected to respective combinations of treatments which included either PS analysis or the determination of MFI, both with or without three cycles of centrifugation. All 140 samples of LL included 65 g blocks for subsequent shear force (SF) testing. Homogenization at 16,000 rpm provided the greatest ability to detect ageing differences for particle size between samples aged for 1 and 5 days. Particle size at the 25% quantile provided the best result for detecting differences due to ageing. It was observed that as ageing increased the mean PS decreased and was significantly (P<0.001) less for 5-day aged samples compared to 1-day aged samples, while MFI values significantly increased (P<0.001) as the ageing period increased. When comparing the PS and MFI methods it became apparent that, as opposed to the MFI method, there was a greater coefficient of variation for the PS method, which warranted a quality assurance system. Given this requirement and examination of the mean, standard deviation and the 25% quantile for PS data, it was concluded that three cycles of centrifugation were not necessary, and this also applied to the MFI method. There were significant correlations (P<0.001) within the same lamb loin sample aged for a given period between mean MFI and mean PS (-0.53), mean MFI and mean SF (-0.38), and mean PS and mean SF (0.23). It was concluded that PS analysis offers significant potential for streamlining determination of myofibrillar degradation when samples are measured after homogenization at 16,000 rpm with no centrifugation.
NASA Astrophysics Data System (ADS)
Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.
2007-01-01
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability, and comparisons between results from the tests. One test system that gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter Tester (updated and commercially available now as the TSI 3160), manufactured by TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown, with a discussion of test variables and the need for proper SMPS volume purging and a filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm, with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in this size range.
NASA Astrophysics Data System (ADS)
Marin, N.; Farmer, J. D.; Zacny, K.; Sellar, R. G.; Nunez, J.
2011-12-01
This study seeks to understand variations in composition and texture of basaltic pyroclastic materials used in the 2010 International Lunar Surface Operation-In-Situ Resource Utilization Analogue Test (ILSO-ISRU) held on the slopes of Mauna Kea Volcano, Hawaii (1). The quantity and quality of resources delivered by ISRU depends upon the nature of the materials processed (2). We obtained a one-meter deep auger cuttings sample of a basaltic regolith at the primary site for feedstock materials being mined for the ISRU field test. The auger sample was subdivided into six ~16 cm depth increments, and each interval was sampled and characterized in the field using the Multispectral Microscopic Imager (MMI; 3) and a portable X-ray Diffractometer (Terra, InXitu Instruments, Inc.). Splits from each sampled interval were returned to the lab and analyzed using more definitive methods, including high resolution Powder X-ray Diffraction and Thermal Infrared (TIR) spectroscopy. The mineralogy and microtexture (grain size, sorting, roundness and sphericity) of the auger samples were determined using petrographic point count measurements obtained from grain-mount thin sections. NIH Image J (http://rsb.info.nih.gov/ij/) was applied to digital images of thin sections to document changes in particle size with depth. Results from TIR showed a general predominance of volcanic glass, along with plagioclase, olivine, and clinopyroxene. In addition, thin section and XRPD analyses showed a down-core increase in the abundance of hydrated iron oxides (as in situ weathering products). Quantitative point count analyses confirmed the abundance of volcanic glass in samples, but also revealed olivine and pyroxene to be minor components that decreased in abundance with depth. Furthermore, point count and XRD analyses showed a decrease in magnetite and ilmenite with depth, accompanied by an increase in Fe3+ phases, including hematite and ferrihydrite. Image J particle analysis showed that the average grain size decreased down the depth profile. This decrease in average grain size and increase in hydrated iron oxides downhole suggests that the most favorable ISRU feedstock materials were sampled in the lower half-meter of the mine section sampled.
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with a V-shaped groove etched into its free surface are collected by a soft-recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison shows that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
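For illustration, a linear combination of exponentials of the kind such Poisson-mixture statistics predict can be fitted by maximum likelihood in a few lines. This generic sketch (simulated fragment sizes, hypothetical parameterization) is not the authors' fitting code.

    import numpy as np
    from scipy.optimize import minimize

    def fit_two_exp_mixture(sizes):
        # Fit p(s) = w/l1 * exp(-s/l1) + (1-w)/l2 * exp(-s/l2) by maximizing
        # the log-likelihood; a logistic transform keeps w in (0, 1) and
        # log-parameters keep both scales positive.
        s = np.asarray(sizes, dtype=float)

        def nll(theta):
            w = 1.0 / (1.0 + np.exp(-theta[0]))
            l1, l2 = np.exp(theta[1]), np.exp(theta[2])
            pdf = w / l1 * np.exp(-s / l1) + (1 - w) / l2 * np.exp(-s / l2)
            return -np.sum(np.log(pdf + 1e-300))

        res = minimize(nll, x0=[0.0, np.log(s.mean() / 2), np.log(s.mean() * 2)],
                       method="Nelder-Mead")
        w = 1.0 / (1.0 + np.exp(-res.x[0]))
        return w, np.exp(res.x[1]), np.exp(res.x[2])

    # Simulated fragment sizes (micrometres) from two exponential populations.
    rng = np.random.default_rng(3)
    sizes = np.concatenate([rng.exponential(20, 700), rng.exponential(120, 300)])
    print(fit_two_exp_mixture(sizes))  # mixture weight and the two mean sizes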
Factors to Consider in Designing Aerosol Inlet Systems for Engine Exhaust Plume Sampling
NASA Technical Reports Server (NTRS)
Anderson, Bruce
2004-01-01
This document consists of viewgraphs of charts and diagrams of considerations to take when sampling the engine exhaust plume. It includes a chart that compares the emissions from various fuels, a diagram and charts of the various processes and conditions that influence the particulate size and concentration,
Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J
2016-12-01
Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall Listeria spp. (other than Listeria monocytogenes) and L. monocytogenes prevalences in the 4,430 environmental samples collected were 6.03 and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
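The "50 to 150 samples based on statistical sample size calculations" can be illustrated with one common detection-oriented calculation (our assumption for illustration, not necessarily the authors' method): the smallest n giving a chosen probability of at least one positive at an assumed prevalence.

    import math

    def n_for_detection(prevalence, confidence=0.95):
        # Smallest n with P(at least one positive) >= confidence, assuming
        # independent samples and a fixed within-facility prevalence.
        return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

    for prev in (0.06, 0.03, 0.02):
        print(f"prevalence {prev:.2f} -> n = {n_for_detection(prev)}")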
Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.
Kiermeier, Andreas; Sumner, John; Jenson, Ian
2015-07-01
Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
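The acceptance-sampling logic behind these comparisons can be sketched with a simple binomial model: if each piece sampled from a contaminated lot independently tests positive with probability p, the chance the lot slips through an N-n plan is (1-p)^n. This is an illustrative simplification, not the risk-assessment model used in the study.

    # Probability a contaminated lot is accepted (all n samples negative),
    # assuming each sampled piece tests positive with probability p.
    def prob_lot_accepted(p, n):
        return (1.0 - p) ** n

    for p in (0.005, 0.01, 0.05):
        print(p, [round(prob_lot_accepted(p, n), 3) for n in (60, 90, 120)])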
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information on the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical information in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good, predicting the type of coffee at both particle sizes with relatively high R2 prediction values. The prediction also resulted in low bias and RMSEP values.
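A minimal sketch of this kind of explicit compensation, under stand-in data we invented for illustration (the spectra, class coding and variable names are not the study's): the Y block of a PLS2 model carries both the coffee type and the particle size, so the latent variables are forced to model the size variation as well.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Hypothetical stand-ins: rows of X are NIR spectra, y_class codes civet (1)
    # vs non-civet (0) coffee, y_size is the particle size in micrometres.
    rng = np.random.default_rng(4)
    y_class = rng.integers(0, 2, 220)
    y_size = rng.choice([212.0, 500.0], 220)
    X = rng.normal(size=(220, 500))
    X[:, :50] += 0.8 * y_class[:, None]       # class-related absorbance bands
    X += 0.5 * (y_size / 500.0)[:, None]      # size-related baseline shift

    # Explicit compensation: a two-column Y block (class and particle size).
    Y = np.column_stack([y_class.astype(float), y_size])
    Y_hat = cross_val_predict(PLSRegression(n_components=10), X, Y, cv=10)
    pred_class = (Y_hat[:, 0] > 0.5).astype(int)
    print("cross-validated classification accuracy:", np.mean(pred_class == y_class))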
Erosion of an ancient mountain range, the Great Smoky Mountains, North Carolina and Tennessee
Matmon, A.; Bierman, P.R.; Larsen, J.; Southworth, S.; Pavich, M.; Finkel, R.; Caffee, M.
2003-01-01
Analysis of 10Be and 26Al in bedrock (n=10), colluvium (n=5 including grain size splits), and alluvial sediments (n=59 including grain size splits), coupled with field observations and GIS analysis, suggests that erosion rates in the Great Smoky Mountains are controlled by subsurface bedrock erosion and diffusive slope processes. The results indicate rapid alluvial transport, minimal alluvial storage, and suggest that most of the cosmogenic nuclide inventory in sediments is accumulated while they are eroding from bedrock and traveling down hill slopes. Spatially homogeneous erosion rates of 25-30 mm Ky-1 are calculated throughout the Great Smoky Mountains using measured concentrations of cosmogenic 10Be and 26Al in quartz separated from alluvial sediment. 10Be and 26Al concentrations in sediments collected from headwater tributaries that have no upstream samples (n=18) are consistent with an average erosion rate of 28 ± 8 mm Ky-1, similar to that of the outlet rivers (n=16, 24 ± 6 mm Ky-1), which carry most of the sediment out of the mountain range. Grain-size-specific analysis of 6 alluvial sediment samples shows higher nuclide concentrations in smaller grain sizes than in larger ones. The difference in concentrations arises from the large elevation distribution of the source of the smaller grains compared with the narrow and relatively low source elevation of the large grains. Large sandstone clasts disaggregate into sand-size grains rapidly during weathering and downslope transport; thus, only clasts from the lower parts of slopes reach the streams. 26Al/10Be ratios do not suggest significant burial periods for our samples. However, alluvial samples have lower 26Al/10Be ratios than bedrock and colluvial samples, a trend consistent with a longer integrated cosmic ray exposure history that includes periods of burial during down-slope transport. The results confirm some of the basic ideas embedded in Davis' geographic cycle model, such as the reduction of relief through slope processes, and of Hack's dynamic equilibrium model, such as the similarity of erosion rates across different lithologies. Comparing cosmogenic nuclide data with other measured and calculated erosion rates for the Appalachians, we conclude that rates of erosion, integrated over varying time periods from decades to a hundred million years, are similar, the result of equilibrium between erosion and isostatic uplift in the southern Appalachian Mountains.
NASA Astrophysics Data System (ADS)
Powell, R. D.; Scherer, R. P.; Griffiths, I.; Taylor, L.; Winans, J.; Mankoff, K. D.
2011-12-01
A remotely operated vehicle (ROV) has been custom-designed and built by DOER Marine to meet scientific requirements for exploring subglacial water cavities. This sub-ice rover (SIR) will explore and quantitatively document the grounding zone areas of the Ross Ice Shelf cavity using a 3 km-long umbilical tether, deployed through an 800 m-long ice borehole in a torpedo shape, which is also its default mode if operational failure occurs. Once in the ocean cavity it transforms via a diamond-shaped geometry into a rectangular form when all of its instruments come alive in its flight mode. Instrumentation includes 4 cameras (one forward-looking HD), a vertical scanning sonar (long-range imaging for spatial orientation and navigation), Doppler current meter (determine water current velocities), multi-beam sonar (image and swath map bottom topography), sub-bottom profiler (profile sub-sea-floor sediment for geological history), CTD (determine salinity, temperature and depth), DO meter (determine dissolved oxygen content in water), transmissometer (determine suspended particulate concentrations in water), laser particle-size analyzer (determine sizes of particles in water), triple laser-beams (determine size and volume of objects), thermistor probe (measure in situ temperatures of ice and sediment), shear vane probe (determine in situ strength of sediment), manipulator arm (deploy instrumentation packages, collect samples), shallow ice corer (collect ice samples and glacial debris), water sampler (determine sea water/freshwater composition, calibrate real-time sensors, sample microbes), and shallow sediment corer (sample sea floor, in-ice and subglacial sediment for stratigraphy, facies, particle size, composition, structure, fabric, microbes). A sophisticated array of data handling, storage and display will allow real-time observations and environmental assessments to be made. This robotic submarine and other instruments will be tested in Lake Tahoe in September 2011 and results will be presented on its trials and geological and biological findings down to the deepest depths of the lake. Other instruments include a 5 m-long percussion corer for sampling deeper sediments, an ice-tethered profiler with CTD and ADCP, and an in situ oceanographic mooring designed to fit down a narrow (30 cm-diameter) ice borehole that includes interchangeable packages of ADCPs, CTDs, transmissometers, laser particle-size analyzer, DO meter, automated multi-port water sampler, water column nutrient analyzer, sediment porewater chemistry analyzer, down-looking color camera (see figure), and altimeter.
Infrared reflectance spectra: effects of particle size, provenance and preparation
NASA Astrophysics Data System (ADS)
Su, Yin-Fong; Myers, Tanya L.; Brauer, Carolyn S.; Blake, Thomas A.; Forland, Brenda M.; Szecsody, J. E.; Johnson, Timothy J.
2014-10-01
We have recently developed methods for making more accurate infrared total and diffuse directional - hemispherical reflectance measurements using an integrating sphere. We have found that reflectance spectra of solids, especially powders, are influenced by a number of factors including the sample preparation method, the particle size and morphology, as well as the sample origin. On a quantitative basis we have investigated some of these parameters and the effects they have on reflectance spectra, particularly in the longwave infrared. In the IR the spectral features may be observed as either maxima or minima: In general, upward-going peaks in the reflectance spectrum result from strong surface scattering, i.e. rays that are reflected from the surface without bulk penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. The light signals reflected from solids usually encompass all such effects, but with strong dependencies on particle size and preparation. This paper measures the reflectance spectra in the 1.3 - 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to observe the effects on the spectral features: Bulk materials were ground with a mortar and pestle and sieved to separate the samples into various size fractions between 5 and 500 microns. The median particle size is demonstrated to have large effects on the reflectance spectra. For certain minerals we also observe significant spectral change depending on the geologic origin of the sample. All three such effects (particle size, preparation and provenance) result in substantial change in the reflectance spectra for solid materials; successful identification algorithms will require sufficient flexibility to account for these parameters.
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
Steinberg, David M.; Fine, Jason; Chappell, Rick
2009-01-01
Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
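The reason PPV and NPV are only indirectly estimable from case-control data is that both depend on prevalence through Bayes' rule. A minimal sketch of that dependence (standard formulas, not the paper's optimal-allocation or sample size machinery):

    def ppv_npv(sens, spec, prev):
        # Bayes' rule: predictive values depend on disease prevalence, which
        # is why case-control designs estimate them only indirectly.
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    # The same assay looks very different in populations of different prevalence.
    for prev in (0.01, 0.10, 0.30):
        ppv, npv = ppv_npv(sens=0.95, spec=0.90, prev=prev)
        print(f"prev={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")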
NASA Astrophysics Data System (ADS)
Kumar, S.; Aggarwal, S. G.; Fu, P. Q.; Kang, M.; Sarangi, B.; Sinha, D.; Kotnala, R. K.
2017-06-01
During March 20-22, 2012, Delhi experienced a massive dust-storm which originated in the Middle East. Size-segregated sampling of these dust aerosols was performed using a nine-stage Andersen sampler; 5 sets of samples were collected: before the dust-storm (BDS), on dust-storm days 1 to 3 (DS1 to DS3) and after the dust-storm (ADS). Sugars (mono- and disaccharides, sugar-alcohols and anhydro-sugars) were determined using a GC-MS technique. It was observed that at the onset of the dust-storm, the total suspended particulate matter (TSPM, sum of all stages) concentration in the DS1 sample increased by >2.5 fold compared to that of the BDS samples. Interestingly, the fine particulate matter (sum of stages with cutoff size <2.1 μm) loading in DS1 also increased by >2.5 fold compared to that of the BDS samples. Sugars analyzed in DS1 coarse mode (sum of stages with cutoff size >2.1 μm) samples showed a considerable increase (~1.7-2.8 fold) compared to that of the other samples. It was further observed that mono-saccharide, disaccharide and sugar-alcohol concentrations were enhanced in giant (>9.0 μm) particles in DS1 samples as compared to the other samples. On the other hand, anhydro-sugars comprised 13-27% of sugars in coarse mode particles and were mostly found in fine mode, constituting 66-85% of sugars in all the sample types. Trehalose showed an enhanced (~2-4 fold) concentration in DS1 aerosol samples in both coarse (62.80 ng/m3) and fine (8.57 ng/m3) mode. This increase in trehalose content in both coarse and fine mode points to transported desert dust as its origin and supports its candidacy as an organic tracer for desert dust entrainments. Further, the levoglucosan to mannosan (L/M) ratios, which have been used to infer the type of biomass burning influencing aerosols, are found to be size dependent in these samples. These ratios are higher for fine mode particles and hence should be used with caution when interpreting sources using this tool.
Comparative analyses of basal rate of metabolism in mammals: data selection does matter.
Genoud, Michel; Isler, Karin; Martin, Robert D
2018-02-01
Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
2011-01-01
Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses required to address this situation, which are also heterogeneous. PMID:22129110
Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I
2017-01-01
Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050
Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials
Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda
2016-01-01
In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.
1986-04-01
analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis
Impacts on seafloor geology of drilling disturbance in shallow waters.
Corrêa, Iran C S; Toldo, Elírio E; Toledo, Felipe A L
2010-08-01
This paper describes the effects of drilling disturbance on the seafloor of the upper continental slope of the Campos Basin, Brazil, as a result of the project Environmental Monitoring of Offshore Drilling for Petroleum Exploration (MAPEM). Field sampling was carried out around wells operated by the company PETROBRAS to compare sediment properties of the seafloor, including grain-size distribution, total organic carbon, and clay mineral composition, prior to drilling with samples obtained 3 and 22 months after drilling. The sampling grid used had 74 stations, 68 of which were located along 7 radials from the well up to a distance of 500 m. The other 6 stations were used as reference and were located 2,500 m from the well. The results show no significant sedimentological variation in the area affected by drilling activity. The observed sedimentological changes include a fining of grain size, an increase in total organic carbon, increases in gibbsite, illite, and smectite, and a decrease in kaolinite after drilling took place.
Scott, Jeremiah E; Schrein, Caitlin M; Kelley, Jay
2009-10-01
Sexual size dimorphism in the postcanine dentition of the late Miocene hominoid Lufengpithecus lufengensis exceeds that in Pongo pygmaeus, demonstrating that the maximum degree of molar size dimorphism in apes is not represented among the extant Hominoidea. It has not been established, however, that the molars of Pongo are more dimorphic than those of any other living primate. In this study, we used resampling-based methods to compare molar dimorphism in Gorilla, Pongo, and Lufengpithecus to that in the papionin Mandrillus leucophaeus to test two hypotheses: (1) Pongo possesses the most size-dimorphic molars among living primates and (2) molar size dimorphism in Lufengpithecus is greater than that in the most dimorphic living primates. Our results show that M. leucophaeus exceeds great apes in its overall level of dimorphism and that L. lufengensis is more dimorphic than the extant species. Using these samples, we also evaluated molar dimorphism and taxonomic composition in two other Miocene ape samples--Ouranopithecus macedoniensis from Greece, specimens of which can be sexed based on associated canines and P(3)s, and the Sivapithecus sample from Haritalyangar, India. Ouranopithecus is more dimorphic than the extant taxa but is similar to Lufengpithecus, demonstrating that the level of molar dimorphism required for the Greek fossil sample under the single-species taxonomy is not unprecedented when the comparative framework is expanded to include extinct primates. In contrast, the Haritalyangar Sivapithecus sample, if it represents a single species, exhibits substantially greater molar dimorphism than does Lufengpithecus. Given these results, the taxonomic status of this sample remains equivocal.
A Mars Sample Return Sample Handling System
NASA Technical Reports Server (NTRS)
Wilson, David; Stroker, Carol
2013-01-01
We present a sample handling system, a subsystem of the proposed Dragon-landed Mars Sample Return (MSR) mission [1], which can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each with a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping the samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted, heat-activated memory-alloy caps [5] if the heating does not affect the sample, or by crimping caps similar to bottle capping. We prefer that cap sealing surfaces be external to the cup rim, to prevent sample dust inside the cups interfering with sealing or contamination of the sample by Teflon seal elements (if adopted). Finally, the sample collection rover, or a Fetch rover, selects cups with best-choice samples and loads them into a sample tray before delivering it to the Earth Return Vehicle (ERV) in the MSR Dragon capsule as described in [1] (Fig 1). This ensures best use of the MSR payload mass allowance. A 3-meter-long jointed robot arm is extended from the Dragon capsule's crew hatch, retrieves the sample tray and inserts it into the sample canister payload located on the ERV stage. The robot arm has capacity to obtain grab samples in the event of a rover failure. The sample canister has a robot arm capture casting to enable capture by crewed or robot spacecraft when it returns to Earth orbit.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
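As a rough illustration of "robust power across the effect size range of interest", the sketch below compares two fixed designs with a normal-approximation power function. The per-arm sizes are illustrative back-of-envelope values, and this is not the authors' optimality criterion.

    import numpy as np
    from scipy import stats

    def power_two_sample(n_per_arm, delta, sd=1.0, alpha=0.05):
        # Normal-approximation power for a two-sided, two-sample test.
        se = sd * np.sqrt(2.0 / n_per_arm)
        zc = stats.norm.ppf(1 - alpha / 2)
        return stats.norm.sf(zc - delta / se)

    # n = 85/arm is roughly sized for delta = 0.5 at 90% power; n = 235/arm
    # for delta = 0.3. The larger design holds power over a wider delta range.
    for delta in (0.2, 0.3, 0.4, 0.5):
        print(delta, round(power_two_sample(85, delta), 2),
              round(power_two_sample(235, delta), 2))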
A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts
NASA Astrophysics Data System (ADS)
Huang, S.; Mohanty, B.; Xia, K.
2017-12-01
Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragment size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
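Sample power values of the sort tallied in this survey can be reproduced with standard tools; a small sketch using statsmodels (the effect size and group size pairs below are invented for illustration, not the journal's data):

    from statsmodels.stats.power import TTestIndPower

    # Power of a two-sided, two-group t-test for a given effect size d and
    # per-group sample size, the quantity tallied per statistical test.
    analysis = TTestIndPower()
    for d, n in [(0.8, 15), (0.5, 30), (0.2, 30), (0.2, 400)]:
        pw = analysis.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"d={d}  n/group={n}  power={pw:.2f}")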
Lindenfors, P; Tullberg, B S
2006-07-01
The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite of the expected effect of increased sample size: normally, an increased sample size increases the chance of finding a correlation. The situation where the problem occurs is when co-variation between the two continuous characters analysed is clumped in clades, e.g. when some phylogenetically conservative factors affect both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove the exact variation that diffuses the evolutionary correlation between the two characters, namely the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Phase-contrast x-ray computed tomography for biological imaging
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Takeda, Tohoru; Itai, Yuji
1997-10-01
We have shown so far that 3D structures in biological soft tissues such as cancer can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, the field of view should be larger than a few centimeters. Therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the aspect of dose as a function of sample size. Moreover, the spatial resolution required of the image sensor is discussed as a function of x-ray energy and sample size, based on a requirement in the analysis of interference fringes.
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Tomyn, Ronald L; Sleeth, Darrah K; Thiese, Matthew S; Larson, Rodney R
2016-01-01
In addition to chemical composition, the site of deposition of inhaled particles is important for determining the potential health effects from an exposure. As a result, the International Organization for Standardization adopted a particle deposition sampling convention. This includes extrathoracic particle deposition sampling conventions for the anterior nasal passages (ET1) and the posterior nasal and oral passages (ET2). This study assessed how well a polyurethane foam insert placed in an Institute of Occupational Medicine (IOM) sampler can match an extrathoracic deposition sampling convention, while accounting for possible static buildup in the test particles. In this way, the study aimed to assess whether neutralized particles affected the performance of this sampler for estimating extrathoracic particle deposition. A total of three different particle sizes (4.9, 9.5, and 12.8 µm) were used. For each trial, one particle size was introduced into a low-speed wind tunnel with the wind speed set at 0.2 m/s (∼40 ft/min). This wind speed was chosen to closely match the conditions of most indoor working environments. Each particle size was tested twice: either neutralized, using a high-voltage neutralizer, or left in its normal (non-neutralized) state as standard particles. IOM samplers were fitted with a polyurethane foam insert and placed on a rotating mannequin inside the wind tunnel. Foam sampling efficiencies were calculated for all trials to compare against the normalized ET1 sampling deposition convention. The foam sampling efficiencies matched the ET1 deposition convention well for the larger particle sizes, but showed a general trend of underestimation for all three particle sizes. The results of a Wilcoxon rank sum test also showed that only at 4.9 µm was there a statistically significant difference (p-value = 0.03) between the foam sampling efficiency using the standard particles and the neutralized particles. This is interpreted to mean that static buildup may be occurring, and neutralizing the 4.9 µm particles did affect the performance of the foam sampler when estimating extrathoracic particle deposition.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
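For normally distributed trait models, one classical way to make "effective sample size" concrete is the quadratic form in the inverse trait correlation matrix. The sketch below shows that standard quantity; the correlation matrix is illustrative and this is not necessarily the paper's exact definition:

```python
import numpy as np

# Illustrative phylogenetic correlation matrix for 4 tips under Brownian
# motion: entries are proportional to shared branch length from the root.
R = np.array([
    [1.0, 0.8, 0.2, 0.2],
    [0.8, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.5],
    [0.2, 0.2, 0.5, 1.0],
])

ones = np.ones(R.shape[0])
# Classical effective sample size for correlated normal data:
# n_e = 1' R^{-1} 1, which equals n when R is the identity matrix.
n_e = ones @ np.linalg.solve(R, ones)
print(f"nominal n = {R.shape[0]}, effective n = {n_e:.2f}")
```

Strong within-clade correlation drives n_e well below the nominal tip count, which is the intuition behind correcting information criteria for the effective number of species.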
Improving the quality of biomarker discovery research: the right samples and enough of them.
Pepe, Margaret S; Li, Christopher I; Feng, Ziding
2015-06-01
Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
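The quantities in (ii) and (iii) reduce to simple expected-count arithmetic once per-marker pass probabilities are fixed. In the sketch below, every number except the 9,000-candidate setting is hypothetical:

```python
# Hedged sketch of the False Leads Expected / Discovery Power arithmetic.
# Only the 9,000-candidate figure comes from the abstract; the rest is assumed.
m_candidates = 9000
m_useful = 50            # assumed number of truly useful markers among the candidates
p_pass_useless = 0.02    # per-marker chance a useless marker passes (98% filtered out)
p_pass_useful = 0.95     # per-marker chance a useful marker passes (Discovery Power)

useless = m_candidates - m_useful
false_leads = useless * p_pass_useless          # expected useless markers identified
true_discoveries = m_useful * p_pass_useful     # expected useful markers identified
print(f"expected false leads: {false_leads:.0f}")
print(f"expected useful markers identified: {true_discoveries:.1f}")
```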
Non-invasive genetic censusing and monitoring of primate populations.
Arandjelovic, Mimi; Vigilant, Linda
2018-03-01
Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses. © 2018 Wiley Periodicals, Inc.
MSeq-CNV: accurate detection of Copy Number Variation from Sequencing of Multiple samples.
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2018-03-05
Currently, a few tools are capable of detecting genome-wide Copy Number Variations (CNVs) based on sequencing of multiple samples. Although aberrations in mate pair insertion sizes provide additional hints for CNV detection based on multiple samples, the majority of current tools rely only on the depth of coverage. Here, we propose a new algorithm (MSeq-CNV) which allows detecting common CNVs across multiple samples. MSeq-CNV applies a mixture density for modeling aberrations in depth of coverage and abnormalities in the mate pair insertion sizes. Each component in this mixture density applies a Binomial distribution for modeling the number of mate pairs with aberration in the insertion size and a Poisson distribution for emitting the read counts, at each genomic position. MSeq-CNV is applied on simulated data and also on real data of six HapMap individuals with high-coverage sequencing, from the 1000 Genomes Project. These individuals include a CEU trio of European ancestry and a YRI trio of Nigerian ethnicity. The ancestry of these individuals is studied by clustering the identified CNVs. MSeq-CNV is also applied for detecting CNVs in two samples with low-coverage sequencing in the 1000 Genomes Project and six samples from the Simons Genome Diversity Project.
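The emission model described, a Binomial for insert-size-aberrant mate pairs combined with a Poisson for read counts, can be sketched as a per-position mixture likelihood. This is a schematic reconstruction from the abstract, not MSeq-CNV's code; all parameter values are invented:

```python
from scipy.stats import binom, poisson

# Hypothetical per-state parameters for copy numbers 0..4 (illustrative only):
# the expected read count scales with copy number, and deletion/duplication
# states raise the probability that a mate pair shows an aberrant insert size.
lam_per_copy = 15.0                                       # mean reads per copy (assumed)
p_aberrant = {0: 0.9, 1: 0.5, 2: 0.02, 3: 0.5, 4: 0.7}    # assumed per-state probabilities
weights = {0: 0.01, 1: 0.04, 2: 0.90, 3: 0.04, 4: 0.01}   # assumed mixture weights

def position_likelihood(reads, aberrant_pairs, total_pairs):
    """Mixture likelihood of one genomic position over copy-number states."""
    total = 0.0
    for cn, w in weights.items():
        lam = max(lam_per_copy * cn, 1e-6)   # Poisson mean for this copy-number state
        total += (w
                  * poisson.pmf(reads, lam)
                  * binom.pmf(aberrant_pairs, total_pairs, p_aberrant[cn]))
    return total

print(position_likelihood(reads=31, aberrant_pairs=1, total_pairs=20))
```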
Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.
Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U
2013-11-01
X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.
Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R
2016-03-07
Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.
The Response of Simple Polymer Structures Under Dynamic Loading
NASA Astrophysics Data System (ADS)
Proud, William; Ellison, Kay; Yapp, Su; Cole, Cloe; Galimberti, Stefano; Institute of Shock Physics Team
2017-06-01
The dynamic response of polymeric materials has been widely studied, with the effects of degree of crystallinity, strain rate, temperature and sample size being commonly reported. This study uses a simple PMMA structure, a right cylindrical sample, with structural features such as holes. The features are added and varied in a systematic fashion. Samples were dynamically loaded to failure using a Split Hopkinson Pressure Bar. The resulting stress-strain curves are presented, showing the change in sample response. The strain to failure is shown to increase initially with the presence of holes, while failure stress is relatively unaffected. The fracture patterns seen in the failed samples change, with tensile cracks, Hertzian cones, and shear effects being dominant for different hole sizes and geometries. The samples were prepared by laser cutting and checked for residual stress before the experiments. The data are used to validate model predictions in which material, structure and damage are included. The Institute of Shock Physics acknowledges the support of Imperial College London and the Atomic Weapons Establishment.
Metagenomic Exploration of Viruses throughout the Indian Ocean
Lorenzi, Hernan A.; Fadrosh, Douglas W.; Brami, Daniel; Thiagarajan, Mathangi; McCrow, John P.; Tovchigrechko, Andrey; Yooseph, Shibu; Venter, J. Craig
2012-01-01
The characterization of global marine microbial taxonomic and functional diversity is a primary goal of the Global Ocean Sampling Expedition. As part of this study, 19 water samples were collected aboard the Sorcerer II sailing vessel from the southern Indian Ocean in an effort to more thoroughly understand the lifestyle strategies of the microbial inhabitants of this ultra-oligotrophic region. No investigations of whole virioplankton assemblages have been conducted on waters collected from the Indian Ocean or across multiple size fractions thus far. Therefore, the goals of this study were to examine the effect of size fractionation on viral consortia structure and function and understand the diversity and functional potential of the Indian Ocean virome. Five samples were selected for comprehensive metagenomic exploration; and sequencing was performed on the microbes captured on 3.0-, 0.8- and 0.1 µm membrane filters as well as the viral fraction (<0.1 µm). Phylogenetic approaches were also used to identify predicted proteins of viral origin in the larger fractions of data from all Indian Ocean samples, which were included in subsequent metagenomic analyses. Taxonomic profiling of viral sequences suggested that size fractionation of marine microbial communities enriches for specific groups of viruses within the different size classes and functional characterization further substantiated this observation. Functional analyses also revealed a relative enrichment for metabolic proteins of viral origin that potentially reflect the physiological condition of host cells in the Indian Ocean including those involved in nitrogen metabolism and oxidative phosphorylation. A novel classification method, MGTAXA, was used to assess virus-host relationships in the Indian Ocean by predicting the taxonomy of putative host genera, with Prochlorococcus, Acanthochlois and members of the SAR86 cluster comprising the most abundant predictions. This is the first study to holistically explore virioplankton dynamics across multiple size classes and provides unprecedented insight into virus diversity, metabolic potential and virus-host interactions. PMID:23082107
Grain Size and Phase Purity Characterization of U3Si2 Pellet Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoggan, Rita E.; Tolman, Kevin R.; Cappia, Fabiola
Characterization of U3Si2 fresh fuel pellets is important for quality assurance and validation of the finished product. Grain size measurement methods, phase identification methods using scanning electron microscopes equipped with energy dispersive spectroscopy and x-ray diffraction, and phase quantification methods via image analysis have been developed and implemented on U3Si2 pellet samples. A wide variety of samples have been characterized, including representative pellets from an initial irradiation experiment, and samples produced using optimized methods to enhance phase purity from an extended fabrication effort. The average grain size for initial pellets was between 16 and 18 µm. The typical average grain size for pellets from the extended fabrication was between 20 and 30 µm, with some samples exhibiting irregular grain growth. Pellets from the latter half of extended fabrication had a bimodal grain size distribution consisting of coarsened grains (>80 µm) surrounded by the typical (20-30 µm) grain structure around the surface. Phases identified in initial uranium silicide pellets included: U3Si2 as the main phase composing about 80 vol. %, Si-rich phases (USi and U5Si4) composing about 13 vol. %, and UO2 composing about 5 vol. %. Initial batches from the extended U3Si2 pellet fabrication had similar phases and phase quantities. The latter half of the extended fabrication pellet batches did not contain Si-rich phases and had between 1-5% UO2, achieving U3Si2 phase purity between 95 vol. % and 98 vol. %. The amount of UO2 in sintered U3Si2 pellets is correlated to the length of time between U3Si2 powder fabrication and pellet formation. These measurements provide information necessary to optimize fabrication efforts and a baseline for future work on this fuel compound.
Metallographic Characterization of Wrought Depleted Uranium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsyth, Robert Thomas; Hill, Mary Ann
Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed, with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken, and cracked, and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34% and the average density from all wrought DU samples was 1.62E+04/cm2. Inclusions in the cast DU samples were equiaxed and intact, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00% and the average density from all cast DU samples was 2.83E+04/cm2. The average mean grain area from all wrought DU samples was 141 μm2, while the average mean grain area from all cast DU samples was 1.7 mm2. The average Knoop microhardness from all wrought DU samples was 215 HK and the average Knoop microhardness from all cast DU samples was 264 HK.
Illiteracy, Sex and Occupational Status in Present-Day China.
ERIC Educational Resources Information Center
Lamontagne, Jacques
This study determined the magnitude of disparity between men and women in China in relation to illiteracy and occupational status. Region and ethnicity are used as control variables. The data collected are from a 10 percent sampling of the 1982 census; the total sample size includes a population of 100,380,000 nationwide. The census questionnaire…
A method for analysing small samples of floral pollen for free and protein-bound amino acids.
Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A
2018-02-01
Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing available nutrition to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra high performance liquid chromatography (UHPLC), and a back calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids, as well as an estimation of crude protein concentration, was obtained from pollen samples as small as 1 mg. Greater variation in both protein-bound and free amino acids was found in pollen sample sizes <1 mg. Due to the variability in recovery of amino acids in smaller sample sizes, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the available nutrition to pollinating animals. This method will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, by simulation, under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
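Sobel's method, the least powerful of the three tests compared, has a closed form that makes a power-based sample size search easy to sketch. This is a generic single-level normal approximation under assumed path coefficients, not the article's multilevel procedure:

```python
import math
from scipy.stats import norm

def sobel_power(n, a, b, sd_a=1.0, sd_b=1.0, alpha=0.05):
    """Approximate power of Sobel's test for the mediated effect a*b.

    Assumes standardized paths with standard errors shrinking as 1/sqrt(n);
    a crude planning approximation only.
    """
    se_a, se_b = sd_a / math.sqrt(n), sd_b / math.sqrt(n)
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)   # Sobel standard error
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - abs(a * b) / se_ab)

# Smallest n giving 80% power for hypothetical paths a = b = 0.3:
n = next(n for n in range(10, 5000) if sobel_power(n, 0.3, 0.3) >= 0.80)
print(n)
```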
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
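The point is easy to verify with the finite population correction: the margin of error depends almost entirely on n, not on n/N. A quick sketch, with arbitrary "pot" sizes:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

n = 1000
for N in (10_000, 1_000_000, 300_000_000):   # three populations of very different size
    print(f"N = {N:>11,}: ±{100 * margin_of_error(n, N):.2f}%")
```

The margin barely moves between a population of one million and three hundred million, which is exactly the chicken-soup intuition.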
Field sampling of loose erodible material: A new method to consider the full particle-size range
NASA Astrophysics Data System (ADS)
Klose, Martina; Gill, Thomas E.
2017-04-01
The aerodynamic entrainment of sand and dust is determined by the atmospheric forces exerted onto the soil surface and by the soil-surface condition. If aerodynamic forces are strong enough to generate sand and dust lifting, the entrained sediment amount still critically depends on the supply of loose particles readily available for lifting. This loose erodible material (LEM) is sometimes defined as the thin layer of loose particles on top of a crusted surface. Here, we more generally define LEM as loose particles or particle aggregates available for entrainment, which may or may not overlay a soil crust. Field sampling of LEM is difficult and only a few attempts have been made. Because saltation is the most efficient process for generating dust emission, methods have focused on capturing LEM in the sand-size range or on determining the potential of a soil surface to be eroded by aerodynamic forces and particle impacts. Here, our focus is to capture the full particle-size distribution of LEM in situ, including the dust and sand-size ranges, to investigate the potential and likelihood of the different dust emission mechanisms (aerodynamic entrainment, saltation bombardment, aggregate disintegration) occurring. A new vacuum method is introduced and its capability to sample LEM without significant alteration of the LEM particle-size distribution is investigated.
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.
Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter
2006-06-01
Experiments and simulations from a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ∝ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size that was larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of the SD, the 70% UCL of the SD, and the 60% UCL of the SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of the SD, the maximum SD, the 80th-percentile SD, and the 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or the average SD to calculate sample size should be avoided.
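The recommendation to plan with an upper confidence limit (UCL) of the sample SD, rather than the sample SD itself, can be sketched with the usual chi-square interval and a standard two-sample formula; the pilot values below are hypothetical:

```python
import math
from scipy.stats import chi2, norm

def sd_ucl(s, n_pilot, level=0.60):
    """One-sided upper confidence limit for the population SD at the given level."""
    return s * math.sqrt((n_pilot - 1) / chi2.ppf(1 - level, n_pilot - 1))

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Standard two-sample sample size per group for a continuous outcome."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

s_pilot, n_pilot, delta = 40.0, 20, 22.0    # hypothetical pilot SD, pilot n, target difference
print("naive n/group:  ", n_per_group(s_pilot, delta))
print("60% UCL n/group:", n_per_group(sd_ucl(s_pilot, n_pilot), delta))
```

Inflating the SD to its UCL buys insurance against the pilot sample having, by chance, understated the population SD.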
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Jaejin; Woo, Jong-Hak; Mulchaey, John S.
We perform a comprehensive study of X-ray cavities using a large sample of X-ray targets selected from the Chandra archive. The sample is selected to cover a large dynamic range including galaxy clusters, groups, and individual galaxies. Using β-modeling and unsharp masking techniques, we investigate the presence of X-ray cavities for 133 targets that have sufficient X-ray photons for analysis. We detect 148 X-ray cavities from 69 targets and measure their properties, including cavity size, angle, and distance from the center of the diffuse X-ray gas. We confirm the strong correlation between cavity size and distance from the X-ray center similar to previous studies. We find that the detection rates of X-ray cavities are similar among galaxy clusters, groups, and individual galaxies, suggesting that the formation mechanism of X-ray cavities is independent of environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, S.; Aldering, G.; Antilogus, P.
The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral timeseries of 185 supernovae. This talk presents results from a portion of this sample, including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset.
Grey literature in meta-analyses.
Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J
2003-01-01
In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
Alexander, R.B.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.
1996-01-01
Data from two U.S. Geological Survey (USGS) national stream water-quality monitoring networks, the National Stream Quality Accounting Network (NASQAN) and the Hydrologic Benchmark Network (HBN), are now available in a two CD-ROM set. These data on CD-ROM are collectively referred to as WQN, water-quality networks. Data from these networks have been used at the national, regional, and local levels to estimate the rates of chemical flux from watersheds, quantify changes in stream water quality for periods during the past 30 years, and investigate relations between water quality and streamflow as well as the relations of water quality to pollution sources and various physical characteristics of watersheds. The networks include 679 monitoring stations in watersheds that represent diverse climatic, physiographic, and cultural characteristics. The HBN includes 63 stations in relatively small, minimally disturbed basins ranging in size from 2 to 2,000 square miles with a median drainage basin size of 57 square miles. NASQAN includes 618 stations in larger, more culturally-influenced drainage basins ranging in size from one square mile to 1.2 million square miles with a median drainage basin size of about 4,000 square miles. The CD-ROMs contain data for 63 physical, chemical, and biological properties of water (122 total constituents including analyses of dissolved and water suspended-sediment samples) collected during more than 60,000 site visits. These data approximately span the periods 1962-95 for HBN and 1973-95 for NASQAN. The data reflect sampling over a wide range of streamflow conditions and the use of relatively consistent sampling and analytical methods. The CD-ROMs provide ancillary information and data-retrieval tools to allow the national network data to be properly and efficiently used. Ancillary information includes the following: descriptions of the network objectives and history, characteristics of the network stations and water-quality data, historical records of important changes in network sample collection and laboratory analytical methods, water reference sample data for estimating laboratory measurement bias and variability for 34 dissolved constituents for the period 1985-95, discussions of statistical methods for using water reference sample data to evaluate the accuracy of network stream water-quality data, and a bibliography of scientific investigations using national network data and other publications relevant to the networks. The data structure of the CD-ROMs is designed to allow users to efficiently enter the water-quality data to user-supplied software packages including statistical analysis, modeling, or geographic information systems. On one disc, all data are stored in ASCII form accessible from any computer system with a CD-ROM driver. The data also can be accessed using DOS-based retrieval software supplied on a second disc. This software supports logical queries of the water-quality data based on constituent concentrations, sample- collection date, river name, station name, county, state, hydrologic unit number, and 1990 population and 1987 land-cover characteristics for station watersheds. User-selected data may be output in a variety of formats including dBASE, flat ASCII, delimited ASCII, or fixed-field for subsequent use in other software packages.
Calibrating the Ordovician Radiation of marine life: implications for Phanerozoic diversity trends
NASA Technical Reports Server (NTRS)
Miller, A. I.; Foote, M.
1996-01-01
It has long been suspected that trends in global marine biodiversity calibrated for the Phanerozoic may be affected by sampling problems. However, this possibility has not been evaluated definitively, and raw diversity trends are generally accepted at face value in macroevolutionary investigations. Here, we analyze a global-scale sample of fossil occurrences that allows us to determine directly the effects of sample size on the calibration of what is generally thought to be among the most significant global biodiversity increases in the history of life: the Ordovician Radiation. Utilizing a composite database that includes trilobites, brachiopods, and three classes of molluscs, we conduct rarefaction analyses to demonstrate that the diversification trajectory for the Radiation was considerably different than suggested by raw diversity time-series. Our analyses suggest that a substantial portion of the increase recognized in raw diversity depictions for the last three Ordovician epochs (the Llandeilian, Caradocian, and Ashgillian) is a consequence of increased sample size of the preserved and catalogued fossil record. We also use biometric data for a global sample of Ordovician trilobites, along with methods of measuring morphological diversity that are not biased by sample size, to show that morphological diversification in this major clade had leveled off by the Llanvirnian. The discordance between raw diversity depictions and more robust taxonomic and morphological diversity metrics suggests that sampling effects may strongly influence our perception of biodiversity trends throughout the Phanerozoic.
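The rarefaction analyses used here can be sketched with the classical hypergeometric formula, which gives the expected taxon richness in a random subsample of fixed size; the occurrence counts below are invented for illustration:

```python
import math

def rarefied_richness(counts, n):
    """Expected number of taxa in a random subsample of n occurrences
    (classical hypergeometric rarefaction)."""
    N = sum(counts)
    def p_missing(Ni):
        # Probability that a taxon with Ni occurrences is absent from the subsample.
        return math.comb(N - Ni, n) / math.comb(N, n)
    return sum(1.0 - p_missing(Ni) for Ni in counts)

# Hypothetical genus occurrence counts for two time intervals:
early = [120, 80, 40, 25, 10, 5, 5, 3, 2, 1]
late = [300, 150, 90, 60, 40, 20, 10, 8, 5, 4, 3, 2, 1, 1]
n = 200   # common subsample size for a fair comparison
print(f"early: {rarefied_richness(early, n):.1f} taxa at n={n}")
print(f"late:  {rarefied_richness(late, n):.1f} taxa at n={n}")
```

Comparing intervals at a common n is what separates genuine diversification from an apparent rise driven only by a larger preserved and catalogued sample.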
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
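Both mean-variance models mentioned lead to standard fixed-precision sample size formulas, with precision D defined as the ratio of the standard error to the mean; the coefficients in this sketch are hypothetical, not the paper's fitted values:

```python
import math

def n_taylor(m, a, b, D=0.2):
    """Quadrats needed for precision D under Taylor's power law s^2 = a * m^b."""
    return math.ceil(a * m ** (b - 2) / D ** 2)

def n_iwao(m, alpha, beta, D=0.2):
    """Quadrats needed under Iwao's patchiness regression m* = alpha + beta * m."""
    return math.ceil(((alpha + 1) / m + beta - 1) / D ** 2)

m = 0.05                                 # mean ticks per 10 m^2 quadrat
print(n_taylor(m, a=1.8, b=1.3))         # a, b hypothetical Taylor coefficients
print(n_iwao(m, alpha=0.4, beta=1.2))    # alpha, beta hypothetical Iwao coefficients
```

At low abundance both formulas demand very large quadrat counts, which is consistent with the paper's finding that the models over- or under-estimate empirical requirements in opposite directions.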
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study's projected scientific and/or practical value to its total cost. By showing that a study's projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
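For a linear cost function, the second rule (minimize total cost divided by the square root of sample size) has a closed-form optimum; a small check with hypothetical costs:

```python
# Minimize (fixed + variable * n) / sqrt(n) over n.
# d/dn [(f + v n) n^(-1/2)] = 0  =>  2 v n = f + v n  =>  n* = f / v.
fixed, per_subject = 250_000.0, 500.0          # hypothetical study costs
n_star = fixed / per_subject
print(f"cost-efficient n = {n_star:.0f}")

# Numerical confirmation by grid search:
best = min(range(1, 5000), key=lambda n: (fixed + per_subject * n) / n ** 0.5)
print(f"grid search     n = {best}")
```

So under a linear cost model this criterion balances fixed and variable spending exactly, enrolling until per-subject costs match the fixed overhead.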
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate sample size based on GEE. We solved for the internal cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
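The asymptotic unconditional McNemar sample size has a standard closed form in the discordant cell probabilities (often attributed to Connor, 1987); a sketch with illustrative proportions:

```python
import math
from scipy.stats import norm

def n_mcnemar(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for the asymptotic unconditional McNemar test.

    p10, p01: discordant cell probabilities of the paired 2x2 table.
    """
    pd = p10 + p01                       # total discordance probability
    delta = p10 - p01                    # difference to detect
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (za * math.sqrt(pd) + zb * math.sqrt(pd - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

print(n_mcnemar(p10=0.25, p01=0.15))     # illustrative discordant proportions
```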
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI, the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, especially with regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20-30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the strong localized scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region of interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
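The scale of the problem for brain-behavior correlations is easy to reproduce with the Fisher z approximation; a sketch for an assumed true correlation of r = 0.2 and an illustrative multiple-comparison-style threshold:

```python
import math
from scipy.stats import norm

def corr_power(r, n, alpha=0.001):
    """Approximate power to detect correlation r at sample size n (Fisher z)."""
    z_r = math.atanh(r)                  # Fisher transform of the true effect
    se = 1 / math.sqrt(n - 3)            # standard error of the transformed estimate
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - z_r / se)

# A weak diffuse effect (r = 0.2) across sample sizes; alpha chosen for illustration
# to mimic a corrected voxelwise threshold.
for n in (25, 100, 400):
    print(n, f"{corr_power(0.2, n):.3f}")
```

At n = 25 the power is close to 1%, which matches the abstract's point that common sample sizes cannot reliably detect weak diffuse effects.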
Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation
NASA Astrophysics Data System (ADS)
Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb
2011-07-01
We report on the first results, to our knowledge, obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ~550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection.
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce the placebo response in randomized trials with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters (the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic) at an interim analysis.
Ancestral inference from haplotypes and mutations.
Griffiths, Robert C; Tavaré, Simon
2018-04-25
We consider inference about the history of a sample of DNA sequences, conditional upon the haplotype counts and the number of segregating sites observed at the present time. After deriving some theoretical results in the coalescent setting, we implement rejection sampling and importance sampling schemes to perform the inference. The importance sampling scheme addresses an extension of the Ewens Sampling Formula for a configuration of haplotypes and the number of segregating sites in the sample. The implementations include both constant and variable population size models. The methods are illustrated by two human Y chromosome datasets. Copyright © 2018. Published by Elsevier Inc.
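For reference, the classical Ewens Sampling Formula that the importance sampler extends can be computed directly; a minimal sketch (the extension conditioning on segregating sites is not reproduced here):

```python
import math

def ewens_probability(a, theta):
    """Ewens Sampling Formula: probability of the allele configuration a,
    where a[j-1] is the number of haplotypes appearing exactly j times."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = math.prod(theta + k for k in range(n))      # theta rising factorial
    prob = math.factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        prob *= theta ** aj / (j ** aj * math.factorial(aj))
    return prob

# Sample of n = 5 sequences: one haplotype seen 3 times, one seen twice.
print(ewens_probability([0, 1, 1], theta=1.0))
```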
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because is a non-destructive technique, the pre-treatments needed in samples are minimal, and the spectrum of the sample is obtained in less than 1 minute without the needs of chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a needed steep is the development of soil spectral libraries (set of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target site soils in which the calibration is to be used. Many times this premise is not easy to fulfil, especially in libraries recently developed. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples to repopulation. In general, calibrations with high number of samples and high diversity are desired. But we hypothesized that calibrations with lower quantities of samples (lower size) will absorb more easily the spectral characteristics of the target site. Thus, we suspect that the size of the calibration (model) that will be repopulated could be important. For this reason we also studied this effect in the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data of soil Kjeldahl Nitrogen (NKj) content (near to 1500 samples). First, those spectra from the target site were removed from the spectral library. Then, different quantities of samples of the library were selected (representing the 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations with different sizes (%) of samples. We used partial least squares regression, and leave-one-out cross validation as methods of calibration. Two methods were used to select the different quantities (size of models) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing the 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross validation). This procedure was sequential. In each step, 2 samples from the target site were added to the models, and then recalibrated. This process was repeated successively 10 times, being 20 the total number of samples added. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in repopulations. For the measurement of the accuracy of the predictions, the r2, RMSEP and slopes were calculated comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, scarce differences can be found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. The repopulation caused scarce changes of the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (using non-repopulated models r2 >0.90). 
As a consequence of repopulation, the RMSEP decreased in all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4, after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (same %) were averaged in order to describe the main patterns. Predictions obtained with larger models were no more accurate, in terms of r2, than those obtained with smaller models. After repopulation, the RMSEP of predictions using smaller models (5, 10 and 25% of the samples in the library) was lower than the RMSEP obtained with larger ones (75 and 100%), indicating that small models can easily integrate the variability of the soils from the target site. The results suggest that small calibrations could be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here run counter to the idea of global models. These results could encourage the expansion of this technique, because very large databases appear not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors thank "Bancaja-UMH" for financial support of the project "NIRPROS".
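The repopulation-and-recalibration loop described above is easy to prototype. Below is a minimal sketch in Python using scikit-learn, with synthetic random data standing in for the spectral library (the shapes, the component count, and the omission of leave-one-out cross validation are all simplifications, not the authors' setup); with real spectra the printed RMSEP trend would be the quantity of interest, whereas the random data here only exercise the mechanics.

```python
# Sketch: rebuild a PLS calibration after adding target-site samples two at a
# time, tracking RMSEP on held-out target-site samples (assumed data shapes).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_lib, y_lib = rng.normal(size=(200, 50)), rng.normal(size=200)  # library spectra / NKj
X_tgt, y_tgt = rng.normal(size=(40, 50)), rng.normal(size=40)    # target-site samples

X_pool, y_pool = X_tgt[:20], y_tgt[:20]   # 20 samples reserved for repopulation
X_test, y_test = X_tgt[20:], y_tgt[20:]   # target-site samples used only for prediction

X_cal, y_cal = X_lib.copy(), y_lib.copy()
for step in range(11):                    # step 0 = non-repopulated model
    model = PLSRegression(n_components=10).fit(X_cal, y_cal)
    rmsep = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"added {2 * step:2d} target samples: RMSEP = {rmsep:.3f}")
    if step < 10:                         # add 2 target-site samples, recalibrate
        X_cal = np.vstack([X_cal, X_pool[2 * step:2 * step + 2]])
        y_cal = np.hstack([y_cal, y_pool[2 * step:2 * step + 2]])
```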
Ratios of total suspended solids to suspended sediment concentrations by particle size
Selbig, W.R.; Bannerman, R.T.
2011-01-01
Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 µm to 0.85 for particles measuring from 32 to 63 µm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.
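As a worked illustration of how such size-specific factors might be applied, here is a short sketch; the two quoted factors come from the abstract, while the intermediate size class and its factor are placeholders for illustration only.

```python
# Estimate TSS from SSC measured in each particle-size class using median
# correction factors (two values from the abstract, one placeholder).
CORRECTION_FACTORS = {
    ">500um": 0.29,      # from the abstract
    "63-500um": 0.55,    # placeholder value, illustration only
    "32-63um": 0.85,     # from the abstract
}

def estimate_tss(ssc_by_size: dict) -> float:
    """Estimate TSS (mg/L) from SSC (mg/L) per particle-size class."""
    return sum(CORRECTION_FACTORS[size] * conc for size, conc in ssc_by_size.items())

print(estimate_tss({">500um": 120.0, "63-500um": 80.0, "32-63um": 40.0}))
```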
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681
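One of the Voronoi metrics discussed above, the fraction of tiles with hexagonal packing, can be sketched on a simulated cone mosaic as follows (scipy-based; this is not the authors' software, and skipping the open boundary cells is a crude stand-in for the "buffer zone" they recommend).

```python
# Fraction of Voronoi tiles with six neighbours in a simulated cone mosaic.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
cones = rng.uniform(0, 320, size=(500, 2))   # cone centres in a 320x320 um window
vor = Voronoi(cones)

hexagonal, total = 0, 0
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:     # skip open cells on the boundary
        continue
    total += 1
    hexagonal += (len(region) == 6)          # six vertices = six neighbours
print(f"hexagonal tiles: {100 * hexagonal / total:.1f}%")
```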
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
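The O(N^(1/2)) behaviour is easy to reproduce with a toy decision-theoretic model. The sketch below is not the paper's exponential-family formulation: it assumes a normal prior on the treatment difference, picks whichever arm the trial estimate favours, and credits the expected gain to the remaining N - 2n patients; the grid-searched optimal n per arm then grows roughly like sqrt(N), so the printed ratio stays roughly stable.

```python
# Toy illustration of sqrt(N) scaling of the optimal trial size.
import numpy as np
from scipy.stats import norm

PRIOR_SD, SIGMA = 0.3, 1.0

def expected_gain_per_patient(n):
    # E[theta * sign(dhat)], theta ~ N(0, PRIOR_SD^2) the true effect,
    # dhat | theta ~ N(theta, 2*SIGMA^2/n) the trial estimate (n per arm)
    thetas, dtheta = np.linspace(-4 * PRIOR_SD, 4 * PRIOR_SD, 2001, retstep=True)
    se = SIGMA * np.sqrt(2.0 / n)
    prob_correct_sign = 2 * norm.cdf(np.abs(thetas) / se) - 1
    integrand = np.abs(thetas) * prob_correct_sign * norm.pdf(thetas, 0, PRIOR_SD)
    return float(np.sum(integrand) * dtheta)

def optimal_n(N):
    # maximize the expected benefit to the N - 2n patients outside the trial
    ns = np.arange(2, N // 2, max(1, N // 2000))
    utilities = [(N - 2 * n) * expected_gain_per_patient(n) for n in ns]
    return int(ns[int(np.argmax(utilities))])

for N in (10_000, 40_000, 160_000):
    n = optimal_n(N)
    print(f"N = {N:>7,}: optimal n per arm = {n:>4}, n / sqrt(N) = {n / np.sqrt(N):.2f}")
```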
Phillips, Brianne E; Venn-Watson, Stephanie; Archer, Linda L; Nollens, Hendrik H; Wellehan, James F X
2014-10-01
Hemochromatosis (iron storage disease) has been reported in diverse mammals including bottlenose dolphins (Tursiops truncatus). The primary cause of excessive iron storage in humans is hereditary hemochromatosis. Most human hereditary hemochromatosis cases (up to 90%) are caused by a point mutation in the hfe gene, resulting in a C282Y substitution leading to iron accumulation. To evaluate the possibility of a hereditary hemochromatosis-like genetic predisposition in dolphins, we sequenced the bottlenose dolphin hfe gene, using reverse transcriptase-PCR and hfe primers designed from the dolphin genome, from liver of affected and healthy control dolphins. The sample comprised two case animals and five control animals. Although isotype diversity was evident, no coding differences were identified in the hfe gene between any of the animals examined. Because our sample size was small, we cannot exclude the possibility that hemochromatosis in dolphins is due to a coding mutation in the hfe gene. Other potential causes of hemochromatosis, including mutations in different genes, diet, primary liver disease, and insulin resistance, should be evaluated.
Bogdanova, Yelena; Yee, Megan K; Ho, Vivian T; Cicerone, Keith D
Comprehensive review of the use of computerized treatment as a rehabilitation tool for attention and executive function in adults (aged 18 years or older) who suffered an acquired brain injury. Systematic review of empirical research. Two reviewers independently assessed articles using the methodological quality criteria of Cicerone et al. Data extracted included sample size, diagnosis, intervention information, treatment schedule, assessment methods, and outcome measures. A literature review (PubMed, EMBASE, Ovid, Cochrane, PsychINFO, CINAHL) generated a total of 4931 publications. Twenty-eight studies using computerized cognitive interventions targeting attention and executive functions were included in this review. In 23 studies, significant improvements in attention and executive function subsequent to training were reported; in the remaining 5, promising trends were observed. Preliminary evidence suggests improvements in cognitive function following computerized rehabilitation for acquired brain injury populations including traumatic brain injury and stroke. Further studies are needed to address methodological issues (eg, small sample size, inadequate control groups) and to inform development of guidelines and standardized protocols.
Partitioning heritability by functional annotation using genome-wide association summary statistics.
Finucane, Hilary K; Bulik-Sullivan, Brendan; Gusev, Alexander; Trynka, Gosia; Reshef, Yakir; Loh, Po-Ru; Anttila, Verneri; Xu, Han; Zang, Chongzhi; Farh, Kyle; Ripke, Stephan; Day, Felix R; Purcell, Shaun; Stahl, Eli; Lindstrom, Sara; Perry, John R B; Okada, Yukinori; Raychaudhuri, Soumya; Daly, Mark J; Patterson, Nick; Neale, Benjamin M; Price, Alkes L
2015-11-01
Recent work has demonstrated that some functional categories of the genome contribute disproportionately to the heritability of complex diseases. Here we analyze a broad set of functional elements, including cell type-specific elements, to estimate their polygenic contributions to heritability in genome-wide association studies (GWAS) of 17 complex diseases and traits with an average sample size of 73,599. To enable this analysis, we introduce a new method, stratified LD score regression, for partitioning heritability from GWAS summary statistics while accounting for linked markers. This new method is computationally tractable at very large sample sizes and leverages genome-wide information. Our findings include a large enrichment of heritability in conserved regions across many traits, a very large immunological disease-specific enrichment of heritability in FANTOM5 enhancers and many cell type-specific enrichments, including significant enrichment of central nervous system cell types in the heritability of body mass index, age at menarche, educational attainment and smoking behavior.
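The core of the method is a regression of GWAS chi-square statistics on category-specific LD scores. The following is a conceptual sketch on simulated inputs, using ordinary least squares; the published method additionally uses regression weights, a block jackknife, and careful handling of overlapping categories.

```python
# Conceptual sketch: E[chi2_j] = N * sum_c tau_c * l(j, c) + 1,
# so regress chi2 - 1 on N * l(j, c) to recover the tau_c.
import numpy as np

rng = np.random.default_rng(2)
n_snps, n_cats, N = 50_000, 5, 73_599
ld_scores = rng.gamma(shape=2.0, scale=20.0, size=(n_snps, n_cats))  # l(j, c)
tau_true = np.array([2e-7, 5e-7, 0.0, 1e-7, 8e-7])  # per-SNP heritability per category

mean_chi2 = N * ld_scores @ tau_true + 1.0
chi2 = mean_chi2 + rng.normal(scale=1.0, size=n_snps)  # idealized noise, for illustration

design = N * ld_scores
tau_hat, *_ = np.linalg.lstsq(design, chi2 - 1.0, rcond=None)
print(np.round(tau_hat / 1e-7, 2))                     # recovers tau_true up to noise
```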
Geometrical characteristics of sandstone with different sample sizes
NASA Astrophysics Data System (ADS)
Cheon, D. S.; Takahashi, M., , Dr
2017-12-01
In many rock engineering projects such as CO2 underground storage, engineering geothermal system, it is important things to understand the fluid flow behavior in the deep geological conditions. This fluid flow is generally affected by the geometrical characteristics of rock, especially porous media. Furthermore, physical properties in rock may depend on the existence of voids space in rock. Total porosity and pore size distribution can be measured by Mercury Intrusion Porosimetry and the other geometrical and spatial information of pores can be obtained through micro-focus X-ray CT. Using the micro-focus X-ray CT, we obtained the extracted void space and transparent image from the original CT voxel images of with different sample sizes like 1 mm, 2 mm, 3 mm cubes. The test samples are Berea sandstone and Otway sandstone. The former is well-known sandstone and it is used for the standard sample to compared to the result from the Otway sandstone. Otway sandstone was obtained from the CO2CRC Otway pilot site for the CO2 geosequestraion project. From the X-ray scan and ExFACT software, we get the informations including effective pore radii, coordination number, tortuosity and effective throat/pore radius ratio etc. The geometrical information analysis showed that for Berea sandstone and Otway sandstone, there is rarely differences with different sample sizes and total value of coordination number show high porosity, the tortuosity of Berea sandstone is higher than the Otway sandstone. In the future, these information will be used for the permeability of the samples.
NASA Astrophysics Data System (ADS)
Peterson, Joseph E.; Lenczewski, Melissa E.; Clawson, Steven R.; Warnock, Jonathan P.
2017-04-01
Microscopic soft tissues have been identified in fossil vertebrate remains collected from various lithologies. However, the diagenetic mechanisms to preserve such tissues have remained elusive. While previous studies have described infiltration of biofilms in Haversian and Volkmann’s canals, biostratinomic alteration (e.g., trampling), and iron derived from hemoglobin as playing roles in the preservation processes, the influence of sediment texture has not previously been investigated. This study uses a Kolmogorov-Smirnov goodness-of-fit test to explore the influence of biostratinomic variability and burial media on the infiltration of biofilms in bone samples. Controlled columns of sediment with bone samples were used to simulate burial and subsequent groundwater flow. Sediments used in this study include clay-, silt-, and sand-sized particles modeled after various fluvial facies commonly associated with fossil vertebrates. Extant limb bone samples obtained from Gallus gallus domesticus (Domestic Chicken) buried in clay-rich sediment exhibit heavy biofilm infiltration, while bones buried in sands and silts exhibit moderate levels. Crushed bones exhibit significantly lower biofilm infiltration than whole bone samples. Strong interactions between biostratinomic alteration and sediment size are also identified with respect to biofilm development. Sediments modeling crevasse splay deposits exhibit considerable variability; whole-bone crevasse splay samples exhibit higher frequencies of high-level biofilm infiltration, and crushed-bone samples in modeled crevasse splay deposits display relatively high frequencies of low-level biofilm infiltration. These results suggest that sediment size, depositional setting, and biostratinomic condition play key roles in biofilm infiltration in vertebrate remains, and may influence soft tissue preservation in fossil vertebrates.
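For readers wanting to reproduce the style of comparison, here is a hypothetical two-sample Kolmogorov-Smirnov test on invented biofilm-infiltration scores (scipy; the study's exact scoring scheme and test variant are not reproduced here).

```python
# Two-sample KS test comparing infiltration scores between treatments.
from scipy.stats import ks_2samp

whole_bone = [3, 4, 4, 5, 3, 4, 5, 4, 3, 4]      # made-up scores, whole bones
crushed_bone = [1, 2, 2, 1, 3, 2, 1, 2, 2, 1]    # made-up scores, crushed bones
stat, p = ks_2samp(whole_bone, crushed_bone)
print(f"KS statistic = {stat:.2f}, p = {p:.4f}")
```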
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. How to use the tables is also discussed. PMID:27891446
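One widely used precision-based formula for such calculations (Buderer's method) computes the number of diseased subjects needed to estimate sensitivity within a margin d, then scales up by prevalence; the sketch below uses illustrative numbers, and its results need not match the paper's tables, which were generated with PASS from a power-based formulation.

```python
# Total subjects to screen so that sensitivity is estimated within +/- d.
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_diseased = (z ** 2) * se * (1 - se) / d ** 2   # diseased subjects needed
    return ceil(n_diseased / prevalence)             # scale up by prevalence

print(n_for_sensitivity(se=0.90, d=0.05, prevalence=0.20))  # ~692 subjects
```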
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and of the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
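The three pooling operators compared above each reduce a frames-by-atoms activation matrix to one clip-level vector; a short sketch with assumed shapes follows.

```python
# Max-, average- and standard-deviation pooling over frame activations.
import numpy as np

activations = np.abs(np.random.default_rng(3).normal(size=(500, 1024)))  # frames x atoms

max_pooled = activations.max(axis=0)    # max-pooling over frames
avg_pooled = activations.mean(axis=0)   # average-pooling
std_pooled = activations.std(axis=0)    # standard-deviation pooling (best in the paper,
                                        # combined with proportional random sampling)
clip_features = np.concatenate([max_pooled, avg_pooled, std_pooled])
print(clip_features.shape)              # (3072,) summary feature vector per clip
```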
A descriptive study of effect-size reporting in research reviews.
Floyd, Judith A
2017-06-01
To describe effect-size reporting in research reviews completed in support of evidence-based practice in nursing. Many research reviews report nurses' critical appraisal of level, quality and overall strength of evidence available to address clinical questions. Several studies of research-review quality suggest effect-size information would be useful to include in these reviews, but none focused on reviewers' attention to effect sizes. Descriptive. One hundred and four reviews indexed in CINAHL as systematic reviews and published from July 2012-February 2014 were examined. Papers were required to be peer-reviewed, written in English, contain an abstract and have at least one nurse author. Reviews were excluded if they did not use critical appraisal methods to address evidence of correlation, prediction or effectiveness. Data from remaining papers (N = 73) were extracted by three or more independent coders using a structured coding form and detailed codebook. Data were stored, viewed and analysed using Microsoft Office Excel ® spreadsheet functions. Sixteen percent (n = 12) of the sample contained effect-size information. Of the 12, six included all the effect-size information recommended by APA guidelines. Independent of completeness of reporting, seven contained discussion of effect sizes in the paper, but none included effect-size information in abstracts. Research reviews available to practicing nurses often fail to include information needed to accurately assess how much improvement may result from implementation of evidence-based policies, programs, protocols or practices. Manuscript reviewers are urged to hold authors to APA standards for reporting/discussing effect-size information in both primary research reports and research reviews. © 2016 John Wiley & Sons Ltd.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
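Replicating such a calculation is straightforward for a continuous outcome. The sketch below uses the standard two-arm normal-approximation formula with hypothetical assumed versus observed values, showing how an optimistic effect-size assumption shrinks the computed n.

```python
# Standard two-arm sample size for a continuous outcome (per arm).
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

assumed = n_per_arm(delta=10.0, sd=20.0)   # investigators' assumptions (hypothetical)
observed = n_per_arm(delta=6.0, sd=22.0)   # values actually seen in the results
print(assumed, observed)                   # 63 vs 212: such a trial is underpowered
```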
Comparing particle-size distributions in modern and ancient sand-bed rivers
NASA Astrophysics Data System (ADS)
Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.
2011-12-01
Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical particle-size analysis, and statistical characterization in both modern and ancient settings. We consider potential error contributions and evaluate the degree to which this uncertainty might be significant in modern sediment-transport studies and ancient paleomorphodynamic reconstructions.
MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2011-01-01
The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A) subaqueous or hydrothermal sediments; (1B) hydrothermally altered rocks or low-temperature fluid-altered rocks (equal priority); (2) unaltered igneous rocks; (3) regolith, including airfall dust; and (4) present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere. Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
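The predictive idea can be illustrated by simulation: draw the treatment difference from the prior, simulate the trial, and tally which decision the interval estimate produces. The sketch below uses normal outcomes and invented numbers; in the paper's scheme, the sample size would then be chosen to keep the predicted "reserve judgement" probability acceptably low.

```python
# Predicted decision probabilities under a clinical prior (toy numbers).
import numpy as np

rng = np.random.default_rng(4)
prior_mean, prior_sd = 0.15, 0.10   # prior for the treatment difference (assumed)
equivalence_point = 0.05            # point of clinical equivalence (the null)
sigma, n = 1.0, 400                 # outcome SD and patients per arm

decisions = {"use new treatment": 0, "use standard treatment": 0, "reserve judgement": 0}
for _ in range(10_000):
    delta = rng.normal(prior_mean, prior_sd)
    se = sigma * np.sqrt(2 / n)
    est = rng.normal(delta, se)                  # trial estimate of the difference
    lo, hi = est - 1.96 * se, est + 1.96 * se    # interval estimate
    if lo > equivalence_point:
        decisions["use new treatment"] += 1
    elif hi < equivalence_point:
        decisions["use standard treatment"] += 1
    else:
        decisions["reserve judgement"] += 1      # interval includes equivalence
print({k: v / 10_000 for k, v in decisions.items()})
```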
Stachowiak, Jeanne C; Shugard, Erin E; Mosier, Bruce P; Renzi, Ronald F; Caton, Pamela F; Ferko, Scott M; Van de Vreugde, James L; Yee, Daniel D; Haroldsen, Brent L; VanderNoot, Victoria A
2007-08-01
For domestic and military security, an autonomous system capable of continuously monitoring for airborne biothreat agents is necessary. At present, no system meets the requirements for size, speed, sensitivity, and selectivity to warn against and lead to the prevention of infection in field settings. We present a fully automated system for the detection of aerosolized bacterial biothreat agents such as Bacillus subtilis (surrogate for Bacillus anthracis) based on protein profiling by chip gel electrophoresis coupled with a microfluidic sample preparation system. Protein profiling has previously been demonstrated to differentiate between bacterial organisms. With the goal of reducing response time, multiple microfluidic component modules, including aerosol collection via a commercially available collector, concentration, thermochemical lysis, size exclusion chromatography, fluorescent labeling, and chip gel electrophoresis were integrated together to create an autonomous collection/sample preparation/analysis system. The cycle time for sample preparation was approximately 5 min, while total cycle time, including chip gel electrophoresis, was approximately 10 min. Sensitivity of the coupled system for the detection of B. subtilis spores was 16 agent-containing particles per liter of air, based on samples that were prepared to simulate those collected by wetted cyclone aerosol collector of approximately 80% efficiency operating for 7 min.
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Surface-geophysical surveys of electrical resistivity guided borings at 25 sites on delta sandbars, where duplicate 3.7-meter-long, 3.8-centimeter-diameter cores were recovered by the direct-push method in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa.
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
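The pass/fail logic of the crush-resistance criterion quoted above reduces to a threshold check; a hypothetical sketch follows (the three fines percentages are taken from the abstract, the sample labels are invented).

```python
# API-style crush-resistance check: pass if post-crush fines stay under the
# limit (10 percent here; 6 or 8 percent for the stricter variants mentioned).
def passes_crush_test(postcrush_fines_percent: float, limit: float = 10.0) -> bool:
    return postcrush_fines_percent <= limit

samples = {"reach A location": 5.9, "reach B sandbar": 3.0, "mine output": 6.8}
for name, fines in samples.items():
    print(name, "passes" if passes_crush_test(fines) else "fails")
```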
Wider-Opening Dewar Flasks for Cryogenic Storage
NASA Technical Reports Server (NTRS)
Ruemmele, Warren P.; Manry, John; Stafford, Kristin; Bue, Grant; Krejci, John; Evernden, Bent
2010-01-01
Dewar flasks have been proposed as containers for relatively long-term (25 days) storage of perishable scientific samples or other perishable objects at a temperature of -175 °C. The refrigeration would be maintained through slow boiling of liquid nitrogen (LN2). For the purposes of the application for which these containers were proposed, (1) the neck openings of commercial off-the-shelf (COTS) Dewar flasks are too small for most NASA samples; (2) the round shapes of the COTS containers give rise to unacceptably low efficiency of packing in rectangular cargo compartments; and (3) the COTS containers include metal structures that are too thermally conductive, such that they cannot, without exceeding size and weight limits, hold enough LN2 for the required long-term storage. In comparison with COTS Dewar flasks, the proposed containers would be rectangular, yet would satisfy the long-term storage requirement without exceeding size and weight limits; would have larger neck openings; and would have greater sample volumes, with a packing efficiency (sample volume as a fraction of total volume) about double that of COTS flasks. The proposed containers would be made partly of aerospace-type composite materials and would include vacuum walls, multilayer insulation, and aerogel insulation.
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and their applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Simultaneous processing of multiple data sets could therefore potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from another experimental method. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution, including the general shape and relative peak positions of the distribution curves. The numerical example also shows that the surface relaxivity of the sample can be extracted by the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the additional data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
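A schematic of the joint-inversion idea on synthetic data follows. The kernels below are simplified stand-ins for the petrophysical forward models (T2 taken proportional to pore radius via an assumed surface relaxivity, and a Debye-like SIP kernel with tau proportional to radius squared over an assumed ion diffusion coefficient), and scipy's bounded least-squares solver plays the role of the Gauss-Newton scheme.

```python
# Both data sets are predicted from one shared pore-size distribution f,
# and the stacked misfit is minimized jointly.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
radii = np.logspace(-7, -4, 30)                    # pore radii (m)
f_true = np.exp(-0.5 * ((np.log10(radii) + 5.5) / 0.4) ** 2)
f_true /= f_true.sum()

t = np.linspace(1e-3, 1.0, 60)                     # NMR decay times (s)
rho = 1e-5                                         # surface relaxivity (m/s), assumed
G_nmr = np.exp(-np.outer(t, 2 * rho / radii))      # schematic: T2_i ~ r_i / (2 * rho)

freq = np.logspace(-2, 3, 40)                      # SIP frequencies (Hz)
D = 1e-9                                           # ion diffusion coefficient (m^2/s)
tau = radii ** 2 / (2 * D)                         # schematic: tau_i ~ r_i^2 / (2 * D)
G_sip = np.outer(freq, tau) / (1 + np.outer(freq, tau) ** 2)  # Debye-like kernel

d_nmr = G_nmr @ f_true + rng.normal(0, 1e-3, t.size)
d_sip = G_sip @ f_true + rng.normal(0, 1e-3, freq.size)

def residuals(f):
    return np.concatenate([G_nmr @ f - d_nmr, G_sip @ f - d_sip])

res = least_squares(residuals, x0=np.full(radii.size, 1.0 / radii.size),
                    bounds=(0, np.inf))            # non-negative distribution
print(np.round(res.x[:5], 4))                      # jointly estimated f (first bins)
```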
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas and parallel statistical developments concerned with the design and analysis of these trials have been stimulated. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation are also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
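The central adjustment for a continuous or binary endpoint is the design effect; a minimal sketch follows, including the coefficient-of-variation term for unequal cluster sizes (all numbers are illustrative).

```python
# Inflate an individually randomized sample size by the design effect:
# DE = 1 + ((1 + cv^2) * m - 1) * ICC, with cv the coefficient of
# variation of cluster sizes and m the mean cluster size.
from math import ceil

def cluster_trial_size(n_individual, mean_cluster_size, icc, cv=0.0):
    design_effect = 1 + ((1 + cv ** 2) * mean_cluster_size - 1) * icc
    n_total = n_individual * design_effect
    n_clusters_per_arm = ceil(n_total / (2 * mean_cluster_size))
    return ceil(n_total), n_clusters_per_arm

# e.g. 400 patients under individual randomization, clusters of 20, ICC 0.05
print(cluster_trial_size(400, 20, 0.05, cv=0.4))   # -> (844, 22)
```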
Size effect on atomic structure in low-dimensional Cu-Zr amorphous systems.
Zhang, W B; Liu, J; Lu, S H; Zhang, H; Wang, H; Wang, X D; Cao, Q P; Zhang, D X; Jiang, J Z
2017-08-04
The size effect on the atomic structure of the Cu64Zr36 amorphous system, including zero-dimensional small-size amorphous particles (SSAPs) and two-dimensional small-size amorphous films (SSAFs) together with the bulk sample, was investigated by molecular dynamics simulations. We revealed that sample size strongly affects local atomic structure in both Cu64Zr36 SSAPs and SSAFs, which are composed of core and shell (surface) components. Compared with the core component, the shell component of the SSAPs has a lower average coordination number and average bond length, a higher degree of ordering, and a lower packing density due to the segregation of Cu atoms on the shell of the Cu64Zr36 SSAPs. These atomic structure differences in SSAPs of various sizes result in different glass transition temperatures: the glass transition temperature of the shell component is found to be 577 K, much lower than the 910 K of the core component. We further extended the analysis of the size effect on structure and glass transition temperature to Cu64Zr36 SSAFs, and revealed that Tg decreases as the SSAF becomes thinner due to the following factors: different dynamic motion (mean square displacement), different densities of core and surface, and Cu segregation on the surface of the SSAFs. The results obtained here differ from those reported for the size effect on the atomic structure of nanometer-sized crystalline metallic alloys.
Stress dependence of microstructures in experimentally deformed calcite
NASA Astrophysics Data System (ADS)
Platt, John P.; De Bresser, J. H. P.
2017-12-01
Optical measurements of microstructural features in experimentally deformed Carrara marble help define their dependence on stress. These features include dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Lρ), and the maximum scale length for surface-energy-driven grain-boundary migration (Lγ). Taken together with previously published data, Dr defines a paleopiezometer over the stress range 15-291 MPa and the temperature range 500-1000 °C, with a stress exponent of -1.09 (CI -1.27 to -0.95) and no detectable dependence on temperature. Sg and Dr measured in the same samples are closely similar in size, suggesting that the new grains did not grow significantly after nucleation. Lρ and Lγ measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps define the boundary between a region of dominant strain-energy-driven grain-boundary migration at high stress and a region of dominant surface-energy-driven grain-boundary migration at low stress.
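Piezometer calibrations of this kind are straight lines in log-log space; a sketch of the fit follows, with invented data pairs (only the stress range matches the abstract).

```python
# Fit D = B * sigma**m by linear regression in log-log space.
import numpy as np

stress = np.array([15.0, 40.0, 90.0, 180.0, 291.0])     # MPa
grain_size = np.array([210.0, 80.0, 35.0, 17.0, 11.0])  # um, invented values

m, log_B = np.polyfit(np.log10(stress), np.log10(grain_size), 1)
print(f"stress exponent m = {m:.2f}, B = {10 ** log_B:.1f}")  # m near -1, cf. -1.09
```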
Minimal-assumption inference from population-genomic data
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Hallatschek, Oskar
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing
2018-05-09
We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
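As a schematic of the chosen model class, here is a penalized logistic regression evaluated by cross-validated ROC AUC on a synthetic expression matrix; the real pipeline also handled batch effects, sample heterogeneity, and in silico mixing, none of which is reproduced here.

```python
# L2-penalized logistic regression with cross-validated AUC (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_samples, n_genes = 90, 500
X = rng.normal(size=(n_samples, n_genes))   # stand-in for expression values
y = rng.integers(0, 2, size=n_samples)      # 1 = UIP pattern, 0 = non-UIP
X[y == 1, :20] += 1.0                       # plant signal in 20 "genes"

clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC ~ {auc:.2f}")
```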
Experimental Effects on IR Reflectance Spectra: Particle Size and Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiswenger, Toya N.; Myers, Tanya L.; Brauer, Carolyn S.
For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on a species' infrared reflectance spectra. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface and a bulk phenomenon, incorporating dispersion as well as absorption effects. The same spectral features can even be observed as either a maximum or a minimum. The complex effects depend on particle size and preparation, as well as on the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going amplitudes in the reflectance spectrum usually result from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are well known, we report seminal measurements of reflectance along with quantified particle size of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3-16 micron range for various bulk materials that have a combination of strong and weak absorption bands, in order to understand the effects on the spectral features as a function of the mean grain size of the sample. We report results for both sodium sulfate (Na2SO4) and ammonium sulfate ((NH4)2SO4); the optical constants have been measured for (NH4)2SO4. To take a step from the laboratory to the field, we explore our understanding of particle size effects on reflectance spectra using standoff detection. This has helped identify weaknesses and strengths in detection at standoff distances of up to 160 meters from the target. The studies have shown that particle size has an enormous influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data that include the particle sizes of interest.
Petitpas, Christian M; Turner, Jefferson T; Deeds, Jonathan R; Keafer, Bruce A; McGillicuddy, Dennis J; Milligan, Peter J; Shue, Vangie; White, Kevin D; Anderson, Donald M
2014-05-01
As part of the Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin levels in various plankton size fractions, and the community composition of potential grazers of A. fundyense in plankton size fractions during blooms of this toxic dinoflagellate in the coastal Gulf of Maine and on Georges Bank in spring and summer of 2007, 2008, and 2010. PSP toxins and A. fundyense cells were found throughout the sampled water column (down to 50 m) in the 20-64 μm size fractions. While PSP toxins were widespread throughout all size classes of the zooplankton grazing community, the majority of the toxin was measured in the 20-64 μm size fraction. A. fundyense cellular toxin content estimated from field samples was significantly higher in the coastal Gulf of Maine than on Georges Bank. Most samples containing PSP toxins in the present study had diverse assemblages of grazers. However, some samples clearly suggested PSP toxin accumulation in several different grazer taxa including tintinnids, heterotrophic dinoflagellates of the genus Protoperidinium, barnacle nauplii, the harpacticoid copepod Microsetella norvegica, the calanoid copepods Calanus finmarchicus and Pseudocalanus spp., the marine cladoceran Evadne nordmanni, and hydroids of the genus Clytia. Thus, a diverse assemblage of zooplankton grazers accumulated PSP toxins through food-web interactions. This raises the question of whether PSP toxins pose a potential human health risk not only from nearshore bivalve shellfish, but also potentially from fish and other upper-level consumers in zooplankton-based pelagic food webs.
The Army Communications Objectives Measurement System (ACOMS): Survey Design
1988-04-01
monthly basis so that the annual sample includes sufficient Hispanics to detect at the 0.80 power level: (1) Year-to-year changes of 3% in item...Hispanics. The requirements are listed in terms of power level and must be translated into requisite sample sizes. The requirements are expressed as the...annual samples needed to detect certain differences at the 80% power level. Differences in both directions are to be examined, so that a two-tailed
A new test apparatus for studying the failure process during loading experiments of snow
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2016-04-01
We developed a new apparatus for fully load-controlled snow failure experiments. The deformation and applied load are measured with two displacement and two force sensors, respectively. The loading experiments are recorded with a high-speed camera, and the local strain is derived by a particle image velocimetry (PIV) algorithm. To monitor the progressive failure process within the snow sample, our apparatus includes six piezoelectric transducers that record the acoustic emissions in the ultrasonic range. The six sensors allow localizing the sources of the acoustic emissions, i.e. where the failure process starts and how it develops with time towards catastrophic failure. The square snow samples have a side length of 50 cm and a height of 10 to 20 cm. With an area of 0.25 m2, they are clearly larger than the samples used in previous experiments. The size of the samples, which is comparable to the critical size for the onset of crack propagation leading to dry-snow slab avalanche release, allows studying the failure nucleation process and its relation to the spatial distribution of the recorded acoustic emissions. Furthermore, the occurrence of acoustic emission features typical of imminent sample failure can be analysed. We present preliminary results of the acoustic emissions recorded during tests with homogeneous as well as layered snow samples, including a weak layer, for varying loading rates and loading angles.
Puls, Robert W.; Eychaner, James H.; Powell, Robert M.
1996-01-01
Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determination of aqueous inorganic geochemistry and assessment of contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used for analysis of filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations using different filter-pore sizes were generally not large, with only two wells having differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.
Rakow, Tobias; El Deeb, Sami; Hahne, Thomas; El-Hady, Deia Abd; AlBishri, Hassan M; Wätzig, Hermann
2014-09-01
In this study, size-exclusion chromatography and high-resolution atomic absorption spectrometry methods have been developed and evaluated to test the stability of proteins during sample pretreatment. This especially includes different storage conditions but also adsorption before or even during the chromatographic process. For the development of the size exclusion method, a Biosep S3000 5 μm column was used for investigating a series of representative model proteins, namely bovine serum albumin, ovalbumin, monoclonal immunoglobulin G antibody, and myoglobin. Ambient temperature storage was found to be harmful to all model proteins, whereas short-term storage up to 14 days could be done in an ordinary refrigerator. Freezing the protein solutions was always complicated and had to be evaluated for each protein in the corresponding solvent. To keep the proteins in their native state a gentle freezing temperature should be chosen, hence liquid nitrogen should be avoided. Furthermore, a high-resolution continuum source atomic absorption spectrometry method was developed to observe the adsorption of proteins on container material and chromatographic columns. Adsorption to any container led to a sample loss and lowered the recovery rates. During the pretreatment and high-performance size-exclusion chromatography, adsorption caused sample losses of up to 33%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A comparative review of methods for comparing means using partially paired data.
Guo, Beibei; Yuan, Ying
2017-06-01
In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. Partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimator. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data are normally distributed, and the optimal pooled t-test performs best when the data are not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data, and the paired t-test is to be recommended when only one variable has missing data.
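As a point of reference for two of the simpler approaches reviewed above, here is a minimal R sketch (an editor's illustration with made-up data, not the authors' code; the optimal pooled t-test and the modified maximum likelihood test require the original article). It simulates partially paired data, then applies the paired t-test to the complete pairs and the two-sample t-test to all observations:

    # Simulate partially paired data: n1 complete pairs, plus n2 subjects
    # with only X observed and n3 subjects with only Y observed.
    set.seed(1)
    n1 <- 30; n2 <- 10; n3 <- 10
    rho <- 0.5; delta <- 0.5                  # correlation and true mean difference
    x_pair <- rnorm(n1)
    y_pair <- rho * x_pair + sqrt(1 - rho^2) * rnorm(n1) + delta
    x_only <- rnorm(n2)                       # unpaired X observations
    y_only <- rnorm(n3) + delta               # unpaired Y observations

    t.test(y_pair, x_pair, paired = TRUE)         # paired t-test: complete pairs only
    t.test(c(y_pair, y_only), c(x_pair, x_only))  # two-sample t-test: all data, pairing ignored

The trade-off the review quantifies is visible here: the paired test discards the unpaired observations, while the two-sample test uses all of them but ignores the correlation.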
Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe
2017-10-17
Near-infrared spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground, and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC), and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied, for a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has minor influence on the quality of near-infrared spectroscopy (NIR) predictions, laying the ground for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
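For readers who want to reproduce the winning pretreatment pipeline, the following R sketch applies SNV followed by a second derivative before PLS regression. The function names are taken from the prospectr and pls packages and are an assumption on my part (the abstract does not state which software was used); the derivative window and polynomial settings, and the input objects spectra, toc, and dv2_unknown, are illustrative placeholders:

    library(prospectr)  # assumed to supply standardNormalVariate() and savitzkyGolay()
    library(pls)        # assumed to supply plsr()

    # spectra: matrix of absorbances (samples x wavelengths); toc: reference TOC values
    snv <- standardNormalVariate(spectra)
    dv2 <- savitzkyGolay(snv, m = 2, p = 3, w = 11)  # 2nd derivative; settings are guesses

    cal <- data.frame(toc = toc, spec = I(dv2))
    model <- plsr(toc ~ spec, ncomp = 10, data = cal, validation = "LOO")
    # External validation on identically pretreated spectra of unknown samples:
    predict(model, newdata = data.frame(spec = I(dv2_unknown)), ncomp = 5)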
The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release
NASA Astrophysics Data System (ADS)
Carter, John; OMEGA and CRISM Teams
2016-10-01
Aqueous minerals have been identified from orbit at a number of localities, and their analysis has helped refine the water history of early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks, such as a limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and the lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address such limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data are provided, including deposit sizes, morphology, and detailed composition when available. Sampling biases are also addressed. It will be openly distributed in GIS-ready format and will be participative. For example, it will be possible for researchers to submit requests for specific mapping of regions of interest, or to add or refine mineral detections. An initial release is scheduled in Fall 2016 and will feature a two-orders-of-magnitude increase in sample size compared to previous studies.
EXTENDING THE FLOOR AND THE CEILING FOR ASSESSMENT OF PHYSICAL FUNCTION
Fries, James F.; Lingala, Bharathi; Siemons, Liseth; Glas, Cees A. W.; Cella, David; Hussain, Yusra N; Bruce, Bonnie; Krishnan, Eswar
2014-01-01
Objective The objective of the current study was to improve the assessment of physical function by improving the precision of assessment at the floor (extremely poor function) and at the ceiling (extremely good health) of the health continuum. Methods Under the NIH PROMIS program, we developed new physical function floor and ceiling items to supplement the existing item bank. Using item response theory (IRT) and the standard PROMIS methodology, we developed 30 floor items and 26 ceiling items and administered them during a 12-month prospective observational study of 737 individuals at the extremes of health status. Change over time was compared across anchor instruments and across items by means of effect sizes. Using the observed changes in scores, we back-calculated sample size requirements for the new and comparison measures. Results We studied 444 subjects with chronic illness and/or extreme age, and 293 generally fit subjects including athletes in training. IRT analyses confirmed that the new floor and ceiling items outperformed reference items (p<0.001). The estimated post-hoc sample size requirements were reduced by a factor of two to four at the floor and a factor of two at the ceiling. Conclusion Extending the range of physical function measurement can substantially improve measurement quality, can reduce sample size requirements and improve research efficiency. The paradigm shift from Disability to Physical Function includes the entire spectrum of physical function, signals improvement in the conceptual base of outcome assessment, and may be transformative as medical goals more closely approach societal goals for health. PMID:24782194
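The reported sample size reduction follows from the usual power calculation, in which the required n scales inversely with the square of the detectable effect size. A hedged illustration in base R with arbitrary effect sizes (not the study's values):

    # n per group for 80% power at alpha = 0.05, two-sided two-sample t-test:
    power.t.test(delta = 0.3, sd = 1, sig.level = 0.05, power = 0.8)$n  # ~175
    power.t.test(delta = 0.6, sd = 1, sig.level = 0.05, power = 0.8)$n  # ~45

Doubling the detectable standardized effect cuts the required sample size roughly fourfold, which is the mechanism behind the reported two- to four-fold reductions at the floor and ceiling.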
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
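One plausible reading of the first calculator, sketched in R under the standard reversible catalytic model (an assumption on my part; the paper should be consulted for the exact transformation): at equilibrium, seroprevalence at age a satisfies p(a) = λ/(λ+ρ) × (1 − exp(−(λ+ρ)a)) for SCR λ and known SRR ρ, so a Wald confidence interval for SP can be inverted numerically into one for SCR. For simplicity, the sketch evaluates everything at a single representative age rather than a full age distribution:

    sp_to_scr <- function(p, rho, age) {
      # Solve p = lambda/(lambda + rho) * (1 - exp(-(lambda + rho) * age)) for lambda.
      uniroot(function(l) l / (l + rho) * (1 - exp(-(l + rho) * age)) - p,
              interval = c(1e-8, 10))$root
    }

    n <- 250; x <- 60; rho <- 0.01; age <- 20   # illustrative values
    p_hat <- x / n
    ci_p  <- p_hat + c(-1, 1) * 1.96 * sqrt(p_hat * (1 - p_hat) / n)
    scr_hat <- sp_to_scr(p_hat, rho, age)
    ci_scr  <- sapply(ci_p, sp_to_scr, rho = rho, age = age)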
NASA Astrophysics Data System (ADS)
Schulte, Wolfgang; Hofer, Stefan; Hofmann, Peter; Thiele, Hans; von Heise-Rotenburg, Ralf; Toporski, Jan; Rettberg, Petra
2007-06-01
For more than a decade Kayser-Threde, a medium-sized enterprise of the German space industry, has been involved in astrobiology research in partnership with a variety of scientific institutes from all over Europe. Previous projects include exobiology research platforms in low Earth orbit on retrievable carriers and onboard the Space Station. More recently, exobiology payloads for in situ experimentation on Mars have been studied by Kayser-Threde under ESA contracts, specifically the ExoMars Pasteur Payload. These studies included work on a sample preparation and distribution systems for Martian rock/regolith samples, instrument concepts such as Raman spectroscopy and a Life Marker Chip, advanced microscope systems as well as robotic tools for astrobiology missions. The status of the funded technical studies and major results are presented. The reported industrial work was funded by ESA and the German Aerospace Center (DLR).
Cosmic Dust in ~50 KG Blocks of Blue Ice from Cap-Prudhomme and Queen Alexandra Range, Antarctica
NASA Astrophysics Data System (ADS)
Maurette, M.; Cragin, J.; Taylor, S.
1992-07-01
Favorable Antarctic blue ice fields have produced a large number of meteorite finds because of the ice ablation concentration process (Cassidy et al., 1982). Such ice fields should also concentrate cosmic dust grains, including both spherules and unmelted micrometeorites. Here we present preliminary results of concentrations of cosmic dust grains in ice from two very different Antarctic blue ice fields. The first sample (~60 kg) was collected in January 1987 from the surface of the blue ice field at Cap-Prudhomme (CP), near the French station of Dumont d'Urville, by a team from the "Laboratoire de Glaciologie du CNRS" (A. Barnola). The second sample (~50 kg) was retrieved from a meteorite stranding surface near the Queen Alexandra range (QUE) by a team (M. Burger, W. Cassidy, and R. Walker) of the ANSMET 1990 field expedition in Antarctica. Both samples were transported frozen to the laboratory where they were subdivided and processed. The CP sample was cut with a stainless steel saw into 4 pieces, while the QUE sample, which had the top surface identified, was cut into three equal (~15 cm) horizontal layers to allow assessment of constituent variability with depth. All subsequent work on both samples was performed in a class 100 clean room using procedures developed by M. de Angelis and M. Maurette aimed at minimizing the loss of extraterrestrial particles. Pieces of both samples were cleaned by rinsing thoroughly with ultrapure water (Milli-Q) and then melted in polyethylene containers in a microwave oven. Aliquots were decanted for chemical analysis and the remaining meltwater was filtered through stainless steel sieves for collection of large (>30 micrometers) particles. Using a 30X binocular microscope, particles were hand picked for subsequent SEM/EDX analyses. Our initial objective was to compare the cosmic dust concentration in ice from the two locations. But this comparison was only partial because in the CP-ice, only magnetic spherules of >50 micrometers were studied, whereas the QUE-ice studies included measurements of the depth variation of various characteristics, such as the size distribution and concentration of both cosmic spherules and unmelted chondritic micrometeorites (AMMs), the concentrations of grains in the ~1-10-micrometer size range, and the concentration of trace elements in the ice. In addition, both magnetic and nonmagnetic particles were collected from the QUE-ice. The concentration of chondritic spherules >50 micrometers in size is similar at both locations: in the CP-ice 5 spherules were found in 40 kg of residual ice (after cleaning), and 7 spherules (including a nonmagnetic one) were recovered from 50 kg of QUE-ice. The QUE sample contained 11 AMMs (including 3 grains with sizes ~30-50 micrometers), resulting in a ratio of unmelted to melted micrometeorites with sizes >50 micrometers (~1), which is much lower than the CP ratio of >5 (obtained for particles subsequently recovered from 360 tons of CP-ice). The QUE sample showed that particles >100 micrometers in size are found primarily within the top 15 cm of ice while smaller particles are found in the bottom layers (30-50 cm). In contrast to CP-ice, QUE-ice contains many annealed stress cracks that etch very quickly in water. Despite the very different glaciological and climatological regimes at the CP and QUE ice fields, concentrations of cosmic spherules are surprisingly similar. The ratio of AMMs to spherules does vary, however.
The depth variations of the characteristics of cosmic dust grains trapped in the ~50-cm-thick top layer of a blue ice field are already very useful to select favorable zones to collect micrometeorites. In addition, they might provide insight into both climatic and ice flow parameters. Acknowledgements. We thank W.A. Cassidy and G. Crozaz for comments and R.M. Walker for his support and interest. REFERENCES. Cassidy W.A. and Rancitelli L.A. (1982) Am. Scientist 70, 156-164.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
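The arithmetic behind the ≤0.50 claim is direct for a two-sided z-test (a sketch of the standard argument, not the authors' derivation): when the observed effect size is plugged in as the assumed effect, power becomes a function of the observed z statistic alone, and a result exactly at the significance boundary has an observed power of about 0.50; any non-rejected test falls below that.

    obs_power <- function(z_obs, alpha = 0.05) {
      zc <- qnorm(1 - alpha / 2)
      # Power of a two-sided z-test at an assumed effect equal to the observed one:
      pnorm(z_obs - zc) + pnorm(-z_obs - zc)
    }
    obs_power(qnorm(0.975))  # p = 0.05 exactly: observed power ~ 0.50
    obs_power(1.0)           # clearly non-significant: observed power ~ 0.17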
Experimental Design in Clinical 'Omics Biomarker Discovery.
Forshed, Jenny
2017-11-03
This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities as close to the truth as possible from biochemical analyses, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection, that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this tutorial is to help translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. We aimed to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
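A delta-method version of the calculation described above, offered as a hedged sketch in R (the authors' exact variance formulation may differ): with N = M/P, the relative standard error of the size estimate equals that of P, so for a survey design effect D and a target relative precision r the required respondent-driven sampling size is approximately n = D(1 − P)/(r²P).

    rds_n <- function(P, deff, rel_prec) {
      # n so that SE(N_hat) / N_hat is about rel_prec, where N_hat = M / P_hat
      deff * (1 - P) / (rel_prec^2 * P)
    }
    rds_n(P = 0.10, deff = 2, rel_prec = 0.15)  # ~800 respondents
    rds_n(P = 0.30, deff = 2, rel_prec = 0.15)  # ~207 respondents

The formula makes the abstract's warning concrete: the required sample size blows up as P shrinks, which is why distributing more unique objects (raising P) is attractive.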
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement of existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency in the literature under some conditions; under such conditions, the existing measure underestimates the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
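For orientation, a widely used approximation for the design effect under variable cluster sizes (a standard textbook form, not the authors' noncentrality-based measure) inflates the equal-size design effect by the squared coefficient of variation (cv) of cluster size; a short R sketch with illustrative inputs:

    deff_unequal <- function(m_bar, cv, icc) 1 + ((cv^2 + 1) * m_bar - 1) * icc

    n_indep <- 128                        # n per group under individual randomization
    m_bar <- 20; icc <- 0.05
    deff_unequal(m_bar, cv = 0,   icc)    # 1.95 with equal cluster sizes
    deff_unequal(m_bar, cv = 0.6, icc)    # 2.31 with moderately variable sizes
    ceiling(n_indep * deff_unequal(m_bar, 0.6, icc) / m_bar)  # ~15 clusters per group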
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming to quantify the strength of population synchrony while accounting for uncertainty in population size estimates.
Pinto-Leite, C M; Rocha, P L B
2012-12-01
Empirical studies using visual search methods to investigate spider communities have been conducted with different sampling protocols, including a variety of plot sizes, sampling efforts, and diurnal periods for sampling. We sampled 11 plots ranging in size from 5 by 10 m to 5 by 60 m. In each plot, we recorded the total number of species detected in each 10-min interval over 1 hr, during both the daytime and the nighttime (0630 hours to 1100 hours, a.m. and p.m., respectively). We measured the influence of time effort on the measurement of species richness by comparing the curves produced by sample-based rarefaction and species richness estimation (first-order jackknife). We used a general linear model with repeated measures to assess whether the phase of the day during which sampling occurred and the differences in plot lengths influenced the number of species observed and the number of species estimated. To measure the differences in species composition between the phases of the day, we used a multiresponse permutation procedure and a graphical representation based on nonmetric multidimensional scaling. After 50 min of sampling, we noted a decreased rate of species accumulation and a tendency of the estimated richness curves to reach an asymptote. We did not detect an effect of plot size on the number of species sampled. However, differences in observed species richness and species composition were found between phases of the day. Based on these results, we propose guidelines for visual searches for tropical web spiders.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems exist when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step in data analysis would be to use the entire classification error matrices, using the methods of discrete multivariate analysis or of multivariate analysis of variance.
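The error-matrix bookkeeping described above takes a few lines in R (illustrative data, not from the paper):

    interpreted <- factor(c("forest", "forest", "water", "urban", "water", "forest"))
    verified    <- factor(c("forest", "water", "water", "urban", "water", "forest"))
    m <- table(interpreted, verified)       # rows: interpretation; columns: verification

    overall    <- sum(diag(m)) / sum(m)     # proportion correctly classified
    commission <- 1 - diag(m) / rowSums(m)  # errors of commission, by category
    omission   <- 1 - diag(m) / colSums(m)  # errors of omission, by category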
Portable Device Analyzes Rocks and Minerals
NASA Technical Reports Server (NTRS)
2008-01-01
inXitu Inc., of Mountain View, California, entered into a Phase II SBIR contract with Ames Research Center to develop technologies for the next generation of scientific instruments for materials analysis. The work resulted in a sample handling system that could find a wide range of applications in research and industrial laboratories as a means to load powdered samples for analysis or process control. Potential industries include chemical, cement, inks, pharmaceutical, ceramics, and forensics. Additional applications include characterizing materials that cannot be ground to a fine size, such as explosives and research pharmaceuticals.
Apparatus for sampling and characterizing aerosols
Dunn, Patrick F.; Herceg, Joseph E.; Klocksieben, Robert H.
1986-01-01
Apparatus for sampling and characterizing aerosols having a wide particle size range at relatively low velocities may comprise a chamber having an inlet and an outlet, the chamber including: a plurality of vertically stacked, successive particle collection stages; each collection stage includes a separator plate and a channel guide mounted transverse to the separator plate, defining a labyrinthine flow path across the collection stage. An opening in each separator plate provides a path for the aerosols from one collection stage to the next. Mounted within each collection stage are one or more particle collection frames.
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes in a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine whether these organic constituents have a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample, which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
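The practical consequence is easy to see with the Fisher z approximation (a sketch using the survey's median sample sizes; the author's own precision criteria are in the paper): the 95% confidence interval for a reliability coefficient of .80 narrows only slowly with N.

    r_ci <- function(r, n) {
      z <- atanh(r); se <- 1 / sqrt(n - 3)   # Fisher z transform and its standard error
      tanh(z + c(-1, 1) * qnorm(0.975) * se)
    }
    r_ci(0.80, 36)    # interjudge median N: roughly (.64, .89)
    r_ci(0.80, 90)    # overall median N:    roughly (.71, .86)
    r_ci(0.80, 260)   # mean N:              roughly (.75, .84)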
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Sources of variability in collection and preparation of paint and lead-coating samples.
Harper, S L; Gutknecht, W F
2001-06-01
Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that the data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces, including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar-and-pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate the similar, homogeneous particle size distributions required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar-and-pestle grinding, 4% had losses of 20% or greater, with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two, both in Pb concentration (mg cm-2 or %) and in areal coating density (g cm-2). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg cm-2 Pb. In conclusion, these sources of variability must be considered in the development and/or application of any sampling and analysis methodology.
NASA Astrophysics Data System (ADS)
Greene, V.; Adams, S.; Adams, A.
2017-12-01
Microplastics and plastics have polluted water all over the world, including the Great Lakes. Microplastics can result from plastics that have broken up into smaller pieces, or they can be purposely made and used in a variety of products. Microplastics are less than 5 mm in length. These plastics can cause problems because they are non-biodegradable. Animals that have ingested these plastics have shown reduced reproductive rates and other health problems, and some have died from malnutrition. Our goal is to learn more about this issue. To do this, we will take water samples from different areas along the Gulf of Mexico and inland bays along the Florida coastline and compare the amount of microplastics found in each area. To analyze our samples we will vacuum-filter the water samples using gridded filter paper. We will then organize these samples by size and color. The control for our experiment will be filtered water. Our hypothesis is that Gulf of Mexico water samples will have more microplastics than the bay water samples. We want to research this topic because microplastics can harm our ecosystems by affecting the health of marine animals.
Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L
2017-08-01
Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have statistically significant effects on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and as much as 19.5 h of characterization time and 166 GB of data storage per sample, even for a relatively small sample volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of the pixel size and rotation step on the results is provided, which can notably be useful to refine the practice of µ-CT characterization of scaffolds and economize the related resources.
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
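The flavor of such a model can be conveyed in a few lines of R (a simplified sketch under assumed parameters, not the authors' open-source code): draw n specimens from a population with within-population δ18O scatter, let a fixed fraction be diagenetically altered by an assumed offset, and watch the spread of the sample mean shrink as n grows while the expected bias (fraction × offset, 0.3‰ here) stays put.

    sim_mean <- function(n, frac_altered = 0.1, shift = 3, sd_pop = 0.5, reps = 1e4) {
      replicate(reps, {
        d18o <- rnorm(n, mean = 0, sd = sd_pop)  # unaltered specimens
        k <- rbinom(1, n, frac_altered)          # number of altered specimens
        if (k > 0) d18o[seq_len(k)] <- d18o[seq_len(k)] + shift
        mean(d18o)
      })
    }
    sapply(c(5, 10, 30), function(n) sd(sim_mean(n)))  # spread of the sample mean vs. n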
Beverage Cans Used for Sediment Collection.
ERIC Educational Resources Information Center
Studlick, Joseph R. J.; Trautman, Timothy A.
1979-01-01
Beverage cans are well suited for use as sediment collection and storage containers. Advantages include being free, readily available, and the correct size for many samples. Instructions for the selection, preparation, and use of cans in sediment collection and storage are provided. (RE)
Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation.
Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb
2011-07-15
We report on what are, to our knowledge, the first results obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ∼550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection. © 2011 Optical Society of America
Thermophoretic separation of aerosol particles from a sampled gas stream
Postma, A.K.
1984-09-07
This disclosure relates to separation of aerosol particles from gas samples withdrawn from within a contained atmosphere, such as containment vessels for nuclear reactors or other process equipment where remote gaseous sampling is required. It is specifically directed to separation of dense aerosols including particles of any size and at high mass loadings and high corrosivity. The United States Government has rights in this invention pursuant to Contract DE-AC06-76FF02170 between the US Department of Energy and Westinghouse Electric Corporation.
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Melvin, Elizabeth M; Moore, Brandon R; Gilchrist, Kristin H; Grego, Sonia; Velev, Orlin D
2011-09-01
The recent development of microfluidic "lab on a chip" devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing.
Porous silicon structures with high surface area/specific pore size
Northrup, M.A.; Yu, C.M.; Raley, N.F.
1999-03-16
Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.
Yildiz, Elvin H; Fan, Vincent C; Banday, Hina; Ramanathan, Lakshmi V; Bitra, Ratna K; Garry, Eileen; Asbell, Penny A
2009-07-01
To evaluate the repeatability and accuracy of a new tear osmometer that measures the osmolality of 0.5-microL (500-nanoliter) samples. Four standardized solutions were tested with 0.5-microL (500-nanoliter) samples for repeatability of measurements and comparability to a standardized technique. Two known standard salt solutions (290 mOsm/kg H2O, 304 mOsm/kg H2O), a normal artificial tear matrix sample (306 mOsm/kg H2O), and an abnormal artificial tear matrix sample (336 mOsm/kg H2O) were repeatedly tested (n = 20 each) for osmolality with use of the Advanced Instruments Model 3100 Tear Osmometer (0.5-microL [500-nanoliter] sample size) and the FDA-approved Advanced Instruments Model 3D2 Clinical Osmometer (250-microL sample size). Four standard solutions were used, with osmolality values of 290, 304, 306, and 336 mOsm/kg H2O. The respective precision data (mean and standard deviation) were: 291.8 +/- 4.4, 305.6 +/- 2.4, 305.1 +/- 2.3, and 336.4 +/- 2.2 mOsm/kg H2O. The percent recoveries for the 290 mOsm/kg H2O standard solution, the 304 mOsm/kg H2O reference solution, the normal value-assigned 306 mOsm/kg H2O sample, and the abnormal value-assigned 336 mOsm/kg H2O sample were 100.3%, 100.2%, 99.8%, and 100.3%, respectively. The repeatability data are in accordance with data obtained on clinical osmometers with use of larger sample sizes. All 4 samples tested on the tear osmometer had osmolality values that correlate well with the clinical instrument method. The tear osmometer is a suitable instrument for testing the osmolality of microliter-sized samples, such as tears, and therefore may be useful in diagnosing, monitoring, and classifying tear abnormalities such as the severity of dry eye disease.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
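A compact simulation sketch in R of the procedure studied (an editor's illustration with a simple re-estimation rule; the paper derives the exact distribution rather than simulating):

    blinded_ssr <- function(delta, delta_plan = 0.5, sd_true = 1, n1 = 20,
                            alpha = 0.05, power = 0.9, reps = 1e4) {
      reject <- replicate(reps, {
        x1 <- rnorm(n1, 0, sd_true); y1 <- rnorm(n1, delta, sd_true)
        s_blind <- sd(c(x1, y1))   # one-sample SD of pooled data, arm labels ignored
        n_new <- max(n1, ceiling(2 * s_blind^2 *
                     (qnorm(1 - alpha / 2) + qnorm(power))^2 / delta_plan^2))
        x <- c(x1, rnorm(n_new - n1, 0, sd_true))      # recruit to re-estimated n
        y <- c(y1, rnorm(n_new - n1, delta, sd_true))
        t.test(x, y, var.equal = TRUE)$p.value < alpha
      })
      mean(reject)
    }
    blinded_ssr(delta = 0.5)  # empirical power near the 90% target
    blinded_ssr(delta = 0)    # empirical type I error, to compare with 0.05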
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given in the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Conn, Kathleen E.; Black, Robert W.; Peterson, Norman T.; Senter, Craig A.; Chapman, Elena A.
2018-01-05
From August 2016 to March 2017, the U.S. Geological Survey (USGS) collected representative samples of filtered and unfiltered water and suspended sediment (including the colloidal fraction) at USGS streamgage 12113390 (Duwamish River at Golf Course, at Tukwila, Washington) during 13 periods of differing flow conditions. Samples were analyzed by Washington-State-accredited laboratories for a large suite of compounds, including metals, dioxins/furans, semivolatile compounds including polycyclic aromatic hydrocarbons, butyltins, the 209 polychlorinated biphenyl (PCB) congeners, and total and dissolved organic carbon. Concurrent with the chemistry sampling, water-quality field parameters were measured, and representative water samples were collected and analyzed for river suspended-sediment concentration and particle-size distribution. The results provide new data that can be used to estimate sediment and chemical loads transported by the Green River to the Lower Duwamish Waterway.
Suspended sediments from upstream tributaries as the source of downstream river sites
NASA Astrophysics Data System (ADS)
Haddadchi, Arman; Olley, Jon
2014-05-01
Understanding the efficiency with which sediment eroded from different sources is transported to the catchment outlet is a key knowledge gap that is critical to our ability to accurately target and prioritise management actions to reduce sediment delivery. Sediment fingerprinting has proven to be an efficient approach to determining the sources of sediment. This study examines the suspended sediment sources in the Emu Creek catchment, south eastern Queensland, Australia. In addition to collecting suspended sediments from stream sites downstream of tributary confluences and at the outlet of the catchment, time-integrated suspended sediment samples from the upper tributaries were used as the sediment sources, instead of hillslope and channel bank samples. In total, 35 time-integrated samplers were used to compute the contribution of suspended sediments from different upstream waterways to the downstream sediment sites. Three size fractions, fine sand (63-210 μm), silt (10-63 μm), and fine silt and clay (<10 μm), were used to determine the effect of particle size on the contribution of upstream sediments as the sources of sediment below river confluences. Samples were then analysed by ICP-MS and ICP-OES to obtain 41 sediment fingerprint properties. According to the results of a Student's t-distribution mixing model, small creeks in the middle and lower parts of the catchment were the major sources in the different size fractions, especially in the silt (10-63 μm) samples. Gowrie Creek, which covers the southern upstream part of the catchment, was a major contributor at the outlet of the catchment in the finest size fraction (<10 μm). Large differences between the contributions of suspended sediments from the upper tributaries in different size fractions necessitate the selection of an appropriate size fraction for sediment tracing in the catchment and indicate a major effect of particle size on the movement and deposition of sediments.
Evidence of a chimpanzee-sized ancestor of humans but a gibbon-sized ancestor of apes.
Grabowski, Mark; Jungers, William L
2017-10-12
Body mass directly affects how an animal relates to its environment and has a wide range of biological implications. However, little is known about the mass of the last common ancestor (LCA) of humans and chimpanzees, hominids (great apes and humans), or hominoids (all apes and humans), which is needed to evaluate numerous paleobiological hypotheses at and prior to the root of our lineage. Here we use phylogenetic comparative methods and data from primates including humans, fossil hominins, and a wide sample of fossil primates including Miocene apes from Africa, Europe, and Asia to test alternative hypotheses of body mass evolution. Our results suggest, contrary to previous suggestions, that the LCA of all hominoids lived in an environment that favored a gibbon-like size, but a series of selective regime shifts, possibly due to resource availability, led to a decrease and then increase in body mass in early hominins from a chimpanzee-sized LCA. The pattern of body size evolution in hominids can provide insight into historical human ecology. Here, Grabowski and Jungers use comparative phylogenetic analysis to reconstruct the likely size of the ancestor of humans and chimpanzees and the evolutionary history of selection on body size in primates.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied when variances are heterogeneous. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
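The abstract is truncated, but the underlying problem is concrete enough for a small illustration. Below is a hedged sketch for the simplest heterogeneous-variance case, two groups, using the standard normal-approximation formula n1 = (σ1² + σ2²/r)(z(1-α/2) + z(1-β))²/δ² with allocation ratio r = n2/n1; the multi-group Welch ANOVA case the paper addresses is more involved.

    import math
    from scipy.stats import norm

    def welch_two_group_n(delta, sd1, sd2, ratio=1.0, alpha=0.05, power=0.80):
        """Approximate per-group sizes for a two-group Welch test.
        ratio = n2/n1. Returns (n1, n2)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n1 = (sd1**2 + sd2**2 / ratio) * (z / delta) ** 2
        return math.ceil(n1), math.ceil(ratio * n1)

    # Detect a difference of 0.5 when the two groups have SDs of 1 and 2
    print(welch_two_group_n(delta=0.5, sd1=1.0, sd2=2.0))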
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
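A minimal sketch of the Monte Carlo approach the article advocates, here in Python rather than R: pick a data-generating model that reflects the study design, then estimate power at candidate sample sizes by simulation. The slope value and simulation settings are illustrative.

    import numpy as np
    import statsmodels.api as sm

    def mc_regression_power(n, beta=0.3, nsims=2000, alpha=0.05, seed=0):
        """Monte Carlo power for the slope test in a simple linear regression
        with a standardized predictor and unit error variance."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(nsims):
            x = rng.normal(size=n)
            y = beta * x + rng.normal(size=n)
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            hits += fit.pvalues[1] < alpha        # test of the slope
        return hits / nsims

    # Walk up candidate sample sizes until the target power is reached
    for n in (60, 80, 100, 120):
        print(n, mc_regression_power(n))

The same skeleton extends to violated assumptions (skewed errors, heteroscedasticity) simply by changing the data-generating lines, which is the article's point: simulation uses the model you actually expect to hold.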
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic were produced, together with nomograms that eliminate the inconvenience of using a mathematical formula. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
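The kappa paradox mentioned above is easy to make concrete. For two raters with binary ratings and a common marginal prevalence p, chance agreement is p_e = p² + (1-p)², and κ = (p_o − p_e)/(1 − p_e); inverting this shows how much raw agreement the same kappa demands at different prevalences. A small sketch (the equal-margins assumption is mine, for illustration):

    def agreement_from_kappa(kappa, prevalence):
        """Overall proportion of agreement p_o implied by a kappa value for
        two raters, binary ratings, equal marginal prevalence."""
        pe = prevalence**2 + (1 - prevalence)**2   # chance agreement
        return kappa * (1 - pe) + pe

    # The same kappa of 0.6 requires 80% raw agreement at prevalence 0.5
    # but nearly 93% at prevalence 0.9 -- high agreement, modest kappa
    for p in (0.5, 0.9):
        print(p, round(agreement_from_kappa(0.6, p), 3))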
Characterisation of Fine Ash Fractions from the AD 1314 Kaharoa Eruption
NASA Astrophysics Data System (ADS)
Weaver, S. J.; Rust, A.; Carey, R. J.; Houghton, B. F.
2012-12-01
The AD 1314±12 yr Kaharoa eruption of Tarawera volcano, New Zealand, produced deposits exhibiting both plinian and subplinian characteristics (Nairn et al., 2001; 2004, Leonard et al., 2002, Hogg et al., 2003). Their widespread dispersal yielded volumes, column heights, and mass discharge rates of plinian magnitude and intensity (Sahetapy-Engel, 2002); however, vertical shifts in grain size suggest waxing and waning within single phases and time-breaks on the order of hours between phases. These grain size shifts were quantified using sieve, laser diffraction, and image analysis of the fine ash fractions (<1 mm in diameter) of some of the most explosive phases of the eruption. These analyses served two purposes: 1) to characterise the change in eruption intensity over time, and 2) to compare the three methods of grain size analysis. Additional analyses of the proportions of components and particle shape were also conducted to aid in the interpretation of the eruption and transport dynamics. 110 samples from a single location about 6 km from source were sieved at half-phi intervals between -4φ and 4φ (16 mm - 63 μm). A single sample was then chosen to test the range of grain sizes to run through the Mastersizer 2000. Three aliquots were tested; the first consisted of each sieve size fraction ranging between 0φ (1000 μm) and <4φ (<63 μm, i.e. the pan). For example, 0, 0.5, 1, …, 4φ, and the pan were run through the Mastersizer and then their results, weighted according to their sieve weight percents, were summed to produce a total distribution. The second aliquot included 3 samples ranging between 0-2φ (1000-250 μm), 2.5-4φ (249-63 μm), and the pan. A single sample consisting of the total range of grain sizes between 0φ and the pan was used for the final aliquot. Their results were compared, and it was determined that the single sample spanning the broadest range of grain sizes yielded an accurate grain size distribution. These data were then compared with the sieve weight percent data, revealing a significant difference in size characterisation between sieving and the Mastersizer for size fractions between 0-3φ (1000-125 μm). This is due predominantly to the differing methods that sieving and the Mastersizer use to characterise a single particle, to inhomogeneity in grain density in each grain-size fraction, and to grain-shape irregularities. This led the Mastersizer to allocate grains from a certain sieve size fraction into coarser size fractions. Therefore, only the Mastersizer data from 3.5φ and finer were combined with the coarser sieve data to yield total grain size distributions. This high-resolution analysis of the grain size data enabled subtle trends in grain size to be identified and related to short-timescale eruptive processes.
No brain expansion in Australopithecus boisei.
Hawks, John
2011-10-01
The endocranial volumes of robust australopithecine fossils appear to have increased in size over time. Most evidence with temporal resolution is concentrated in East African Australopithecus boisei. Including the KNM-WT 17000 cranium, this sample comprises 11 endocranial volume estimates ranging in date from 2.5 million to 1.4 million years ago. But the sample presents several difficulties to a test of trend, including substantial estimation error for some specimens and an unusually low variance. This study reevaluates the evidence, using randomization methods and a related test using an explicit model of variability. None of these tests applied to the A. boisei endocranial volume sample produces significant evidence for a trend in that species, whether or not the early KNM-WT 17000 specimen is included. Copyright © 2011 Wiley-Liss, Inc.
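The randomization logic described above can be sketched compactly: hold the volumes fixed, shuffle the dates, and ask how often a shuffled correlation is at least as strong as the observed one. The age and volume values below are placeholders for illustration, not the published fossil estimates, and the sketch omits the paper's handling of estimation error and variance.

    import numpy as np

    def randomization_trend_test(ages, volumes, nperm=10000, seed=0):
        """Two-sided randomization test for a temporal trend: shuffle ages to
        build the null distribution of the age-volume correlation."""
        rng = np.random.default_rng(seed)
        obs = np.corrcoef(ages, volumes)[0, 1]
        perm = np.array([np.corrcoef(rng.permutation(ages), volumes)[0, 1]
                         for _ in range(nperm)])
        return obs, float(np.mean(np.abs(perm) >= abs(obs)))

    # Placeholder data: 8 specimens spanning 2.5-1.4 Ma
    ages = np.array([2.5, 2.3, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4])      # Ma
    vols = np.array([410, 500, 510, 475, 500, 530, 520, 545])      # cc
    print(randomization_trend_test(ages, vols))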
Ultrasonic characterization of single drops of liquids
Sinha, Dipen N.
1998-01-01
The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, in microdevice engineering small sample sizes are common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. To date, no related works have efficiently tackled the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a specified sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
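Yuen's test itself is readily available: SciPy (1.7 and later) implements it via the trim argument of scipy.stats.ttest_ind. Below is a hedged sketch of the kind of power simulation used to check a proposed sample size; the effect size, variances, and trimming fraction are illustrative, not the paper's.

    import numpy as np
    from scipy.stats import ttest_ind

    def yuen_power(n1, n2, d=0.8, sd2=2.0, trim=0.2, nsims=2000, alpha=0.05, seed=42):
        """Empirical power of Yuen's trimmed-mean test under unequal
        variances and unequal sample sizes."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(nsims):
            a = rng.normal(0.0, 1.0, n1)
            b = rng.normal(d, sd2, n2)
            # trim=0.2 trims 20% from each tail, i.e. Yuen's test
            rejections += ttest_ind(a, b, equal_var=False, trim=trim).pvalue < alpha
        return rejections / nsims

    print(yuen_power(30, 40))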
Heinz, Marlen; Zak, Dominik
2018-03-01
This study aimed to evaluate the effects of freezing and of cold storage at 4 °C on bulk dissolved organic carbon (DOC) and dissolved organic nitrogen (DON) concentrations and on fractions determined with size exclusion chromatography (SEC), as well as on spectral properties of dissolved organic matter (DOM) analyzed with fluorescence spectroscopy. In order to account for differences in DOM composition and source, we analyzed storage effects for three different sample types: a lake water sample representing freshwater DOM, a leaf litter leachate of Phragmites australis representing a terrestrial, 'fresh' DOM source, and peatland porewater samples. According to our findings, one week of cold storage can bias DOC and DON determination. Overall, the determination of DOC and DON concentrations with SEC analysis was, for all three sample types, little susceptible to alterations due to freezing. The findings derived for the sampling locations investigated here may not apply to other sampling locations and/or sample types. However, DOC size fractions and DON concentrations of formerly frozen samples should be interpreted with caution when sample concentrations are high. Alterations of some optical properties (HIX and SUVA254) due to freezing were evident, and we therefore recommend immediate analysis of samples for spectral analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.
2016-03-01
Single-particle compositional analysis of filter samples collected on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size-segregated particle compositions and size distributions, and these were compared to corresponding data from wing-mounted optical particle counters. Reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYbrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.
Extreme Mean and Its Applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.
1979-01-01
Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of a p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in application.
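For intuition, if the truncation keeps the upper (1 − p) tail of a normal distribution (my reading of the p-th probability truncation; the report defines the details), the population extreme mean has the closed form E[X | X > x_p] = μ + σ·φ(z_p)/(1 − p), where φ is the standard normal density and z_p the p-th quantile:

    from scipy.stats import norm

    def extreme_mean(mu, sigma, p):
        """Mean of a normal(mu, sigma) distribution truncated to its upper
        (1 - p) tail: mu + sigma * phi(z_p) / (1 - p)."""
        zp = norm.ppf(p)
        return mu + sigma * norm.pdf(zp) / (1 - p)

    # Mean of the top 5% of a standard normal: about 2.063
    print(extreme_mean(0.0, 1.0, 0.95))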
Dental size variation in the Atapuerca-SH Middle Pleistocene hominids.
Bermúdez de Castro, J M; Sarmiento, S; Cunha, E; Rosas, A; Bastir, M
2001-09-01
The Middle Pleistocene Atapuerca-Sima de los Huesos (SH) site in Spain has yielded the largest sample of fossil hominids so far found from a single site and belonging to the same biological population. The SH dental sample includes a total of 452 permanent and deciduous teeth, representing a minimum of 27 individuals. We present a study of the dental size variation in these hominids, based on the analysis of the mandibular permanent dentition: lateral incisors, n=29; canines, n=27; third premolars, n=30; fourth premolars, n=34; first molars, n=38; second molars, n=38. We have obtained the buccolingual diameter and the crown area (measured on occlusal photographs) of these teeth, and used the bootstrap method to assess the amount of variation in the SH sample compared with the variation of a modern human sample from the Museu Antropologico of the Universidade of Coimbra (Portugal). The SH hominids have, in general terms, a dental size variation higher than that of the modern human sample. The analysis is especially conclusive for the canines. Furthermore, we have estimated the degree of sexual dimorphism of the SH sample by obtaining male and female dental subsamples by means of sexing the large sample of SH mandibular specimens. We obtained the index of sexual dimorphism (ISD=male mean/female mean) and the values were compared with those obtained from the sexed modern human sample from Coimbra, and with data found in the literature concerning several recent human populations. In all tooth classes the ISD of the SH hominids was higher than that of modern humans, but the differences were generally modest, except for the canines, thus suggesting that canine size sexual dimorphism in Homo heidelbergensis was probably greater than that of modern humans. Since the approach of sexing fossil specimens has some obvious limitations, these results should be assessed with caution. Additional data from SH and other European Middle Pleistocene sites would be necessary to test this hypothesis. Copyright 2001 Academic Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature increased the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ increased the textural properties of ZIF-8 samples.
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
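For readers outside psychometrics, the three-parameter logistic model referenced above gives the probability of a correct item response as P(θ) = c + (1 − c)/(1 + exp(−a(θ − b))), with discrimination a, difficulty b, and guessing parameter c; calibration estimates (a, b, c) for each item, and the study asks how calibration sample size propagates into ability (θ) estimates. A small sketch of the response function (parameter values illustrative; some conventions insert a 1.7 scaling constant):

    import numpy as np

    def p_3pl(theta, a, b, c):
        """3PL probability of a correct response at ability theta."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # A discriminating item (a=1.5) of moderate difficulty (b=0), guessing c=0.2
    for theta in (-2.0, 0.0, 2.0):
        print(theta, round(p_3pl(theta, a=1.5, b=0.0, c=0.2), 3))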
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Philip A. Candela,; Wenlu Zhu,; Alan J. Kaufman,
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO₂) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO₂ can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the low size ranges of the extracted shale particles. Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r² = 0.78); and 2) both yield and surface area increase with increasing TOC content (r² = 0.97 and 0.86, respectively). Given that supercritical CO₂ is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
Neurocognitive performance in family-based and case-control studies of schizophrenia.
Gur, Ruben C; Braff, David L; Calkins, Monica E; Dobie, Dorcas J; Freedman, Robert; Green, Michael F; Greenwood, Tiffany A; Lazzeroni, Laura C; Light, Gregory A; Nuechterlein, Keith H; Olincy, Ann; Radant, Allen D; Seidman, Larry J; Siever, Larry J; Silverman, Jeremy M; Sprock, Joyce; Stone, William S; Sugar, Catherine A; Swerdlow, Neal R; Tsuang, Debby W; Tsuang, Ming T; Turetsky, Bruce I; Gur, Raquel E
2015-04-01
Neurocognitive deficits in schizophrenia (SZ) are established and the Consortium on the Genetics of Schizophrenia (COGS) investigated such measures as endophenotypes in family-based (COGS-1) and case-control (COGS-2) studies. By requiring family participation, family-based sampling may result in samples that vary demographically and perform better on neurocognitive measures. The Penn computerized neurocognitive battery (CNB) evaluates accuracy and speed of performance for several domains and was administered across sites in COGS-1 and COGS-2. Most tests were included in both studies. COGS-1 included 328 patients with SZ and 497 healthy comparison subjects (HCS) and COGS-2 included 1195 patients and 1009 HCS. Demographically, COGS-1 participants were younger, more educated, with more educated parents and higher estimated IQ compared to COGS-2 participants. After controlling for demographics, the two samples produced very similar performance profiles compared to their respective controls. As expected, performance was better and with smaller effect sizes compared to controls in COGS-1 relative to COGS-2. Better performance was most pronounced for spatial processing while emotion identification had large effect sizes for both accuracy and speed in both samples. Performance was positively correlated with functioning and negatively with negative and positive symptoms in both samples, but correlations were attenuated in COGS-2, especially with positive symptoms. Patients ascertained through family-based design have more favorable demographics and better performance on some neurocognitive domains. Thus, studies that use case-control ascertainment may tap into populations with more severe forms of illness that are exposed to less favorable factors compared to those ascertained with family-based designs.
NASA Astrophysics Data System (ADS)
Caponi, Lorenzo; Formenti, Paola; Massabó, Dario; Di Biagio, Claudia; Cazaunau, Mathieu; Pangui, Edouard; Chevaillier, Servanne; Landrot, Gautier; Andreae, Meinrat O.; Kandler, Konrad; Piketh, Stuart; Saeed, Thuraya; Seibert, Dave; Williams, Earle; Balkanski, Yves; Prati, Paolo; Doussin, Jean-François
2017-06-01
This paper presents new laboratory measurements of the mass absorption efficiency (MAE) between 375 and 850 nm for 12 individual samples of mineral dust from different source areas worldwide and in two size classes: PM10.6 (mass fraction of particles of aerodynamic diameter lower than 10.6 µm) and PM2.5 (mass fraction of particles of aerodynamic diameter lower than 2.5 µm). The experiments were performed in the CESAM simulation chamber using mineral dust generated from natural parent soils and included optical and gravimetric analyses. The results show that the MAE values are lower for the PM10.6 mass fraction (range 37-135 × 10⁻³ m² g⁻¹ at 375 nm) than for the PM2.5 (range 95-711 × 10⁻³ m² g⁻¹ at 375 nm) and decrease with increasing wavelength as λ^(-AAE), where the Ångström absorption exponent (AAE) averages between 3.3 and 3.5, regardless of size. The size independence of AAE suggests that, for a given size distribution, the dust composition did not vary with size for this set of samples. Because of its high atmospheric concentration, light absorption by mineral dust can be competitive with black and brown carbon even during atmospheric transport over heavily polluted regions, when dust concentrations are significantly lower than at emission. The AAE values of mineral dust are higher than for black carbon (~1) but in the same range as light-absorbing organic (brown) carbon. As a result, depending on the environment, there can be some ambiguity in apportioning the aerosol absorption optical depth (AAOD) based on spectral dependence, which is relevant to the development of remote sensing of light-absorbing aerosols and their assimilation in climate models. We suggest that the sample-to-sample variability in our dataset of MAE values is related to regional differences in the mineralogical composition of the parent soils. Particularly in the PM2.5 fraction, we found a strong linear correlation between the dust light-absorption properties and elemental iron rather than the iron oxide fraction, which could ease the application and the validation of climate models that now start to include the representation of the dust composition, as well as for remote sensing of dust absorption in the UV-vis spectral region.
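The spectral law quoted above, MAE(λ) = MAE(λ₀)·(λ/λ₀)^(−AAE), means the AAE can be recovered from MAE measured at two wavelengths. A quick check with hypothetical values chosen to be consistent with an AAE near 3.4 (not the paper's actual numbers):

    import math

    def angstrom_exponent(mae1, lam1, mae2, lam2):
        """AAE from mass absorption efficiencies at two wavelengths,
        assuming MAE(lambda) proportional to lambda**(-AAE)."""
        return -math.log(mae1 / mae2) / math.log(lam1 / lam2)

    # Hypothetical MAEs in m2/g at 375 and 850 nm
    print(angstrom_exponent(0.100, 375, 0.0062, 850))   # ~3.4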
NASA Astrophysics Data System (ADS)
Muller, Wayne; Scheuermann, Alexander
2016-04-01
Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges using conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This research led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated by a series of experiments separating antennas and measuring samples of nylon and water. Permittivity measurements of samples of nylon and water approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.
Reno, Philip L; Lovejoy, C Owen
2015-01-01
Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or "Lucy" and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that A. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or "Kadanuumuu") can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen's contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
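A hedged sketch of the 'basic' two-stage scheme described above: score half of the Welfare Quality sample, stop if the interim lameness prevalence is decisively above or below the pass/fail cutoff, and otherwise score the second half. The cutoff, decision margin, and herd prevalence are hypothetical; the paper's actual decision rules differ in detail.

    import numpy as np

    rng = np.random.default_rng(7)

    def two_stage_assessment(herd_prev, n_full, cutoff=0.15, margin=0.05):
        """Returns (failed, cows_sampled) for a simulated two-stage scheme."""
        n1 = n_full // 2
        lame1 = rng.binomial(n1, herd_prev)
        p1 = lame1 / n1
        if abs(p1 - cutoff) > margin:                  # decisive: stop early
            return p1 > cutoff, n1
        lame2 = rng.binomial(n_full - n1, herd_prev)   # otherwise sample the rest
        return (lame1 + lame2) / n_full > cutoff, n_full

    # Average sample size across 10,000 simulated farms with 20% lameness
    sims = [two_stage_assessment(0.20, n_full=60) for _ in range(10000)]
    print(np.mean([n for _, n in sims]))    # well below the fixed size of 60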
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies, reflecting the uncertainty in our study ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
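The mapping from an estimated effect size (and its CI limits) to a post hoc sample size can be reproduced with standard power machinery: for a one-sample t-test at 80% power and α = 0.05, an ÊS around 0.62 gives roughly the 22 patients reported, and the CI limits propagate the same way. The specific ES values below are illustrative back-calculations, not figures from the paper.

    from statsmodels.stats.power import TTestPower

    def n_for_effect(es, alpha=0.05, power=0.80):
        """Patients needed for a one-sample t-test to detect effect size es."""
        return TTestPower().solve_power(effect_size=es, alpha=alpha, power=power)

    # Uncertainty in the effect size becomes uncertainty in the sample size
    for es in (0.62, 0.30):            # illustrative point estimate and lower bound
        print(es, round(n_for_effect(es)))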
Nonsyndromic cleft palate: An association study at GWAS candidate loci in a multiethnic sample.
Ishorst, Nina; Francheschelli, Paola; Böhmer, Anne C; Khan, Mohammad Faisal J; Heilmann-Heimbach, Stefanie; Fricker, Nadine; Little, Julian; Steegers-Theunissen, Regine P M; Peterlin, Borut; Nowak, Stefanie; Martini, Markus; Kruse, Teresa; Dunsche, Anton; Kreusch, Thomas; Gölz, Lina; Aldhorae, Khalid; Halboub, Esam; Reutter, Heiko; Mossey, Peter; Nöthen, Markus M; Rubini, Michele; Ludwig, Kerstin U; Knapp, Michael; Mangold, Elisabeth
2018-06-01
Nonsyndromic cleft palate only (nsCPO) is a common and multifactorial form of orofacial clefting. In contrast to successes achieved for the other common form of orofacial clefting, that is, nonsyndromic cleft lip with/without cleft palate (nsCL/P), genome wide association studies (GWAS) of nsCPO have identified only one genome wide significant locus. The aim of the present study was to investigate whether common variants contribute to nsCPO and, if so, to identify novel risk loci. We genotyped 33 SNPs at 27 candidate loci from 2 previously published nsCPO GWAS in an independent multiethnic sample. It included: (i) a family-based sample of European ancestry (n = 212); and (ii) two case/control samples of Central European (n = 94/339) and Arabian ancestry (n = 38/231), respectively. A separate association analysis was performed for each genotyped dataset, and meta-analyses were performed. After association analysis and meta-analyses, none of the 33 SNPs showed genome-wide significance. Two variants showed nominally significant association in the imputed GWAS dataset and exhibited a further decrease in p-value in a European and an overall meta-analysis including imputed GWAS data, respectively (rs395572: P_MetaEU = 3.16 × 10⁻⁴; rs6809420: P_MetaAll = 2.80 × 10⁻⁴). Our findings suggest that there is a limited contribution of common variants to nsCPO. However, the individual effect sizes might be too small for detection of further associations at the present sample sizes. Rare variants may play a more substantial role in nsCPO than in nsCL/P, for which GWAS of smaller sample sizes have identified genome-wide significant loci. Whole-exome/genome sequencing studies of nsCPO are now warranted. © 2018 Wiley Periodicals, Inc.
Gilbert, Andrew T.; O'Connell, Allan F.; Annand, Elizabeth M.; Talancy, Neil W.; Sauer, John R.; Nichols, James D.
2008-01-01
An inventory of mammals was conducted during 2004 at nine national park sites in the Northeast Temperate Network (NETN): Acadia National Park (NP), Marsh-Billings-Rockefeller National Historical Park (NHP), Minute Man NHP, Morristown NHP, Roosevelt-Vanderbilt National Historic Site (NHS), Saint-Gaudens NHS, Saugus Iron Works NHS, Saratoga NHP, and Weir Farm NHS. Sagamore Hill NHS, part of the Northeast Coastal and Barrier Network (NCBN), was also surveyed. Each park except Acadia NP was sampled twice, once in the winter/spring and again in the summer/fall. During the winter/spring visit, indirect measure (IM) sampling arrays were employed at 2 to 16 stations and included sampling by remote cameras, cubby boxes (covered trackplates), and hair traps. IM stations were established and re-used during the summer/fall sampling period. Trapping was conducted at 2 to 12 stations at all parks except Acadia NP during the summer/fall period and consisted of arrays of small-mammal traps, squirrel-sized live traps, and some fox-sized live traps. We used estimation-based procedures and probabilistic sampling techniques to design this inventory. A total of 38 species was detected by IM sampling, trapping, and field observations. Species diversity (number of species) varied among parks, ranging from 8 to 24, with Minute Man NHP having the most species detected. Raccoon (Procyon lotor), Virginia Opossum (Didelphis virginiana), Fisher (Martes pennanti), and Domestic Cat (Felis silvestris) were the most common medium-sized mammals detected in this study and White-footed Mouse (Peromyscus leucopus), Northern Short-tailed Shrew (Blarina brevicauda), Deer Mouse (P. maniculatus), and Meadow Vole (Microtus pennsylvanicus) the most common small mammals detected. All species detected are considered fairly common throughout their range including the Fisher, which has been reintroduced in several New England states. We did not detect any state or federal endangered or threatened species.
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
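The Monte Carlo logic used above translates directly into code: treat the full census of measured trees as the population, repeatedly subsample k trees, and watch how the sampling error of the stand mean shrinks and then plateaus, which is what defines an optimal sample size. The sap flux values below are synthetic stand-ins for the 58-tree census.

    import numpy as np

    rng = np.random.default_rng(3)
    fd_census = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # synthetic Fd, 58 trees

    def subsample_cv(values, k, nsims=5000):
        """Coefficient of variation of the mean stand sap flux when only k
        trees are sampled, by Monte Carlo resampling without replacement."""
        means = [rng.choice(values, size=k, replace=False).mean()
                 for _ in range(nsims)]
        return np.std(means) / values.mean()

    # Error falls quickly up to ~15 trees, then flattens (diminishing returns)
    for k in (5, 10, 15, 20, 30):
        print(k, round(float(subsample_cv(fd_census, k)), 3))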
Michaud, S; Levasseur, M; Doucette, G; Cantin, G
2002-10-01
We determined the seasonal distribution of paralytic shellfish toxins (PSTs) and PST producing bacteria in > 15, 5-15, and 0.22-5 microm size fractions in the St Lawrence. We also measured PSTs in a local population of Mytilus edulis. PST concentrations were determined in each size fraction and in laboratory incubations of sub-samples by high performance liquid chromatography (HPLC), including the rigorous elimination of suspected toxin 'imposter' peaks. Mussel toxin levels were determined by mouse bioassay and HPLC. PSTs were detected in all size fractions during the summer sampling season, with 47% of the water column toxin levels associated with particles smaller than Alexandrium tamarense (< 15 microm). Even in the > 15 microm size fraction, we estimated that as much as 92% of PSTs could be associated with particles other than A. tamarense. Our results stress the importance of taking into account the potential presence of PSTs in size fractions other than that containing the known algal producer when attempting to model shellfish intoxication, especially during years of low cell abundance. Finally, our HPLC results confirmed the presence of bacteria capable of autonomous PST production in the St Lawrence as well as demonstrating their regular presence and apparent diversity in the plankton. Copyright 2002 Elsevier Science Ltd.
ERIC Educational Resources Information Center
Peng, Peng; Namkung, Jessica; Barnes, Marcia; Sun, Congying
2016-01-01
The purpose of this meta-analysis was to determine the relation between mathematics and working memory (WM) and to identify possible moderators of this relation including domains of WM, types of mathematics skills, and sample type. A meta-analysis of 110 studies with 829 effect sizes found a significant medium correlation of mathematics and WM, r…
Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.
NASA Astrophysics Data System (ADS)
Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat
2017-04-01
Electron backscatter diffraction (EBSD) has extended significantly our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature, and it is important to assess that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus extending EBSD analysis to larger sample sizes to include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system, which limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30 mm for the Otago cryo-EBSD facility), by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100 mm perpendicular to the tilt axis by 150 mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50 mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm then they will start to sublime and the quality of EBSD data will reduce. Large samples need to be relatively thin (5 mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility samples of up to 40 mm by 40 mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60 mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.
Johnston, Lisa G; Hakim, Avi J; Dittrich, Samantha; Burnett, Janet; Kim, Evelyn; White, Richard G
2016-08-01
Reporting key details of respondent-driven sampling (RDS) survey implementation and analysis is essential for assessing the quality of RDS surveys. RDS is both a recruitment and analytic method and, as such, it is important to adequately describe both aspects in publications. We extracted data from peer-reviewed literature published through September 2013 that reported collecting biological specimens using RDS. We identified 151 eligible peer-reviewed articles describing 222 surveys conducted in seven regions throughout the world. Most published surveys reported basic implementation information such as survey city, country, year, population sampled, interview method, and final sample size. However, many surveys did not report essential methodological and analytical information for assessing RDS survey quality, including number of recruitment sites, seeds at start and end, maximum number of waves, and whether data were adjusted for network size. Understanding the quality of data collection and analysis in RDS is useful for effectively planning public health service delivery and funding priorities.
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna
2014-02-01
A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool which would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which otherwise destroys the packing of chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) aqueous solutions of fibroin improved the quality of the chromatograms.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
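The pseudo-F at the heart of PERMANOVA is simple enough to sketch directly, which also shows how a simulation-based power estimate works: simulate community data under a group effect, build a distance matrix, and count rejections. This hedged sketch uses Euclidean distances on synthetic data and a manual one-factor pseudo-F, not the authors' R package; all parameters are illustrative.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def permanova_p(dmat, groups, nperm=199, seed=0):
        """One-factor PERMANOVA p-value from a square distance matrix."""
        rng = np.random.default_rng(seed)
        groups = np.asarray(groups)
        n = len(groups)
        d2 = dmat ** 2

        def pseudo_f(g):
            ss_total = d2[np.triu_indices(n, 1)].sum() / n
            ss_within = 0.0
            for level in np.unique(g):
                idx = np.where(g == level)[0]
                ss_within += d2[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)].sum() / len(idx)
            a = len(np.unique(g))
            return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

        f_obs = pseudo_f(groups)
        exceed = sum(pseudo_f(rng.permutation(groups)) >= f_obs for _ in range(nperm))
        return (1 + exceed) / (nperm + 1)

    def permanova_power(n_per_group, shift=0.8, dims=20, nsims=100, seed=1):
        """Fraction of simulated studies in which PERMANOVA rejects at 0.05."""
        rng = np.random.default_rng(seed)
        groups = [0] * n_per_group + [1] * n_per_group
        hits = 0
        for _ in range(nsims):
            x = np.vstack([rng.normal(0.0, 1.0, (n_per_group, dims)),
                           rng.normal(shift / dims**0.5, 1.0, (n_per_group, dims))])
            hits += permanova_p(squareform(pdist(x)), groups) < 0.05
        return hits / nsims

    print(permanova_power(15))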
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institutes of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%; 10% loss to follow-up; common standard deviation of 20 mL/min). The proportion increased to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
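The 562-patient figure can be reproduced, up to rounding conventions, with the standard two-sample normal-approximation formula under the stated assumptions (two-sided α = 0.05, 80% power, SD 20 mL/min, 10% loss to follow-up). A sketch in Python:

```python
import numpy as np
from scipy.stats import norm

def total_n(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    """Total N for a two-arm parallel trial to detect a mean GFR difference
    of `delta` mL/min, inflated for loss to follow-up."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    per_arm = 2 * (z * sd / delta) ** 2            # complete-case n per arm
    return int(np.ceil(2 * per_arm / (1 - loss)))

for d in (5.0, 7.5, 10.0):
    print(d, total_n(d))   # 559, 249, 140 -- cf. the 562 quoted for 5 mL/min
```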
Duan, Jiankuan; Hu, Bin; He, Man
2012-10-01
In this paper, a new method of nanometer-sized alumina packed microcolumn SPE combined with field-amplified sample stacking (FASS)-CE-UV detection was developed for the speciation analysis of inorganic selenium in environmental water samples. Self-synthesized nanometer-sized alumina was packed in a microcolumn as the SPE adsorbent to retain Se(IV) and Se(VI) simultaneously at pH 6, and the retained inorganic selenium was eluted with concentrated ammonia. The eluent was used for FASS-CE-UV analysis after NH₃ evaporation. The factors affecting the preconcentration of both Se(IV) and Se(VI) by SPE and FASS were studied and the optimal CE separation conditions for Se(IV) and Se(VI) were obtained. Under the optimal conditions, LODs of 57 ng L⁻¹ for Se(IV) and 71 ng L⁻¹ for Se(VI) were obtained. The developed method was validated by analysis of the certified reference material GBW(E)080395 (environmental water), and the determined value was in good agreement with the certified value. It was also successfully applied to the speciation analysis of inorganic selenium in environmental water samples, including Yangtze River water, spring water, and tap water. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Miao, Chengcheng; Zhu, Yanjuan; Huang, Liangguo; Zhao, Tengqi
2015-01-01
The multi-element doped alpha nickel hydroxide has been prepared by a supersonic co-precipitation method. Three kinds of samples, A, B and C, were prepared by chemically coprecipitating Ni/Al, Ni/Al/Mn and Ni/Al/Mn/Yb, respectively. Inductively coupled plasma atomic emission spectroscopy (ICP-AES), particle size distribution (PSD) measurement, X-ray diffraction (XRD), scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR) were used to characterize the physical properties of the synthesized α-Ni(OH)₂ samples, such as chemical composition, morphology, and structural stability of the crystal. The results show that all samples are nano-sized materials, and that the interlayer spacing becomes larger and the structural stability better with an increasing number of doped elements and doping ratio. The prepared alpha nickel hydroxide samples were added into micro-sized beta nickel hydroxide to form biphase electrode materials for Ni-MH batteries. The electrochemical characterization of the biphase electrodes, including cyclic voltammetry (CV) and charge/discharge tests, was also performed. The results demonstrate that the biphase electrode with sample C exhibits better electrochemical reversibility and cyclic stability, higher charge efficiency and discharge potential, and a larger proton diffusion coefficient (5.81 × 10⁻¹² cm² s⁻¹) and discharge capacity (309.0 mAh g⁻¹). Hence, all doped elements produce a synergistic effect and further improve the electrochemical properties of the alpha nickel hydroxide.
The Relationship between Organizational Culture Types and Innovation in Aerospace Companies
NASA Astrophysics Data System (ADS)
Nelson, Adaora N.
Innovation in the aerospace industry has proven to be an effective strategy for competitiveness and sustainability. The organizational culture of the firm must be conducive to innovation. The problem was that although innovation is needed for aerospace companies to be competitive and sustainable, certain organizational culture issues might hinder leaders from successfully innovating (Emery, 2010; Ramanigopal, 2012). The purpose of this study was to assess the relationship of hierarchical, clan, adhocracy and market organizational culture types with innovation in aerospace companies within the U.S. while controlling for company size and length of time in business. The non-experimental quantitative study included a random sample of 136 aerospace leaders in the U.S. There was a significant relationship between market organizational culture and innovation, F(1, 132) = 4.559, p = .035. No significant relationships were found between hierarchical organizational culture and innovation or between clan culture and innovation. The relationship between adhocracy culture and innovation was not significant, possibly due to inadequate sample size. Company size was shown to be a justifiable covariate in the study, due to a significant relationship with innovation (F(1, 130) = 4.66, p < .1, r = .19). Length of time in business had no relationship with innovation. The findings imply that market organizational cultures are more likely to result in innovative outcomes in the aerospace industry. Organizational leaders are encouraged to adopt a market culture and adopt smaller organizational structures. Recommendations for further research include investigating the relationship between adhocracy culture and innovation using an adequate sample size. Research is needed to determine other variables that predict innovation. This study should be repeated at periodic intervals and across other industrial sectors and countries.
Batista, Cristiane B; Carvalho, Márcia L de; Vasconcelos, Ana Glória G
To analyze the factors associated with neonatal mortality related to health services accessibility and use. Case-control study of live births in 2008 in small- and medium-sized municipalities in the North, Northeast, and Vale do Jequitinhonha regions, Brazil. A probabilistic sample stratified by region, population size, and information adequacy was generated for the choice of municipalities. Of these, all municipalities with 20,000 inhabitants or fewer were included in the study (36 municipalities), whereas the remainder were selected with probability proportional to population size, totaling 20 cities with 20,001-50,000 inhabitants and 19 municipalities with 50,001-200,000 inhabitants. All deaths of live births in these cities were included. Controls were randomly sampled at four times the number of cases. The sample comprised 412 cases and 1772 controls. Hierarchical multiple logistic regression was used for data analysis. The risk factors for neonatal death were socioeconomic class D and E (OR=1.28), history of child death (OR=1.74), high-risk pregnancy (OR=4.03), antepartum peregrination between facilities (OR=1.46), lack of prenatal care (OR=2.81), absence of a professional for the monitoring of labor (OR=3.34), excessive waiting time for delivery (OR=1.97), borderline preterm birth (OR=4.09) and malformation (OR=13.66). These results suggest multiple causes of neonatal mortality, as well as the need to improve access to good quality maternal-child health care services in the assessed places of study. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
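The underlying simulation can be sketched as follows. This is a simplified stand-in for the paper's calculator: it assumes a known change point and a known seroreversion rate, simulates cross-sectional serostatus from the reverse catalytic model, and estimates power as the rejection rate of a likelihood ratio test of stable versus reduced SCR; the paper itself fits logistic approximations to such power curves. All numerical values (RHO, lam1, lam2, tau) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

RHO = 0.01   # assumed known seroreversion rate (illustrative)

def seroprev(ages, lam1, lam2, tau):
    """Reverse catalytic seroprevalence when the SCR dropped lam1 -> lam2
    at a change point tau years before the survey."""
    ages = np.asarray(ages, float)
    p_eq2 = lam2 / (lam2 + RHO)
    young = p_eq2 * (1 - np.exp(-(lam2 + RHO) * ages))
    p_old = lam1 / (lam1 + RHO) * (1 - np.exp(-(lam1 + RHO) * (ages - tau)))
    old = p_eq2 + (p_old - p_eq2) * np.exp(-(lam2 + RHO) * tau)
    return np.where(ages <= tau, young, old)

def nll(theta, ages, pos, tau, stable):
    lam1, lam2 = (theta[0], theta[0]) if stable else (theta[0], theta[1])
    p = np.clip(seroprev(ages, lam1, lam2, tau), 1e-9, 1 - 1e-9)
    return -(pos * np.log(p) + (1 - pos) * np.log(1 - p)).sum()

def power(n, lam1=0.10, lam2=0.05, tau=5.0, n_sim=200, alpha=0.05):
    rng = np.random.default_rng(0)
    rejections = 0
    for _ in range(n_sim):
        ages = rng.uniform(1, 60, n)
        pos = rng.binomial(1, seroprev(ages, lam1, lam2, tau))
        f0 = minimize(nll, [0.05], args=(ages, pos, tau, True),
                      bounds=[(1e-6, 5.0)])
        f1 = minimize(nll, [0.05, 0.05], args=(ages, pos, tau, False),
                      bounds=[(1e-6, 5.0)] * 2)
        lrt = 2 * (f0.fun - f1.fun)
        rejections += chi2.sf(max(lrt, 0.0), df=1) < alpha
    return rejections / n_sim

print(power(250), power(1000))   # subtler reductions need larger n to detect
```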
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
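The subsampling experiment is easy to emulate. The sketch below, with simulated rather than real Alligator skull measurements, repeatedly subsamples a dataset with known positive allometry and records how often a log-log regression rejects isometry (slope = 1); the rejection rate falling with subsample size illustrates the Type II error problem described above.

```python
import numpy as np
from scipy import stats

def allometry_power(x, y, n_sub, n_rep=1000, alpha=0.05, seed=0):
    """Prob. of rejecting isometry (log-log slope = 1) at subsample size n_sub."""
    rng = np.random.default_rng(seed)
    lx, ly = np.log(x), np.log(y)
    rejections = 0
    for _ in range(n_rep):
        idx = rng.choice(lx.size, size=n_sub, replace=False)
        fit = stats.linregress(lx[idx], ly[idx])
        t = (fit.slope - 1.0) / fit.stderr            # test H0: slope == 1
        rejections += 2 * stats.t.sf(abs(t), df=n_sub - 2) < alpha
    return rejections / n_rep

# simulated "skull" data with true positive allometry (slope 1.1)
rng = np.random.default_rng(1)
x = rng.uniform(5.0, 50.0, 300)                       # reference dimension
y = np.exp(1.1 * np.log(x) + rng.normal(0.0, 0.1, 300))
for n in (10, 25, 100):
    print(n, allometry_power(x, y, n))
```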
Suspended sediment transport under estuarine tidal channel conditions
Sternberg, R.W.; Kranck, K.; Cacchione, D.A.; Drake, D.E.
1988-01-01
A modified version of the GEOPROBE tripod has been used to monitor flow conditions and suspended sediment distribution in the bottom boundary layer of a tidal channel within San Francisco Bay, California. Measurements were made every 15 minutes over three successive tidal cycles. They included mean velocity profiles from four electromagnetic current meters within 1 m of the seabed; mean suspended sediment concentration profiles from seven miniature nephelometers operated within 1 m of the seabed; near-bottom pressure fluctuations; vertical temperature gradient; and bottom photographs. Additionally, suspended sediment was sampled from four levels within 1 m of the seabed three times during each successive flood and ebb cycle. While the instrument was deployed, STD-nephelometer measurements were made throughout the water column, water samples were collected every 1-2 hours, and bottom sediment was sampled at the deployment site. From these measurements, estimates were made of particle settling velocity (ws) from size distributions of the suspended sediment and friction velocity (U*) from the velocity profiles, and the reference concentration (Ca) was measured at z = 20 cm. These parameters were used in the suspended sediment distribution equations to evaluate their ability to predict the observed suspended sediment profiles. Three suspended sediment particle conditions were evaluated: (1) individual particle size in the 4-11 φ (62.5-0.5 μm) range with the reference concentration Ca at z = 20 cm; (2) individual particle size in the 4-6 φ range and flocs representing the 7-11 φ range with the reference concentration Ca at z = 20 cm (Cf); and (3) individual particle size in the 4-6 φ range and flocs representing the 7-11 φ range with the reference concentration predicted as a function of the bed sediment size distribution and the square of the excess shear stress. In addition, computations of particle flux were made in order to show vertical variations in horizontal mass flux for varying flow conditions. © 1988.
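The suspended sediment distribution equation referred to here is commonly written in Rouse form, with the profile shape controlled by the ratio ws/(κU*); the abstract does not spell out the exact formulation, so the sketch below, with illustrative parameter values, shows the standard Rouse prediction of concentration at height z from a reference concentration Ca at za = 20 cm.

```python
import numpy as np

KAPPA = 0.41   # von Karman constant

def rouse_profile(z, ws, u_star, h, Ca, za=0.20):
    """Rouse suspended-sediment concentration at height z above the bed,
    given the reference concentration Ca measured at z = za (metres)."""
    P = ws / (KAPPA * u_star)                          # Rouse number
    return Ca * (((h - z) / z) * (za / (h - za))) ** P

# illustrative values: fine silt (ws ~ 0.3 mm/s) in a 10 m deep channel
for z in (0.5, 1.0, 2.0):
    print(z, rouse_profile(z, ws=3e-4, u_star=0.02, h=10.0, Ca=100.0))
```

A small Rouse number, as here, predicts a nearly uniform profile; coarser particles (larger ws) concentrate toward the bed.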
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
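The estimation problem can be illustrated with a simpler frequentist stand-in for the authors' Markov chain Monte Carlo method: maximum likelihood fitting of a lognormal concentration distribution in which screening-negative samples contribute left-censored terms at the limit of detection. The parameter values, LOD, and units below are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_censored_lognormal(enumerated, n_negative, lod):
    """ML fit of a lognormal concentration distribution where screening
    negatives are treated as left-censored at the limit of detection."""
    loge = np.log(enumerated)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)                      # keeps sigma > 0
        ll = norm.logpdf(loge, mu, sigma).sum()
        ll += n_negative * norm.logcdf(np.log(lod), mu, sigma)
        return -ll

    res = minimize(nll, x0=[loge.mean(), 0.0])
    return res.x[0], np.exp(res.x[1])                  # mu, sigma

# ~5% enumerated positives out of 600 samples (values illustrative)
rng = np.random.default_rng(0)
true_conc = rng.lognormal(mean=-3.2, sigma=1.5, size=600)
positives = true_conc[true_conc > 0.5]
mu, sigma = fit_censored_lognormal(positives, 600 - positives.size, lod=0.5)
print(positives.size, round(mu, 2), round(sigma, 2))   # cf. true -3.2, 1.5
```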
Pinto, Colin A; Saripella, Kalyan K; Loka, Nikhil C; Neau, Steven H
2018-04-01
Certain issues with the use of particles of chitosan (Ch) cross-linked with tripolyphosphate (TPP) in sustained release formulations include inefficient drug loading, burst drug release, and incomplete drug release. Acetaminophen was added to Ch:TPP particles to test for advantages of extragranular drug addition over drug addition during cross-linking. The influences of Ch concentration, Ch:TPP ratio, temperature, ionic strength, and pH were assessed. Design of experiments allowed identification of factors and 2-factor interactions that have significant effects on average particle size and size distribution, yield, zeta potential, and true density of the particles, as well as drug release from the directly compressed tablets. Statistical model equations directed production of a control batch that minimized span, maximized yield, and targeted a t50 of 90 min (sample A); sample B, which differed by targeting a t50 of 240-300 min to provide sustained release; and sample C, which differed from sample B by maximizing span. Sample B maximized yield and provided its targeted t50 and the smallest average particle size, with the higher zeta potential and the lower span of samples B and C. Extragranular addition of a drug to Ch:TPP particles achieved 100% drug loading, eliminated burst drug release, and can accomplish complete drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Near real time vapor detection and enhancement using aerosol adsorption
Novick, Vincent J.; Johnson, Stanley A.
1999-01-01
A vapor sample detection method where the vapor sample contains vapor and ambient air and surrounding natural background particles. The vapor sample detection method includes the steps of generating a supply of aerosol particles that have a particular effective median particle size, mixing the aerosol with the vapor sample to form aerosol and adsorbed vapor suspended in an air stream, impacting the suspended aerosol and adsorbed vapor upon a reflecting element, alternately directing infrared light to the impacted aerosol and adsorbed vapor, detecting and analyzing the alternately directed infrared light in essentially real time using a spectrometer and a microcomputer, and identifying the vapor sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardiner, W.W.; Barrows, E.S.; Antrim, L.D
Buttermilk Channel was one of seven waterways that were sampled and evaluated for dredging and sediment disposal. Sediment samples were collected and analyses were conducted on sediment core samples. The evaluation of proposed dredged material from the channel included bulk sediment chemical analyses, chemical analyses of site water and elutriate, water column and benthic acute toxicity tests, and bioaccumulation studies. Individual sediment core samples were analyzed for grain size, moisture content, and total organic carbon. A composite sediment sample, representing the entire area proposed for dredging, was analyzed for bulk density, polynuclear aromatic hydrocarbons, and 1,4-dichlorobenzene. Site water and elutriate were analyzed for metals, pesticides, and PCBs.
Near real time vapor detection and enhancement using aerosol adsorption
Novick, V.J.; Johnson, S.A.
1999-08-03
A vapor sample detection method is described where the vapor sample contains vapor and ambient air and surrounding natural background particles. The vapor sample detection method includes the steps of generating a supply of aerosol particles that have a particular effective median particle size, mixing the aerosol with the vapor sample to form aerosol and adsorbed vapor suspended in an air stream, impacting the suspended aerosol and adsorbed vapor upon a reflecting element, alternately directing infrared light to the impacted aerosol and adsorbed vapor, detecting and analyzing the alternately directed infrared light in essentially real time using a spectrometer and a microcomputer, and identifying the vapor sample. 13 figs.
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply on a lower spatial scale should be investigated further.
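The abstract does not specify the interpolation method, so the sketch below uses inverse distance weighting, one simple choice, to fill in cattle density estimates for non-sampled locations from sampled parish centroids; the coordinates and densities are made up for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation: a simple stand-in for the
    spatial interpolation step used to fill non-sampled areas."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# toy example: cattle densities sampled in 5 parishes, interpolated at 2 others
xy_s = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
dens = np.array([120.0, 80.0, 95.0, 60.0, 100.0])     # cattle per km^2
xy_q = np.array([[0.25, 0.75], [0.9, 0.1]])
print(idw(xy_s, dens, xy_q))
```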
Kellner, Elliott; Hubbart, Jason A
2017-11-15
Given the importance of suspended sediment to biogeochemical functioning of aquatic ecosystems, and the increasing concern over mixed-land-use effects on pollutant loading, there is an urgent need for research that quantitatively characterizes spatiotemporal variation of suspended sediment dynamics in contemporary watersheds. A study was conducted in a representative watershed of the central United States utilizing a nested-scale experimental watershed design, including five gauging sites (n=5) partitioning the catchment into five sub-watersheds. Hydroclimate stations at gauging sites were used to monitor air temperature, precipitation, and stream stage at 30-min intervals during the study (Oct. 2009-Feb. 2014). Streamwater grab samples were collected four times per week, at each site, for the duration of the study. Water samples were analyzed for suspended sediment using laser particle diffraction. Results showed significant differences (p<0.05) between monitoring sites for total suspended sediment concentration, mean particle size, and silt volume. Total concentration and silt volume showed a decreasing trend from the primarily agricultural upper watershed to the urban mid-watershed, and a subsequent increasing trend to the more suburban lower watershed. Conversely, mean particle size showed the opposite spatial trend. Results are explained by a combination of land use (e.g. urban stormwater dilution) and surficial geology (e.g. supply-controlled spatial variation of particle size). Correlation analyses indicated weak relationships with both hydroclimate and land use, indicating non-linear sediment dynamics. Suspended sediment parameters displayed consistent seasonality during the study, with total concentration decreasing through the growing season and mean particle size inversely tracking air temperature. Likely explanations include vegetation influences and climate-driven weathering cycles. Results reflect unique observations of spatiotemporal variation in suspended sediment particle size class. Such information is crucial for land and water resource managers working to mitigate aquatic ecosystem degradation and improve water resource sustainability in mixed-land-use watersheds globally. Copyright © 2017 Elsevier B.V. All rights reserved.
Hobin, E; Sacco, J; Vanderlee, L; White, C M; Zuo, F; Sheeshka, J; McVey, G; Fodor O'Brien, M; Hammond, D
2015-12-01
Given the proposed changes to nutrition labelling in Canada and the dearth of research examining comprehension and use of nutrition facts tables (NFts) by adolescents and young adults, our objective was to experimentally test the efficacy of modifications to NFts on young Canadians' ability to interpret, compare and mathematically manipulate nutrition information in NFts on prepackaged food. An online survey was conducted among 2010 Canadians aged 16 to 24 years drawn from a consumer sample. Participants were randomized to view two NFts according to one of six experimental conditions, using a between-groups 2 × 3 factorial design: serving size (current NFt vs. standardized serving sizes across similar products) × percent daily value (% DV) (current NFt vs. "low/med/high" descriptors vs. colour coding). The survey included seven performance tasks requiring participants to interpret, compare and mathematically manipulate nutrition information on NFts. Separate modified Poisson regression models were conducted for each of the three outcomes. The ability to compare two similar products was significantly enhanced in NFt conditions that included standardized serving sizes (p ≤ .001 for all). Adding descriptors or colour coding of % DV next to calories and nutrients on NFts significantly improved participants' ability to correctly interpret % DV information (p ≤ .001 for all). Providing both standardized serving sizes and descriptors of % DV had a modest effect on participants' ability to mathematically manipulate nutrition information to calculate the nutrient content of multiple servings of a product (relative ratio = 1.19; 95% confidence interval: 1.04-1.37). Standardizing serving sizes and adding interpretive % DV information on NFts improved young Canadians' comprehension and use of nutrition information. Some caution should be exercised in generalizing these findings to all Canadian youth due to the sampling issues associated with the study population. Further research is needed to replicate this study in a more heterogeneous sample in Canada and across a range of food products and categories.
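A modified Poisson model of this kind, a Poisson GLM for a binary outcome with robust sandwich standard errors yielding relative ratios, can be sketched with statsmodels on simulated data; the variable names and effect sizes below are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical data: task success under serving-size and %DV conditions
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "std_serving": rng.integers(0, 2, n),   # 0 = current, 1 = standardized
    "dv_format": rng.integers(0, 3, n),     # 0 = current, 1 = descriptor, 2 = colour
})
p_success = 0.40 + 0.15 * df["std_serving"] + 0.05 * (df["dv_format"] > 0)
df["correct"] = rng.binomial(1, p_success)

# modified Poisson: Poisson GLM for a binary outcome with robust (HC1) SEs
fit = smf.glm("correct ~ C(std_serving) + C(dv_format)", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))                   # relative ratios, as in the study
```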
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
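The standard normal-approximation formula implied by this discussion, n per group = 2(z_{1-α/2} + z_{1-β})² / d² for a standardized effect size d, takes a few lines:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """n per group, two-sample comparison of means (normal approximation);
    effect_size = smallest clinically important difference / SD."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z / effect_size) ** 2)

print(n_per_group(0.5))   # 63 by the normal approximation (~64 with t-correction)
print(n_per_group(0.2))   # smaller effects demand much larger samples: 393
```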
Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method
Lu, Zhaolin
2017-01-01
Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using scanning electron microscopy (SEM) at high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and to resolve the size inconsistencies found in sieving analysis. PMID:28298925
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the ways in which the data for these calculations may be derived are discussed.
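The formula family reviewed here centres on incremental net monetary benefit: at willingness-to-pay λ, the trial must detect NMB = λΔE − ΔC against its per-person variance λ²σ_E² + σ_C² − 2λρσ_Eσ_C, where ρ is the cost-effect correlation. A sketch with illustrative inputs:

```python
import math
from scipy.stats import norm

def n_per_arm_nmb(wtp, d_effect, d_cost, sd_e, sd_c, rho,
                  alpha=0.05, power=0.80):
    """n per arm to detect a positive incremental net monetary benefit (NMB)
    at willingness-to-pay `wtp`; rho is the cost-effect correlation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    nmb = wtp * d_effect - d_cost
    var = wtp**2 * sd_e**2 + sd_c**2 - 2 * wtp * rho * sd_e * sd_c
    return math.ceil(2 * z**2 * var / nmb**2)

# e.g. QALY gain 0.1, extra cost $2,000, WTP $50,000/QALY (all illustrative)
print(n_per_arm_nmb(50_000, 0.1, 2_000, sd_e=0.3, sd_c=6_000, rho=0.1))
```

Varying wtp in a loop reproduces the kind of sample-size table the abstract describes, since the required n changes sharply as λ approaches the incremental cost-effectiveness ratio.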
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials
NASA Astrophysics Data System (ADS)
Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.
2000-01-01
The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with a total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids, comets, etc., we have to ask ourselves how representative microscopic samples are of bodies that measure a few to many km across. With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), such as chondrules, CAIs, and large crystal fragments, points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore the most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, and meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns: the most advanced analysis techniques can always be applied in the laboratory, and well-preserved samples are available for future investigations.
Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.
2015-01-01
Whilst next generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
RESIDENTIAL INDOOR EXPOSURES OF CHILDREN TO PESTICIDES FOLLOWING LAWN APPLICATIONS
Methods have been developed to estimate children's residential exposures to pesticide residues and applied in a small field study of indoor exposures resulting from the intrusion of lawn-applied herbicide into the home. Sampling methods included size-selective indoor air sampling...
Automatic classification techniques for type of sediment map from multibeam sonar data
NASA Astrophysics Data System (ADS)
Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.
2018-02-01
Sediment maps provide important information for various applications such as oil drilling and environmental and pollution studies. A study on sediment mapping was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, namely the R2-Sonic multibeam echosounder. This study aims to determine sediment type from the backscatter and bathymetry data obtained by the multibeam echosounder. Ground truth data were used to verify the classification produced. The methods used to analyze ground truth samples were particle size analysis (PSA) and dry sieving, with different analyses carried out owing to the different sizes of the sediment samples obtained. Smaller-sized sediment was analyzed using a CILAS PSA instrument, while larger sediment was analyzed by sieving. Multibeam data acquisition included backscatter strength and bathymetry data, which were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four types of sediment: i) very coarse sand, ii) coarse sand, iii) very coarse silt, and iv) coarse silt. The accuracy was reported as 92.31% overall accuracy with a kappa coefficient of 0.88.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe
2016-12-01
To describe methods to determine sample sizes in surveys using open-ended questions and to assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% (IQR = 91-93%) of the themes identified in the original study. In our study, data saturation was most certainly reached for samples >150 participants. Our method may be used to determine whether to continue the study to find new themes or to stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
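The resampling idea can be sketched directly: given a subjects-by-themes incidence matrix from a completed or pilot study, Monte Carlo draws estimate the probability that the next included subject contributes an unseen theme. The toy matrix below is simulated; in practice it would come from coded interview data.

```python
import numpy as np

def p_new_theme(themes, n, n_rep=2000, seed=0):
    """P(subject n+1 contributes a theme unseen in the first n), estimated by
    resampling rows of a subjects-by-themes boolean incidence matrix."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rep):
        idx = rng.choice(themes.shape[0], size=n + 1, replace=True)
        seen = themes[idx[:n]].any(axis=0)
        hits += bool((themes[idx[n]] & ~seen).any())
    return hits / n_rep

# toy matrix: 200 subjects x 40 themes, later themes increasingly rare
rng = np.random.default_rng(1)
themes = rng.random((200, 40)) < np.linspace(0.4, 0.01, 40)
for n in (30, 50, 100):
    print(n, p_new_theme(themes, n))
```

Monitoring this probability as recruitment proceeds gives the continue-or-stop rule the abstract describes.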
A new sampler design for measuring sedimentation in streams
Hedrick, Lara B.; Welsh, S.A.; Hedrick, J.D.
2005-01-01
Sedimentation alters aquatic habitats and negatively affects fish and invertebrate communities but is difficult to quantify. To monitor bed load sedimentation, we designed a sampler with a 10.16-cm polyvinyl chloride coupling and removable sediment trap. We conducted a trial study of our samplers in riffle and pool habitats upstream and downstream of highway construction on a first-order Appalachian stream. Sediment samples were collected over three 6-week intervals, dried, and separated into five size-classes by means of nested sieves (U.S. standard sieve numbers 4, 8, 14, and 20). Downstream sediment accumulated in size-classes 1 and 2, and the total amount accumulated was significantly greater during all three sampling periods. Size-classes 3 and 4 had significantly greater amounts of sediment for the first two sampling periods at the downstream site. Differences between upstream and downstream sites narrowed during the 5-month sampling period. This probably reflects changes in site conditions, including the addition of more effective sediment control measures after the first 6-week period of the study. The sediment sampler design allowed for long-term placement of traps without continual disturbance of the streambed and was successful at providing repeat measures of sediment at paired sites. © Copyright 2005 by the American Fisheries Society.
Yasaki, Hirotoshi; Yasui, Takao; Yanagida, Takeshi; Kaji, Noritada; Kanai, Masaki; Nagashima, Kazuki; Kawai, Tomoji; Baba, Yoshinobu
2017-10-11
Measuring ionic currents passing through nano- or micropores has shown great promise for the electrical discrimination of various biomolecules, cells, bacteria, and viruses. However, conventional measurements have an inherent limit on the detectable particle volume (1% of the pore volume), which critically hinders applications to real mixtures of biomolecule samples with a wide size range of suspended particles. Here we propose a rational methodology that can detect particles down to 0.01% of the pore volume by measuring a transient current generated from the potential differences in a microfluidic bridge circuit. Our method substantially suppresses the background ionic current from the μA level to the pA level, which essentially lowers the detectable particle volume limit even for relatively large pore structures. Indeed, utilizing a microscale long pore structure (volume of 5.6 × 10⁴ aL; height and width of 2.0 × 2.0 μm; length of 14 μm), we successfully detected various samples including polystyrene nanoparticles (volume: 4 aL), bacteria, cancer cells, and DNA molecules. Our method will expand the applicability of ionic current sensing systems to various mixed biomolecule samples with a wide size range, which have been difficult to measure with previously existing pore technologies.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
Carter, James L.; Resh, Vincent H.
2001-01-01
A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs, but 80.2% of the methods used a mesh size between 500 and 600 μm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 μm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1× (sorting by eye) to 30×, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to detect power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, confounding of error variance with GRR, and parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harb, J.N.
This report describes work performed in the fifteenth quarter of a fundamental study to examine the effect of staged combustion on ash formation and deposition. Efforts this quarter included the addition of a new cyclone for improved particle sampling and modification of the existing sampling probe. Particulate samples were collected under a variety of experimental conditions for both coals under investigation. Deposits formed from the Black Thunder coal were also collected. Particle size and composition from the Pittsburgh No. 8 ash samples support previously reported results. In addition, the authors' ability to distinguish char/ash associations has been refined and applied to a variety of ash samples from this coal. The results show a clear difference between the behavior of included and excluded pyrite, and provide insight into the extent of pyrite oxidation. Ash samples from the Black Thunder coal have also been collected and analyzed. Results indicate a significant difference in the particle size of "unclassifiable" particles for ash formed during staged combustion. A difference in composition also appears to be present and is currently under investigation. Finally, deposits were collected under staged conditions for the Black Thunder coal. Specifically, two deposits were formed under similar conditions and allowed to mature under either reducing or oxidizing conditions in natural gas. Differences between the samples due to curing were noted. In addition, both deposits showed skeletal ash structures which resulted from in-situ burnout of the char after deposition.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
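The conical limit is easy to observe numerically. The sketch below draws data from a single-spike covariance model and tracks the angle between the leading sample eigenvector and its population counterpart as the dimension d grows with the sample size n fixed, so that the ratio of dimension to the product of sample size and spike size increases; all values are illustrative.

```python
import numpy as np

def eigvec_angle(n, d, spike, seed=0):
    """Angle (deg) between leading sample and population eigenvectors under a
    single-spike covariance Sigma = I + (spike - 1) e1 e1^T."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    X[:, 0] *= np.sqrt(spike)          # variance `spike` along e1, 1 elsewhere
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    u = Vt[0]                          # leading sample eigenvector
    return np.degrees(np.arccos(min(1.0, abs(u[0]))))

# HDLSS regime: fixed n, growing d, so the angle drifts away from zero
for d in (50, 500, 5000):
    print(d, round(eigvec_angle(n=20, d=d, spike=30.0), 1))
```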
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. Bias decreased and precision increased with increasing sample size, whether items were distributed randomly or clumped, and precision increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
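A stripped-down version of such a simulation, in Python rather than a GIS, places items either at random or clumped around parent points, samples circular cores, and reports the relative bias and spread of the density estimate; the plot size, core area, and clumping parameters are invented for illustration.

```python
import numpy as np

def core_sim(density=2000.0, plot_m2=25.0, core_cm2=81.0, n_cores=10,
             clustered=False, n_rep=500, seed=0):
    """Relative bias and relative SD of core-based density estimates."""
    rng = np.random.default_rng(seed)
    side = np.sqrt(plot_m2)
    r = np.sqrt(core_cm2 / 1e4 / np.pi)             # core radius (m)
    est = np.empty(n_rep)
    for k in range(n_rep):
        m = rng.poisson(density * plot_m2)
        if clustered:                                # items clumped around parents
            parents = rng.uniform(0, side, (15, 2))
            pts = (parents[rng.integers(0, 15, m)]
                   + rng.normal(0, 0.3, (m, 2))) % side
        else:                                        # items spread at random
            pts = rng.uniform(0, side, (m, 2))
        centres = rng.uniform(r, side - r, (n_cores, 2))
        counts = [(np.hypot(*(pts - c).T) < r).sum() for c in centres]
        est[k] = np.mean(counts) / (np.pi * r ** 2)
    return (est.mean() - density) / density, est.std() / density

print(core_sim())                  # random items: ~unbiased, modest spread
print(core_sim(clustered=True))    # clumped items: similar mean, larger spread
```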
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes, droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging, with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, including multiple stochastic discontinuous transitions, genuine non-self-averaging, and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
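The comparison between simple random and stratified sampling can be illustrated with a toy one-dimensional "field" carrying a smooth moisture gradient; with equal-size strata, the stratified estimator's standard deviation should come in below the simple random one, mirroring the recommendation of stratified allocation above. All values are synthetic.

```python
import numpy as np

def compare_designs(field, n, n_rep=5000, seed=0):
    """SD of the estimated field mean: simple random vs stratified sampling."""
    rng = np.random.default_rng(seed)
    srs = [field[rng.choice(field.size, n, replace=False)].mean()
           for _ in range(n_rep)]
    strata = np.array_split(field, 4)     # four contiguous spatial strata
    strat = [np.mean([s[rng.choice(s.size, n // 4, replace=False)].mean()
                      for s in strata]) for _ in range(n_rep)]
    return np.std(srs), np.std(strat)

# synthetic transect across one cell: a smooth moisture gradient plus noise
rng = np.random.default_rng(1)
field = 20 + 5 * np.sin(np.linspace(0, 3, 400)) + rng.normal(0, 1, 400)
print(compare_designs(field, n=16))       # stratified SD should be smaller
```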
Methodological reporting of randomized clinical trials in respiratory research in 2010.
Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce
2013-09-01
Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias due to a lack of adequately reported randomization, and therefore the reporting should be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research in high-ranking clinical journals published in 2010. We assessed the methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top-ranking clinical respiratory journals and 5 top-ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that a journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.
Rethinking non-inferiority: a practical trial design for optimising treatment duration.
Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb
2018-06-01
Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
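As an illustration of the modelling idea (not the authors' implementation), the sketch below grid-searches a degree-2 fractional polynomial over the conventional Royston-Altman power set to fit a simulated duration-response curve. The trial layout (6 equidistant arms of 80 patients) and the true curve 1 - exp(-x/10) are assumptions; a real analysis of a binary outcome would use logistic rather than least-squares fitting.

```python
import numpy as np

# Candidate fractional-polynomial powers (Royston-Altman convention: 0 -> log)
POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_term(x, p):
    return np.log(x) if p == 0 else x ** p

def fit_fp2(x, y):
    """Grid-search a degree-2 fractional polynomial by least squares."""
    best = None
    for i, p1 in enumerate(POWERS):
        for p2 in POWERS[i:]:
            # Repeated-power rule: second term becomes x^p * log(x)
            t2 = fp_term(x, p2) * (np.log(x) if p1 == p2 else 1.0)
            X = np.column_stack([np.ones_like(x), fp_term(x, p1), t2])
            beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ beta) ** 2)
            if best is None or sse < best[0]:
                best = (sse, (p1, p2), beta)
    return best

# Hypothetical trial: 6 equidistant duration arms, response rising with duration
x = np.repeat([4, 8, 12, 16, 20, 24], 80).astype(float)       # weeks
rng = np.random.default_rng(1)
y = (rng.random(x.size) < 1 - np.exp(-x / 10)).astype(float)  # assumed true curve
sse, powers, beta = fit_fp2(x, y)
print(powers, beta)   # selected power pair and fitted coefficients
```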
Assessing the Application of a Geographic Presence-Only Model for Land Suitability Mapping
Heumann, Benjamin W.; Walsh, Stephen J.; McDaniel, Phillip M.
2011-01-01
Recent advances in ecological modeling have focused on novel methods for characterizing the environment that use presence-only data and machine-learning algorithms to predict the likelihood of species occurrence. These novel methods may have great potential for land suitability applications in the developing world where detailed land cover information is often unavailable or incomplete. This paper assesses the adaptation and application of the presence-only geographic species distribution model, MaxEnt, for agricultural crop suitability mapping in rural Thailand, where lowland paddy rice and upland field crops predominate. To assess this modeling approach, three independent crop presence datasets, varying in their geographic distribution and sample size, were used: a social-demographic survey of farm households, a remote sensing classification of land use/land cover, and ground control points used for geodetic and thematic reference. Disparate environmental data were integrated to characterize environmental settings across Nang Rong District, a region of approximately 1,300 sq. km in size. Results indicate that the MaxEnt model is capable of modeling crop suitability for upland and lowland crops, including rice varieties, although model results varied between datasets due to the high sensitivity of the model to the distribution of observed crop locations in geographic and environmental space. Accuracy assessments indicate that model outcomes were influenced by the sample size and the distribution of sample points in geographic and environmental space. The need for further research into accuracy assessments of presence-only models lacking true absence data is discussed. We conclude that the MaxEnt model can provide good estimates of crop suitability, but many areas need to be carefully scrutinized, including the geographic distribution of input data and assessment methods, to ensure realistic modeling results. PMID:21860606
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
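The bias mechanism is easy to reproduce. The following sketch assumes a toy two-stage matrix model (not the authors' parameterization), draws vital-rate estimates from samples of n individuals, computes lambda as the dominant eigenvalue, and shows the bias shrinking as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
true_survival, true_fecundity = 0.5, 1.2   # assumed vital rates (low survival)

def lambda_hat(s, f):
    """Dominant eigenvalue of a minimal 2-stage matrix model."""
    A = np.array([[0.0, f], [s, s]])       # juveniles mature, adults persist
    return np.max(np.real(np.linalg.eigvals(A)))

true_lambda = lambda_hat(true_survival, true_fecundity)
for n in (25, 50, 100, 500, 2000):         # individuals sampled per vital rate
    est = [lambda_hat(rng.binomial(n, true_survival) / n,      # survival estimate
                      rng.poisson(true_fecundity * n) / n)     # fecundity estimate
           for _ in range(2000)]
    print(n, np.mean(est) - true_lambda)   # mean bias shrinks as n grows
```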
Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1
NASA Technical Reports Server (NTRS)
Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry;
2014-01-01
This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant reductions in risk and development cost. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.
NASA Technical Reports Server (NTRS)
Bond, Thomas H. (Technical Monitor); Anderson, David N.
2004-01-01
This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.
van Lieshout, Jan; Grol, Richard; Campbell, Stephen; Falcoff, Hector; Capell, Eva Frigola; Glehr, Mathias; Goldfracht, Margalit; Kumpusalo, Esko; Künzi, Beat; Ludt, Sabine; Petek, Davorina; Vanderstighelen, Veerle; Wensing, Michel
2012-10-05
Primary care has an important role in cardiovascular risk management (CVRM) and a minimum practice size may be needed for efficient delivery of CVRM. We examined CVRM in patients with coronary heart disease (CHD) in primary care and explored the impact of practice size. In an observational study in 8 countries we sampled CHD patients in primary care practices and collected data from electronic patient records. Practice samples were stratified according to practice size and urbanisation; patients were selected using coded diagnoses when available. CVRM was measured on the basis of internationally validated quality indicators. In the analyses, practice size was defined in terms of the number of patients registered or visiting the practice. We performed multilevel regression analyses controlling for patient age and sex. We included 181 practices (63% of the number targeted). Two countries included a convenience sample of practices. Data from 2960 CHD patients were available. Some countries used methods supplemental to coded diagnoses or other inclusion methods, introducing potential inclusion bias. We found substantial variation on all CVRM indicators across practices and countries. We computed aggregated practice scores as the percentage of patients with a positive outcome. Rates of risk factor recording varied from 55% for physical activity as the mean practice score across all practices (sd 32%) to 94% (sd 10%) for blood pressure. Rates for reaching treatment targets for systolic blood pressure, diastolic blood pressure and LDL cholesterol were 46% (sd 21%), 86% (sd 12%) and 48% (sd 22%) respectively. Rates for providing recommended cholesterol lowering and antiplatelet drugs were around 80%, and 70% received influenza vaccination. Practice size was not associated with indicator scores, with one exception: in Slovenia larger practices performed better. Variation was more related to differences between practices than between countries. CVRM measured by quality indicators showed wide variation within and between countries and possibly leaves room for improvement in all countries involved. Few associations of performance scores with practice size were found.
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiser, I; Lu, Z
2014-06-01
Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For N=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
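A rough illustration of why small image samples make performance estimates noisy: the sketch below assumes hypothetical Gaussian observer ratings in place of real CHO or template-matching outputs, estimates the 2AFC proportion correct from random subsets of image pairs, and reports its mean and relative spread at each sample size.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical ratings for 100 signal-present / 100 signal-absent ROIs
present = rng.normal(1.0, 1.0, 100)
absent = rng.normal(0.0, 1.0, 100)

def pc_2afc(sp, sa):
    """Proportion correct in 2AFC: P(rating_present > rating_absent)."""
    return np.mean(sp[:, None] > sa[None, :])

for n in (20, 40, 60, 80):
    pcs = [pc_2afc(rng.choice(present, n, replace=False),
                   rng.choice(absent, n, replace=False))
           for _ in range(1000)]
    # mean PC and its relative standard deviation at this sample size
    print(n, round(np.mean(pcs), 3), round(np.std(pcs) / np.mean(pcs), 3))
```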
76 FR 56141 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.
2017-01-01
Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with intellectual and developmental disabilities (IDD) with moderate to profound levels of impairment. The effect size metrics included percent of data points exceeding the median (PEM), percent of nonoverlapping data (PND), improvement rate difference (IRD), percent of all nonoverlapping data (PAND), Phi, nonoverlap of all pairs (NAP), and Taunovlap. Results showed that among the seven effect size metrics, PAND, Phi, IRD, and PND were more effective in quantifying intervention effects for the data sample (N = 285 phase or condition contrasts). Results are discussed with respect to issues concerning extracting and calculating effect sizes, visual analysis, and SCD intervention research in IDD. PMID:27119210
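For reference, two of the compared metrics are simple enough to compute directly. The sketch below implements PND and NAP from their standard definitions, using hypothetical session data and assuming an increase is the therapeutic direction.

```python
import numpy as np

def pnd(baseline, treatment):
    """Percent of nonoverlapping data: share of treatment points above the
    baseline maximum (assumes an increase is the desired direction)."""
    return 100 * np.mean(np.asarray(treatment) > np.max(baseline))

def nap(baseline, treatment):
    """Nonoverlap of all pairs: P(treatment > baseline), ties counted as half."""
    b, t = np.asarray(baseline), np.asarray(treatment)
    wins = (t[:, None] > b[None, :]).sum() + 0.5 * (t[:, None] == b[None, :]).sum()
    return wins / (len(b) * len(t))

baseline = [1, 2, 1, 3, 2]        # hypothetical correct responses per session
treatment = [4, 5, 3, 6, 6, 7]
print(pnd(baseline, treatment))   # -> 83.3 (5 of 6 points exceed baseline max)
print(nap(baseline, treatment))   # -> close to 1 for strong nonoverlap
```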
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowledge of the right sample size lets us judge whether the published results in medical papers reflect a suitable design and a proper conclusion according to the statistical analysis. To estimate the sample size, we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define the type of study: a prevalence study, a study of means, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
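As one simple example of the kind the abstract alludes to, here is the classic normal-approximation formula for comparing two means, with all the ingredients (alpha, power, variance, effect size) explicit; the numbers are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sample comparison of means, two-sided test (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2 per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# Hypothetical effect: detect a 5-unit mean difference when the SD is 10
print(n_per_group(5, 10))  # -> 63 per group
```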
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
What is the optimum sample size for the study of peatland testate amoeba assemblages?
Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J
2017-10-01
Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.
Modeling end-use quality in U. S. soft wheat germplasm
USDA-ARS?s Scientific Manuscript database
End-use quality in soft wheat (Triticum aestivum L.) can be assessed by a wide array of measurements, generally categorized into grain, milling, and baking characteristics. Samples were obtained from four regional nurseries. Selected parameters included: test weight, kernel hardness, kernel size, ke...
Ellis, Alisha M.; Marot, Marci E.; Wheaton, Cathryn J.; Bernier, Julie C.; Smith, Christopher G.
2016-02-03
This report is an archive for sedimentological data derived from the surface sediment of Chincoteague Bay. Data are available for the spring (March/April 2014) and fall (October 2014) samples collected. Downloadable data are provided as Excel spreadsheets and as JPEG files. Additional files include ArcGIS shapefiles of the sampling sites, detailed results of sediment grain-size analyses, and formal Federal Geographic Data Committee metadata (data downloads).
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample
ERIC Educational Resources Information Center
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
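In practice this calculation is a one-liner. A sketch using statsmodels' noncentral-t power routine, with an assumed standardized mean difference of 0.5 standing in for the scientifically important value:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t test: standardized mean
# difference 0.5, two-sided alpha = 0.05, power = 0.80
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                alternative='two-sided')
print(round(n))  # ~64 per group (exact noncentral-t calculation)
```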
Bello, Dhimiter; Wardle, Brian L; Zhang, Jie; Yamamoto, Namiko; Santeufemio, Christopher; Hallock, Marilyn; Virji, M Abbas
2010-01-01
This work investigated exposures to nanoparticles and nanofibers during solid core drilling of two types of advanced carbon nanotube (CNT)-hybrid composites: (1) reinforced plastic hybrid laminates (alumina fibers and CNT); and (2) graphite-epoxy composites (carbon fibers and CNT). Multiple real-time instruments were used to characterize the size distribution (5.6 nm to 20 microm), number and mass concentration, particle-bound polyaromatic hydrocarbons (b-PAHs), and surface area of airborne particles at the source and breathing zone. Time-integrated samples included grids for electron microscopy characterization of particle morphology and size resolved (2 nm to 20 microm) samples for the quantification of metals. Several new important findings herein include generation of airborne clusters of CNTs not seen during saw-cutting of similar composites, fewer nanofibers and respirable fibers released, similarly high exposures to nanoparticles with less dependence on the composite thickness, and ultrafine (< 5 nm) aerosol originating from thermal degradation of the composite material.
Brain Stimulation in Alzheimer's Disease.
Chang, Chun-Hung; Lane, Hsien-Yuan; Lin, Chieh-Hsin
2018-01-01
Brain stimulation techniques can modulate cognitive functions in many neuropsychiatric diseases. Pilot studies have shown promising effects of brain stimulation on Alzheimer's disease (AD). Brain stimulation can be categorized into non-invasive brain stimulation (NIBS) and invasive brain stimulation (IBS). IBS includes deep brain stimulation (DBS) and invasive vagus nerve stimulation (VNS), whereas NIBS includes transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), electroconvulsive treatment (ECT), magnetic seizure therapy (MST), cranial electrostimulation (CES), and non-invasive VNS. We reviewed the cutting-edge research on these brain stimulation techniques and discussed their therapeutic effects on AD. Both IBS and NIBS may have potential to be developed as novel treatments for AD; however, mixed findings may result from different study designs, patient selection, populations, or sample sizes. Therefore, the efficacy of NIBS and IBS in AD remains uncertain, and needs to be further investigated. Moreover, more standardized study designs with larger sample sizes and longitudinal follow-up are warranted for establishing a structural guide for future studies and clinical application.
The Discovery of Single-Nucleotide Polymorphisms—and Inferences about Human Demographic History
Wakeley, John; Nielsen, Rasmus; Liu-Cordero, Shau Neen; Ardlie, Kristin
2001-01-01
A method of historical inference that accounts for ascertainment bias is developed and applied to single-nucleotide polymorphism (SNP) data in humans. The data consist of 84 short fragments of the genome that were selected, from three recent SNP surveys, to contain at least two polymorphisms in their respective ascertainment samples and that were then fully resequenced in 47 globally distributed individuals. Ascertainment bias is the deviation, from what would be observed in a random sample, caused either by discovery of polymorphisms in small samples or by locus selection based on levels or patterns of polymorphism. The three SNP surveys from which the present data were derived differ both in their protocols for ascertainment and in the size of the samples used for discovery. We implemented a Monte Carlo maximum-likelihood method to fit a subdivided-population model that includes a possible change in effective size at some time in the past. Incorrectly assuming that ascertainment bias does not exist causes errors in inference, affecting both estimates of migration rates and historical changes in size. Migration rates are overestimated when ascertainment bias is ignored. However, the direction of error in inferences about changes in effective population size (whether the population is inferred to be shrinking or growing) depends on whether either the numbers of SNPs per fragment or the SNP-allele frequencies are analyzed. We use the abbreviation “SDL,” for “SNP-discovered locus,” in recognition of the genomic-discovery context of SNPs. When ascertainment bias is modeled fully, both the number of SNPs per SDL and their allele frequencies support a scenario of growth in effective size in the context of a subdivided population. If subdivision is ignored, however, the hypothesis of constant effective population size cannot be rejected. An important conclusion of this work is that, in demographic or other studies, SNP data are useful only to the extent that their ascertainment can be modeled. PMID:11704929
Anomalous permittivity in fine-grain barium titanate
NASA Astrophysics Data System (ADS)
Ostrander, Steven Paul
Fine-grain barium titanate capacitors exhibit anomalously large permittivity. It is often observed that these materials will double or quadruple the room temperature permittivity of a coarse-grain counterpart. However, aside from a general consensus on this permittivity enhancement, the properties of the fine-grain material are poorly understood. This thesis examines the effect of grain size on dielectric properties of a self-consistent set of high density undoped barium titanate capacitors. This set included samples with grain sizes ranging from submicron to ~20 μm, and with densities generally above 95% of the theoretical. A single batch of well characterized powder was milled, dry-pressed, then isostatically pressed. Compacts were fast-fired, but sintering temperature alone was used to control the grain size. With this approach, the extrinsic influences are minimized within the set of samples, but more importantly, they are normalized between samples. That is, with a single batch of powder and with identical green processing, uniform impurity concentration is expected. The fine-grain capacitors exhibited a room temperature permittivity of ~5500 and dielectric losses of ~2%. The Curie temperature decreased by ~5 °C from that of the coarse-grain material, and the two ferroelectric-ferroelectric phase transition temperatures increased by ~10 °C. The grain size induced permittivity enhancement was only active in the tetragonal and orthorhombic phases. Strong dielectric anomalies were observed in samples with grain size as small as ~0.4 μm. It is suggested that the strong first-order character observed in the present data is related to control of microstructure and stoichiometry. Grain size effects on conductivity losses, ferroelectric losses, ferroelectric dispersion, Maxwell-Wagner dispersion, and dielectric aging of permittivity and loss were observed. For the fine-grain material, these observations suggest the suppression of domain wall motion below the Curie transition, and the suppression of conductivity above the Curie transition.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the sampling unit size.
Nationwide Databases in Orthopaedic Surgery Research.
Bohl, Daniel D; Singh, Kern; Grauer, Jonathan N
2016-10-01
The use of nationwide databases to conduct orthopaedic research has expanded markedly in recent years. Nationwide databases offer large sample sizes, sampling of patients who are representative of the country as a whole, and data that enable investigation of trends over time. The most common use of nationwide databases is to study the occurrence of postoperative adverse events. Other uses include the analysis of costs and the investigation of critical hospital metrics, such as length of stay and readmission rates. Although nationwide databases are powerful research tools, readers should be aware of the differences between them and their limitations. These include variations and potential inaccuracies in data collection, imperfections in patient sampling, insufficient postoperative follow-up, and lack of orthopaedic-specific outcomes.
Electrical and magnetic properties of nano-sized magnesium ferrite
NASA Astrophysics Data System (ADS)
T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.
2015-02-01
Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared (FTIR) spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
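The abstract does not state the formula used, but XRD crystallite sizes of this kind are conventionally obtained from the Scherrer equation; a sketch with a hypothetical Cu K-alpha peak chosen to land near the reported 19 nm:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Scherrer equation D = K * lambda / (beta * cos(theta)), where beta is
    the peak FWHM in radians; a standard route to XRD crystallite size."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical Cu K-alpha peak: FWHM 0.45 deg at 2-theta = 35.5 deg
print(round(scherrer_size(0.15406, 0.45, 35.5)))  # -> 19 (nm)
```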
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by the bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data was also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
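A minimal sketch of the paper's second approach: estimate power at a candidate sample size by resampling pilot data and applying the Wilcoxon rank-sum test to each bootstrap replicate, matching the test planned for the final analysis. The lognormal pilot data here are hypothetical stand-ins for non-normal trial data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05, seed=0):
    """Estimate power at a candidate sample size by resampling pilot data
    and testing each bootstrap pair with the Wilcoxon rank-sum test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, n_per_arm, replace=True)
        b = rng.choice(pilot_b, n_per_arm, replace=True)
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / n_boot

# Hypothetical skewed pilot data violating the normality assumption
rng = np.random.default_rng(42)
pilot_a = rng.lognormal(0.0, 1.0, 30)
pilot_b = rng.lognormal(0.5, 1.0, 30)
for n in (20, 40, 80):
    print(n, bootstrap_power(pilot_a, pilot_b, n))  # power rises with n
```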
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
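A simplified version of this subsampling experiment can be scripted directly. The sketch below rarefies a hypothetical community matrix to various per-sample counts and uses the correlation between the full and rarefied Bray-Curtis distance matrices as a rough, Mantel-style proxy for whether the multivariate pattern is preserved; the data generation and the highlighted count of 58 are illustrative, not the paper's procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(11)

def rarefy(counts, n):
    """Randomly draw n individuals from one sample's abundance vector."""
    pool = np.repeat(np.arange(counts.size), counts)   # one entry per individual
    picks = rng.choice(pool, n, replace=False)
    return np.bincount(picks, minlength=counts.size)

# Hypothetical community matrix: 20 samples x 30 taxa, uneven abundances
full = rng.poisson(np.outer(rng.uniform(1, 5, 20), rng.pareto(1.2, 30) + 0.2))
d_full = pdist(full, metric='braycurtis')
for n in (30, 58, 100, 200):
    sub = np.array([rarefy(row, min(n, row.sum())) for row in full])
    r, _ = pearsonr(d_full, pdist(sub, metric='braycurtis'))
    print(n, round(r, 3))   # how well the rarefied data preserve the pattern
```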
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
NASA Astrophysics Data System (ADS)
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: (a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
[The research protocol III. Study population].
Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe
2016-01-01
The study population is defined as a set of cases that is determined, limited, and accessible, and that will constitute the subjects for the selection of the sample; it must fulfill several distinct characteristics and criteria. The objectives of this manuscript are focused on specifying each of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria and sampling methods. After delineating the study population, the researcher must specify the criteria that each participant has to meet. The criteria that specify these characteristics are termed selection or eligibility criteria. These criteria are inclusion, exclusion and elimination criteria, and they delineate the eligible population. Sampling methods are divided into two large groups: (1) probabilistic or random sampling and (2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every research project, it is necessary to establish at the outset the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and it can be calculated or estimated with mathematical formulas or statistical software.
Cratering in glasses impacted by debris or micrometeorites
NASA Technical Reports Server (NTRS)
Wiedlocher, David E.; Kinser, Donald L.
1993-01-01
Mechanical strength measurements on five glasses and one glass-ceramic exposed on LDEF revealed no damage exceeding experimental limits of error. The measurement technique subjected less than 5 percent of the sample surface area to stresses above 90 percent of the failure strength. Seven micrometeorite or space debris impacts occurred at locations which were not in that portion of the sample subjected to greater than 90 percent of the applied stress. As a result, the impact events on the samples were not detected in the mechanical strength measurements. The physical form and structure of the impact sites were carefully examined to determine the influence of those events upon the stress concentration associated with the impact and the resulting mechanical strength. The size of the impact site, insofar as it determines flaw size for fracture purposes, was examined. Surface topography of the impacts reveals that six of the seven sites display impact melting. The classical melt crater structure is surrounded by a zone of fractured glass. Residual stresses arising from shock compression and from cooling of the fused zone cannot be included in fracture mechanics analyses based on simple flaw size measurements. Strategies for refining estimates of mechanical strength degradation by impact events are presented.
NASA Astrophysics Data System (ADS)
Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.
2015-10-01
Single-particle compositional analysis of filter samples collected on-board the FAAM BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size distributions and size-segregated particle compositions. These data were compared to corresponding data from wing-mounted optical particle counters and reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYSPLIT analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
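The program itself is not reproduced here, but the flavour of a CV-driven sample size calculation can be shown with a deliberately crude simplification: treating annual survival as a binomial proportion. This ignores the recovery and recapture rates that real band-recovery models must include, so it yields only a lower bound on the number of birds to band.

```python
import math

def n_banded_for_cv(survival, target_cv):
    """Crude lower bound: if S_hat were a binomial proportion, then
    CV(S_hat) = sqrt((1 - S) / (n * S)), so n = (1 - S) / (S * CV^2).
    Real band-recovery models need more birds than this."""
    return math.ceil((1 - survival) / (survival * target_cv ** 2))

for s in (0.4, 0.6, 0.8):                  # e.g. young vs. adult survival
    print(s, n_banded_for_cv(s, 0.10))     # bandings needed for a 10% CV
```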
Neurocognitive performance in family-based and case-control studies of schizophrenia
Gur, Ruben C.; Braff, David L.; Calkins, Monica E.; Dobie, Dorcas J.; Freedman, Robert; Green, Michael F.; Greenwood, Tiffany A.; Lazzeroni, Laura C.; Light, Gregory A.; Nuechterlein, Keith H.; Olincy, Ann; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Sprock, Joyce; Stone, William S.; Sugar, Catherine A.; Swerdlow, Neal R.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Gur, Raquel E.
2014-01-01
Background Neurocognitive deficits in schizophrenia (SZ) are established and the Consortium on the Genetics of Schizophrenia (COGS) investigated such measures as endophenotypes in family-based (COGS-1) and case-control (COGS-2) studies. By requiring family participation, family-based sampling may result in samples that vary demographically and perform better on neurocognitive measures. Methods The Penn computerized neurocognitive battery (CNB) evaluates accuracy and speed of performance for several domains and was administered across sites in COGS-1 and COGS-2. Most tests were included in both studies. COGS-1 included 328 patients with SZ and 497 healthy comparison subjects (HCS) and COGS-2 included 1195 patients and 1009 HCS. Results Demographically, COGS-1 participants were younger, more educated, with more educated parents and higher estimated IQ compared to COGS-2 participants. After controlling for demographics, the two samples produced very similar performance profiles compared to their respective controls. As expected, performance was better and with smaller effect sizes compared to controls in COGS-1 relative to COGS-2. Better performance was most pronounced for spatial processing while emotion identification had large effect sizes for both accuracy and speed in both samples. Performance was positively correlated with functioning and negatively with negative and positive symptoms in both samples, but correlations were attenuated in COGS-2, especially with positive symptoms. Conclusions Patients ascertained through family-based design have more favorable demographics and better performance on some neurocognitive domains. Thus, studies that use case-control ascertainment may tap into populations with more severe forms of illness that are exposed to less favorable factors compared to those ascertained with family-based designs. PMID:25432636
Probability of coincidental similarity among the orbits of small bodies - I. Pairing
NASA Astrophysics Data System (ADS)
Jopek, Tadeusz Jan; Bronikowska, Małgorzata
2017-09-01
The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation; therefore, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small value of the probability of coincidental similarity between two orbits.
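The structure of such a simulation is easy to sketch. The toy example below generates synthetic orbits with independently drawn elements and estimates, as a function of sample size, the probability that at least one pair falls within a similarity threshold. A plain Euclidean metric stands in for the D-criteria used in actual cluster searches, and all distributions and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_orbits(n):
    """Toy synthetic population: (a, e, i) drawn independently. A real study
    would preserve the observed multivariate element distribution."""
    return np.column_stack([rng.uniform(0.5, 5.0, n),   # semi-major axis (au)
                            rng.uniform(0.0, 0.9, n),   # eccentricity
                            rng.uniform(0.0, 0.6, n)])  # inclination (rad)

def min_pair_distance(orbits):
    d = orbits[:, None, :] - orbits[None, :, :]
    dist = np.sqrt((d ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)          # ignore self-pairs
    return dist.min()

# P(at least one coincidental pair closer than the threshold) vs sample size
for n in (50, 100, 200):
    hits = sum(min_pair_distance(synthetic_orbits(n)) < 0.05 for _ in range(500))
    print(n, hits / 500)   # grows with the size of the orbital sample
```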
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
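The optimization described here, finding the smallest (n, c) plan whose operating characteristic meets producer and consumer risk points, can be sketched as follows. The AQL/LTPD values and risk levels are hypothetical, the lot-size cutoff is a heuristic, and the hypergeometric/Poisson split mirrors the two lot-size cases above.

```python
from scipy.stats import hypergeom, poisson

def smallest_plan(lot_size, aql, ltpd, alpha=0.05, beta=0.10, max_n=500):
    """Smallest (n, c): acceptance probability >= 1 - alpha at AQL quality
    and <= beta at LTPD quality. Hypergeometric for small lots, Poisson
    approximation for large ones."""
    small = lot_size <= 10 * max_n          # heuristic cutoff for this sketch
    for n in range(1, max_n + 1):
        for c in range(n):
            if small:
                pa_good = hypergeom.cdf(c, lot_size, round(aql * lot_size), n)
                pa_bad = hypergeom.cdf(c, lot_size, round(ltpd * lot_size), n)
            else:
                pa_good = poisson.cdf(c, n * aql)
                pa_bad = poisson.cdf(c, n * ltpd)
            if pa_good >= 1 - alpha and pa_bad <= beta:
                return n, c
    return None

# Lot of 500 map sheets, AQL 1% nonconforming, LTPD 8%
print(smallest_plan(lot_size=500, aql=0.01, ltpd=0.08))
```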
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
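One quantity central to this discussion, the effective sample size of an importance sampling scheme, can be computed in a few lines from the log-weights; the weights below are hypothetical stand-ins for the output of a coalescent proposal. As the abstract notes, ESS alone can rank schemes differently than ESS per unit running time.

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum w^2, computed stably from log-weights."""
    lw = np.asarray(log_weights) - np.max(log_weights)  # avoid overflow
    w = np.exp(lw)
    return w.sum() ** 2 / (w ** 2).sum()

# Hypothetical log importance weights from two proposal schemes
rng = np.random.default_rng(0)
print(effective_sample_size(rng.normal(0, 0.5, 10_000)))  # well-matched: high ESS
print(effective_sample_size(rng.normal(0, 3.0, 10_000)))  # poorly matched: low ESS
```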
Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying
2015-07-01
A novel magnetic solid-phase extraction method coupled with high-performance liquid chromatography was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and the extraction time significantly influenced the extraction performance. The increase in specific surface area, the enlargement of pore size, and the reduction of particle size could enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m²/g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limits of detection for phenanthrene and benzo[g,h,i]perylene in the developed analysis method were 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recoveries of phenanthrene and benzo[g,h,i]perylene ranged from 89.5 to 98.6% and from 82.9 to 89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Effective population sizes of a major vector of human diseases, Aedes aegypti.
Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R
2017-12-01
The effective population size (Ne) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, Ne is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate Ne in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate Ne on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our Ne estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (Nc) estimates for this species range between one and five thousand, so the Ne/Nc ratio is about the same as for most animals. These Ne values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.
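For readers unfamiliar with the temporal method named above, here is a hedged, single-locus sketch using Nei and Tajima's Fc statistic with Waples' (1989) correction for finite sample sizes; the study combined many loci and more refined estimators, so this is only a didactic approximation whose formula should be checked against the original references before use.

```python
# Hedged sketch of the temporal method for Ne: allele frequencies drift
# apart over t generations, and the standardised variance Fc (after
# removing sampling noise) is inversely proportional to Ne.
import numpy as np

def temporal_ne(x, y, t, s0, st):
    """x, y: allele-frequency arrays at one locus, t generations apart.
    s0, st: numbers of individuals sampled at the two time points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    fc = np.mean((x - y) ** 2 / ((x + y) / 2 - x * y))  # Nei-Tajima Fc
    f_drift = fc - 1 / (2 * s0) - 1 / (2 * st)          # sampling correction
    return t / (2 * f_drift) if f_drift > 0 else np.inf

# e.g. a biallelic locus drifting over 10 generations, 50 diploids per sample
print(temporal_ne([0.6, 0.4], [0.7, 0.3], t=10, s0=50, st=50))  # ~213
```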
Herzog, Sereina A; Low, Nicola; Berghold, Andrea
2015-06-19
The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT), using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
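The assumed control-group incidence and RR discussed above enter a standard two-proportion sample size formula; the sketch below shows that calculation with purely illustrative numbers, not those of the published RCT.

```python
# Standard per-group sample size for comparing two proportions
# (normal approximation), the calculation the modelled event rates
# and RR ultimately feed into.
from scipy.stats import norm

def n_per_group(p1, rr, alpha=0.05, power=0.9):
    p2 = p1 * rr                                    # event rate under screening
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2) ** 2

# e.g. 3% PID incidence in controls and RR = 0.5 under screening
print(round(n_per_group(0.03, 0.5)))  # ~2049 women per group
```

Small shifts in the assumed incidence or RR move this number substantially, which is the sensitivity the modelling study explores.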
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the concentration gradient of water induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increased cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.
Jorgenson, Andrew K; Clark, Brett
2013-01-01
This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.
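For orientation, the sketch below shows the basic shape of such a two-way fixed-effects, log-log elasticity regression on synthetic data; the coefficient on log population is the elasticity, and all numbers here are fabricated for illustration.

```python
# Two-way fixed-effects elasticity model on fake panel data:
# ln(CO2) regressed on ln(population) with country and year dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(20)]
years = range(1960, 2006, 5)
rows = [(c, y, rng.normal(10, 1)) for c in countries for y in years]
df = pd.DataFrame(rows, columns=["country", "year", "ln_pop"])
df["ln_co2"] = 0.8 * df["ln_pop"] + rng.normal(0, 0.3, len(df))  # true elasticity 0.8

fit = smf.ols("ln_co2 ~ ln_pop + C(country) + C(year)", data=df).fit()
print(fit.params["ln_pop"])  # recovered elasticity, close to 0.8
```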
Production of medical radioactive isotopes using KIPT electron driven subcritical facility.
Talamo, Alberto; Gohar, Yousry
2008-05-01
Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), has a plan to construct an electron accelerator driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed to characterize the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes were considered, and all transmutation channels were used, including (n, gamma), (n, 2n), (n, p), and (gamma, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates were examined to define the irradiation sample size for each parent isotope.
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial, since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and describes the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
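The Bayesian logic described above can be sketched by simulation: place a prior on the true effect, then estimate how often a PoC study of size n would clear a decision goalpost. All priors, goalposts, and variances below are illustrative assumptions, not values from the article.

```python
# Hedged simulation of a Bayesian PoC sizing exercise: draw the true
# effect from a prior, simulate the observed effect for a study of
# size n, and report the probability of a "go" decision.
import numpy as np

rng = np.random.default_rng(1)

def prob_clear_goalpost(n, goalpost=0.3, prior_mean=0.5, prior_sd=0.3,
                        sigma=1.0, sims=20_000):
    theta = rng.normal(prior_mean, prior_sd, sims)   # prior draws of true effect
    est = rng.normal(theta, sigma / np.sqrt(n))      # observed effect estimate
    return np.mean(est > goalpost)                   # P(PoC clears goalpost)

for n in (10, 20, 40, 80):
    print(n, round(prob_clear_goalpost(n), 3))
```

The diminishing gains as n grows are what make the goalpost, rather than the significance level, the binding choice in PoC planning.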
INDUSTRIAL RADIOGRAPHY COURSE, INSTRUCTORS' GUIDE. VOLUME 2.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Engineering Extension Service.
INFORMATION RELATIVE TO THE LESSON PLANS IN "INDUSTRIAL RADIOGRAPHY COURSE, INSTRUCTOR'S GUIDE, VOLUME I" (VT 003 565) IS PRESENTED ON 52 INFORMATION SHEETS INCLUDING THE SUBJECTS SHIELDING EQUATIONS AND LOGARITHMS, METAL PROPERTIES, FIELD TRIP INSTRUCTIONS FOR STUDENTS, WELDING SYMBOLS AND SIZES, SAMPLE REPORT FORMS, AND TYPICAL SHIPPING…
VizieR Online Data Catalog: Local Swift-BAT AGN observed with Herschel (Lutz+, 2018)
NASA Astrophysics Data System (ADS)
Lutz, D.; Shimizu, T.; Davies, R. I.; Herrera Camus, R.; Sturm, E.; Tacconi, L. J.; Veilleux, S.
2017-09-01
Table A.1 lists the basic properties of the BAT AGN and reference samples, and the derived far-infrared sizes. For guidance, part of the table and related notes are also included in an appendix to the paper. (1 data file).
A Laboratory Experiment for the Statistical Evaluation of Aerosol Retrieval (STEAR) Algorithms
NASA Astrophysics Data System (ADS)
Schuster, G. L.; Espinosa, R.; Ziemba, L. D.; Beyersdorf, A. J.; Rocha Lima, A.; Anderson, B. E.; Martins, J. V.; Dubovik, O.; Ducos, F.; Fuertes, D.; Lapyonok, T.; Shook, M.; Derimian, Y.; Moore, R.
2016-12-01
We have developed a method for validating Aerosol Robotic Network (AERONET) retrieval algorithms by mimicking atmospheric extinction and radiance measurements in a laboratory experiment. This enables radiometric retrievals that utilize the same sampling volumes, relative humidities, and particle size ranges as observed by other in situ instrumentation in the experiment. We utilize three Cavity Attenuated Phase Shift (CAPS) monitors for extinction and UMBC's three-wavelength Polarized Imaging Nephelometer (PI-Neph) for angular scattering measurements. We subsample the PI-Neph radiance measurements to angles that correspond to AERONET almucantar scans, with solar zenith angles ranging from 50 to 77 degrees. These measurements are then used as input to the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm, which retrieves size distributions, complex refractive indices, single-scatter albedos (SSA), and lidar ratios for the in situ samples. We obtained retrievals with residuals R < 10% for 100 samples. The samples that we tested include Arizona Test Dust, Arginotec NX, Senegal clay, Israel clay, montmorillonite, hematite, goethite, volcanic ash, ammonium nitrate, ammonium sulfate, and fullerene soot. Samples were alternately dried or humidified, and size distributions were limited to diameters of 1.0 or 2.5 µm by using a cyclone. The SSA at 532 nm for these samples ranged from 0.59 to 1.00 when computed with CAPS extinction and PSAP absorption measurements. The GRASP retrieval provided SSAs that are highly correlated with the in situ SSAs, and the correlation coefficients ranged from 0.955 to 0.976, depending upon the simulated solar zenith angle. The GRASP SSAs exhibited an average absolute bias of +0.023 ± 0.01 with respect to the extinction and absorption measurements for the entire dataset. Although our apparatus was not capable of measuring backscatter lidar ratio, we did measure bistatic lidar ratios at a scattering angle of 173°. The GRASP bistatic lidar ratios had correlations of 0.488 to 0.735 (depending upon simulated SZA) with respect to in situ measurements, positive relative biases of 6-10%, and average absolute biases of 4.0-6.6 sr. We also compared the GRASP size distributions to aerodynamic particle size measurements.
Bulluck, Heerajnarain; Hammond-Haley, Matthew; Weinmann, Shane; Martinez-Macias, Roberto; Hausenloy, Derek J
2017-03-01
The aim of this study was to review randomized controlled trials (RCTs) using cardiac magnetic resonance (CMR) to assess myocardial infarct (MI) size in reperfused patients with ST-segment elevation myocardial infarction (STEMI). There is limited guidance on the use of CMR in clinical cardioprotection RCTs in patients with STEMI treated by primary percutaneous coronary intervention. All RCTs in which CMR was used to quantify MI size in patients with STEMI treated with primary percutaneous coronary intervention were identified and reviewed. Sixty-two RCTs (10,570 patients, January 2006 to November 2016) were included. One-third did not report CMR vendor or scanner strength, the contrast agent and dose used, and the MI size quantification technique. Gadopentetate dimeglumine was most commonly used, followed by gadoterate meglumine and gadobutrol at 0.20 mmol/kg each, with late gadolinium enhancement acquired at 10 min; in most RCTs, MI size was quantified manually, followed by the 5 standard deviation threshold; dropout rates were 9% for acute CMR only and 16% for paired acute and follow-up scans. Weighted mean acute and chronic MI sizes (≤12 h, initial TIMI [Thrombolysis in Myocardial Infarction] flow grade 0 to 3) from the control arms were 21 ± 14% and 15 ± 11% of the left ventricle, respectively, and could be used for future sample-size calculations. Pre-selecting patients most likely to benefit from the cardioprotective therapy (≤6 h, initial TIMI flow grade 0 or 1) reduced sample size by one-third. Other suggested recommendations for standardizing CMR in future RCTs included gadobutrol at 0.15 mmol/kg with late gadolinium enhancement at 15 min, manual or 6-SD threshold for MI quantification, performing acute CMR at 3 to 5 days and follow-up CMR at 6 months, and adequate reporting of the acquisition and analysis of CMR. There is significant heterogeneity in RCT design using CMR in patients with STEMI. The authors provide recommendations for standardizing the assessment of MI size using CMR in future clinical cardioprotection RCTs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
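As a worked example of how the weighted means above could seed a sample size calculation, the sketch below applies the standard two-arm formula for a continuous outcome; the assumed 4 %LV treatment effect is an illustrative choice, not a recommendation from the review.

```python
# Per-arm sample size for a continuous outcome (two-sided test,
# normal approximation), seeded with the control-arm acute MI size
# summary quoted above (21 +/- 14 %LV).
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.8):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * (sd * (za + zb) / delta) ** 2

# suppose a cardioprotective therapy is expected to cut MI size by 4 %LV
print(round(n_per_arm(sd=14, delta=4)))  # ~193 patients per arm
```

Pre-selecting patients with larger expected infarcts, as the review recommends, raises delta relative to sd and shrinks this number accordingly.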
Physical properties of glasses exposed to Earth-facing and trailing-side environments on LDEF
NASA Technical Reports Server (NTRS)
Wiedlocher, David E.; Kinser, Donald L.; Weller, Robert A.; Weeks, Robert A.; Mendenhall, Marcus H.
1993-01-01
The exposure of 108 glass samples and 12 glass-ceramic samples to Earth-orbit environments permitted measurements which establish the effects of each environment. Examination of five glass types and one glass ceramic located on both the Earth-facing side and the trailing edge revealed no reduction in strength within experimental limits. Strength measurements subjected less than 5 percent of the sample surface area to stresses above 90 percent of the glass's failure strength. Seven micrometeorite or space debris impacts occurred on trailing edge samples. One of those impacts occurred in a location which was subjected to 50 percent of the applied stress at failure. Micrometeorite or space debris impacts were not observed on Earth-facing samples. The physical shape and structure of the impact sites were carefully examined using stereographic scanning electron microscopy. These impacts induce a stress concentration at the damaged region which influences mechanical strength. The flaw size produced by such damage was examined to determine the magnitude of strength degradation in micrometeorite or space-debris impacted glasses. Scanning electron microscopy revealed topographical details of impact sites which included central melt zones and glass fiber production. The overall crater structure is similar to that of much larger meteorite impacts on the Moon, in that the melt crater is surrounded by shocked regions of material that include fracture zones and spall areas. Residual stresses arising from shock compression and cooling of the fused zone cannot currently be included in fracture mechanics analyses based on simple flaw size examination.
NASA Astrophysics Data System (ADS)
Bolling, Denzell Tamarcus
A significant amount of research has been devoted to the characterization of new engineering materials. Searching for new alloys which may improve weight, ultimate strength, or fatigue life is just one of the reasons why researchers study different materials. In support of that mission, this study focuses on the effects of specimen geometry and size on the dynamic failure of AA2219 aluminum alloy subjected to impact loading. Using the Split Hopkinson Pressure Bar (SHPB) system, samples of different geometries, including cubic, rectangular, cylindrical, and frustum samples, were loaded at strain rates ranging from 1000 s⁻¹ to 6000 s⁻¹. The deformation properties, including the potential for the formation of adiabatic shear bands, of the different geometries are compared. Overall, the cubic geometry achieves the highest critical strain and maximum stress values at low strain rates, and the rectangular geometry has the highest critical strain and maximum stress at high strain rates. The frustum geometry consistently achieves the lowest maximum stress value compared to the other geometries under equal strain rates. All sample types clearly indicated susceptibility to strain localization at different locations within the sample geometry. Micrograph analysis indicated that adiabatic shear band geometry was influenced by sample geometry, and that specimens with a circular cross section are more susceptible to shear band formation than specimens with a rectangular cross section.
Chenais, Erika; Sternberg-Lewerin, Susanna; Boqvist, Sofia; Liu, Lihong; LeBlanc, Neil; Aliro, Tonny; Masembe, Charles; Ståhl, Karl
2017-02-01
In Uganda, a low-income country in east Africa, African swine fever (ASF) is endemic with yearly outbreaks. In the prevailing smallholder subsistence farming systems, farm biosecurity is largely non-existent. Outbreaks of ASF, particularly in smallholder farms, often go unreported, creating significant epidemiological knowledge gaps. The continuous circulation of ASF in smallholder settings also creates biosecurity challenges for larger farms. In this study, an on-going outbreak of ASF in an endemic area was investigated at the farm level, including analyses of on-farm environmental virus contamination. The study was carried out on a medium-sized pig farm with 35 adult pigs and 103 piglets or growers at the onset of the outbreak. Within 3 months, all pigs had died or were slaughtered. The study included interviews with farm representatives as well as biological and environmental sampling. ASF was confirmed by the presence of ASF virus (ASFV) genomic material in biological (blood, serum) and environmental (soil, water, feed, manure) samples by real-time PCR. The ASFV-positive biological samples confirmed the clinical assessment and were consistent with known virus characteristics. Most environmental samples were found to be positive. Assessment of farm biosecurity, interviews, and the results from the biological and environmental samples revealed that breaches and non-compliance with biosecurity protocols most likely led to the introduction and within-farm spread of the virus. The information derived from this study provides valuable insight regarding the implementation of biosecurity measures, particularly in endemic areas.
SOLAR SYSTEM MOONS AS ANALOGS FOR COMPACT EXOPLANETARY SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, Stephen R.; Hinkel, Natalie R.; Raymond, Sean N., E-mail: skane@ipac.caltech.edu
2013-11-01
The field of exoplanetary science has experienced a recent surge of new systems that is largely due to the precision photometry provided by the Kepler mission. The latest discoveries have included compact planetary systems in which the orbits of the planets all lie relatively close to the host star, which presents interesting challenges in terms of formation and dynamical evolution. The compact exoplanetary systems are analogous to the moons orbiting the giant planets in our solar system, in terms of their relative sizes and semimajor axes. We present a study that quantifies the scaled sizes and separations of the solar system moons with respect to their hosts. We perform a similar study for a large sample of confirmed Kepler planets in multi-planet systems. We show that a comparison between the two samples leads to a similar correlation between their scaled sizes and separation distributions. The different gradients of the correlations may be indicative of differences in the formation and/or long-term dynamics of moon and planetary systems.
Melvin, Elizabeth M.; Moore, Brandon R.; Gilchrist, Kristin H.; Grego, Sonia; Velev, Orlin D.
2011-01-01
The recent development of microfluidic “lab on a chip” devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing. PMID:22662040
NASA Astrophysics Data System (ADS)
Peller, Joseph A.; Ceja, Nancy K.; Wawak, Amanda J.; Trammell, Susan R.
2018-02-01
Polarized light imaging and optical spectroscopy can be used to distinguish between healthy and diseased tissue. In this study, the design and testing of a single-pixel hyperspectral imaging system that uses differences in the polarization of light reflected from tissue to differentiate between healthy and thermally damaged tissue are discussed. Thermal lesions were created in porcine skin samples (n = 8) using an IR laser. The damaged regions were clearly visible in the polarized light hyperspectral images. Reflectance hyperspectral and white light imaging were also performed for all tissue samples. Sizes of the thermally damaged regions as measured via polarized light hyperspectral imaging are compared to sizes of these regions as measured in the reflectance hyperspectral images and white light images. Good agreement between the sizes measured by all three imaging modalities was found. Hyperspectral polarized light imaging can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during cancer surgery or pre-surgical biopsy.
Tran, Ngoc Han; Ngo, Huu Hao; Urase, Taro; Gin, Karina Yew-Hoong
2015-10-01
The presence of organic matter (OM) in raw wastewater, treated wastewater effluents, and natural water samples has been known to cause many problems in wastewater treatment and water reclamation processes, such as treatability, membrane fouling, and the formation of potentially toxic by-products during wastewater treatment. This paper summarizes the current knowledge on the methods for characterization and quantification of OM in water samples in relation to wastewater and water treatment processes including: (i) characterization based on the biodegradability; (ii) characterization based on particle size distribution; (iii) fractionation based on the hydrophilic/hydrophobic properties; (iv) characterization based on the molecular weight (MW) size distribution; and (v) characterization based on fluorescence excitation emission matrix. In addition, the advantages, disadvantages and applications of these methods are discussed in detail. The establishment of correlations among biodegradability, hydrophobic/hydrophilic fractions, MW size distribution of OM, membrane fouling and formation of toxic by-products potential is highly recommended for further studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reduced amygdalar and hippocampal size in adults with generalized social phobia.
Irle, Eva; Ruhleder, Mirjana; Lange, Claudia; Seidler-Brandler, Ulrich; Salzer, Simone; Dechent, Peter; Weniger, Godehard; Leibing, Eric; Leichsenring, Falk
2010-03-01
Structural and functional brain imaging studies suggest abnormalities of the amygdala and hippocampus in posttraumatic stress disorder and major depressive disorder. However, structural brain imaging studies in social phobia are lacking. In total, 24 patients with generalized social phobia (GSP) and 24 healthy controls underwent 3-dimensional structural magnetic resonance imaging of the amygdala and hippocampus and a clinical investigation. Compared with controls, GSP patients had significantly reduced amygdalar (13%) and hippocampal (8%) size. The reduction in the size of the amygdala was statistically significant for men but not women. Smaller right-sided hippocampal volumes of GSP patients were significantly related to stronger disorder severity. Our sample included only patients with the generalized subtype of social phobia. Because we excluded patients with comorbid depression, our sample may not be representative. We report for the first time volumetric results in patients with GSP. Future assessment of these patients will clarify whether these changes are reversed after successful treatment and whether they predict treatment response.
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to untreated samples. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for this conspicuous improvement of the thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the best thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology unit; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that of samples with small unit sizes.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or, alternatively, Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
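The simulation logic described above is easy to reproduce in miniature; the sketch below asks how often the Shapiro-Wilk test (P < .05) correctly rejects samples drawn from a lognormal population at n = 30 versus n = 60. The lognormal shape parameter is an arbitrary choice, so the exact percentages will not match the study's.

```python
# Miniature version of the study's specificity simulation: draw samples
# from a lognormal population and count how often Shapiro-Wilk flags
# them as non-Gaussian at the conventional 0.05 level.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(7)
for n in (30, 60):
    rejected = sum(shapiro(rng.lognormal(0, 0.5, size=n)).pvalue < 0.05
                   for _ in range(1000))
    print(f"n={n}: lognormal samples flagged non-Gaussian in {rejected/10:.1f}%")
```

At n = 30 the rejection rate is well below 100%, which is exactly the failure mode that leads to parametric RIs being built on non-Gaussian data.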
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
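Independently of the authors' R package (micropower), the shape of such a simulation-based power estimate can be sketched as follows: simulate grouped points, compute the PERMANOVA pseudo-F on the pairwise distance matrix, take a permutation p-value, and report the rejection rate. Group sizes, effect shift, and dimensionality below are illustrative.

```python
# Simulation-based PERMANOVA power sketch: Euclidean toy data stand in
# for the pre-specified distance distributions the paper models.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)

def pseudo_f(d, labels):
    """Anderson's pseudo-F from a square distance matrix and group labels."""
    n = len(labels)
    ss_total = (d ** 2).sum() / (2 * n)
    ss_within = 0.0
    for g in set(labels):
        idx = np.where(labels == g)[0]
        ss_within += (d[np.ix_(idx, idx)] ** 2).sum() / (2 * len(idx))
    a = len(set(labels))
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(d, labels, n_perm=199):
    f_obs = pseudo_f(d, labels)
    hits = sum(pseudo_f(d, rng.permutation(labels)) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

def power(n_per_group=15, shift=0.8, sims=100):
    rejections = 0
    for _ in range(sims):
        x = rng.normal(0, 1, (2 * n_per_group, 5))
        x[n_per_group:, 0] += shift                 # group-level effect
        labels = np.repeat([0, 1], n_per_group)
        rejections += permanova_p(squareform(pdist(x)), labels) < 0.05
    return rejections / sims

print(power())  # estimated power for this effect size and sample size
```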
NASA Astrophysics Data System (ADS)
Shin, Jung-Wook; Park, Jinku; Choi, Jang-Geun; Jo, Young-Heon; Kang, Jae Joong; Joo, HuiTae; Lee, Sang Heon
2017-12-01
The aim of this study was to examine the size structure of phytoplankton under varying coastal upwelling intensities and to determine the resulting primary productivity in the southwestern East Sea. Samples of phytoplankton assemblages were collected on five occasions from the Hupo Bank, off the east coast of Korea, during 2012-2013. Because two major surface currents have a large effect on water mass transport in this region, we first performed a Backward Particle Tracking Experiment (BPTE) to determine the coastal sea from which the collected samples originated according to advection time of BPTE particles, following which we used upwelling age (UA) to determine the intensity of coastal upwelling in the region of origin for each sample. Only samples that were affected by coastal upwelling in the region of origin were included in subsequent analyses. We found that as UA increased, there was a decreasing trend in the concentration of picophytoplankton, and increasing trends in the concentration of nanophytoplankton and microphytoplankton. We also examined the relationship between the size structure of phytoplankton and primary productivity in the Ulleung Basin (UB), which has experienced significant variation over the past decade. We found that primary productivity in UB was closely related to the strength of the southerly wind, which is the most important mechanism for coastal upwelling in the southwestern East Sea. Thus, the size structure of phytoplankton is determined by the intensity of coastal upwelling, which is regulated by the southerly wind, and makes an important contribution to primary productivity.
Microfluidic integration of parallel solid-phase liquid chromatography.
Huft, Jens; Haynes, Charles A; Hansen, Carl L
2013-03-05
We report the development of a fully integrated microfluidic chromatography system based on a recently developed column geometry that allows for robust packing of high-performance separation columns in poly(dimethylsiloxane) microfluidic devices having integrated valves made by multilayer soft lithography (MSL). The combination of parallel high-performance separation columns and on-chip plumbing was used to achieve a fully integrated system for on-chip chromatography, including all steps of automated sample loading, programmable gradient generation, separation, fluorescent detection, and sample recovery. We demonstrate this system in the separation of fluorescently labeled DNA and parallel purification of reverse transcription polymerase chain reaction (RT-PCR) amplified variable regions of mouse immunoglobulin genes using a strong anion exchange (AEX) resin. Parallel sample recovery in an immiscible oil stream offers the advantage of low sample dilution and high recovery rates. The ability to perform nucleic acid size selection and recovery on subnanogram samples of DNA holds promise for on-chip genomics applications including sequencing library preparation, cloning, and sample fractionation for diagnostics.
Ultrasonic characterization of single drops of liquids
Sinha, D.N.
1998-04-14
Ultrasonic characterization of single drops of liquids is disclosed. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities. 5 figs.
NASA Technical Reports Server (NTRS)
Eigenbrode, J. L.; Glavin, D.; Coll, P.; Summons, R. E.; Mahaffy, P.; Archer, D.; Brunner, A.; Conrad, P.; Freissinet, C.; Martin, M.;
2013-01-01
A key challenge in assessing the habitability of martian environments is the detection of organic matter - a requirement of all life as we know it. The Curiosity rover, which landed on August 6, 2012 in Gale Crater of Mars, includes the Sample Analysis at Mars (SAM) instrument suite capable of in situ analysis of gaseous organic components thermally evolved from sediment samples collected, sieved, and delivered by the MSL rover. On Sol 94, SAM received its first solid sample: scooped sediment from Rocknest that was sieved to <150 µm particle size. Multiple 10-40 mg portions of the scoop #5 sample were delivered to SAM for analyses. Prior to their introduction, a blank (empty cup) analysis was performed. This blank served 1) to clean the analytical instrument of SAM-internal materials that had accumulated in the gas processing system since integration into the rover, and 2) to characterize the background signatures of SAM. Both the blank and the Rocknest samples showed the presence of hydrocarbon components.
NASA Astrophysics Data System (ADS)
Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan
2002-02-01
Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-µm pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-µm pore size capsule filter, (2) a 0.45-µm pore size capsule filter and a 0.0029-µm pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-µm and a 0.05-µm pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.
Valsecchi, E; Palsbøll, P; Hale, P; Glockner-Ferrari, D; Ferrari, M; Clapham, P; Larsen, F; Mattila, D; Sears, R; Sigurjonsson, J; Brown, M; Corkeron, P; Amos, B
1997-04-01
Mitochondrial DNA haplotypes of humpback whales show strong segregation between oceanic populations and between feeding grounds within oceans, but this highly structured pattern does not exclude the possibility of extensive nuclear gene flow. Here we present allele frequency data for four microsatellite loci typed across samples from four major oceanic regions: the North Atlantic (two mitochondrially distinct populations), the North Pacific, and two widely separated Antarctic regions, East Australia and the Antarctic Peninsula. Allelic diversity is a little greater in the two Antarctic samples, probably indicating historically greater population sizes. Population subdivision was examined using a wide range of measures, including Fst, various alternative forms of Slatkin's Rst, Goldstein and colleagues' delta mu, and a Monte Carlo approximation to Fisher's exact test. The exact test revealed significant heterogeneity in all but one of the pairwise comparisons between geographically adjacent populations, including the comparison between the two North Atlantic populations, suggesting that gene flow between oceans is minimal and that dispersal patterns may sometimes be restricted even in the absence of obvious barriers, such as land masses, warm water belts, and antitropical migration behavior. The only comparison where heterogeneity was not detected was the one between the two Antarctic population samples. It is unclear whether failure to find a difference here reflects gene flow between the regions or merely lack of statistical power arising from the small size of the Antarctic Peninsula sample. Our comparison between measures of population subdivision revealed major discrepancies between methods, with little agreement about which populations were most and least separated. We suggest that unbiased Rst (URst, see Goodman 1995) is currently the most reliable statistic, probably because, unlike the other methods, it allows for unequal sample sizes. However, in view of the fact that these alternative measures often contradict one another, we urge caution in the use of microsatellite data to quantify genetic distance.
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for the expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
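The paper derives its own slippage formula; the sketch below uses a deliberately simplified stand-in, assuming each unit in a lot of size N is inspected with probability n/N and an inspected infested unit is detected and removed with probability d, so expected slippage is p(N - dn). Under that assumption the marginal value of one extra sample is pd, which is why a greedy allocation gravitates to the dirtiest (and, capacity permitting, largest) lots. All lot data are made up.

```python
# Simplified slippage model and greedy budget allocation across lots.
def expected_slippage(N, n, p, d=0.9):
    # infested units that pass inspection under the simple per-unit model
    return p * (N - d * n)

def allocate(lots, budget):
    """lots: list of (lot_size, infestation_rate). Returns samples per lot,
    spending the budget on the highest-infestation lots first."""
    n = [0] * len(lots)
    for i in sorted(range(len(lots)), key=lambda i: lots[i][1], reverse=True):
        take = min(lots[i][0], budget)
        n[i], budget = take, budget - take
        if budget == 0:
            break
    return n

lots = [(1000, 0.05), (500, 0.02), (2000, 0.08)]
plan = allocate(lots, budget=1500)
print(plan)  # the large, high-infestation lot absorbs the whole budget
print(sum(expected_slippage(N, ni, p) for (N, p), ni in zip(lots, plan)))
```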
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
Biased phylodynamic inferences from analysing clusters of viral sequences
Xiang, Fei; Frost, Simon D. W.
2017-01-01
Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a ‘power law’ behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We quantify the effect of using a falsely identified ‘transmission cluster’ of sequences to estimate phylodynamic parameters including the effective population size and exponential growth rate under several demographic scenarios. Our simulation studies show that taking the maximum size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant population size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573
Phonological skills and their role in learning to read: a meta-analytic review.
Melby-Lervåg, Monica; Lyster, Solveig-Alma Halaas; Hulme, Charles
2012-03-01
The authors report a systematic meta-analytic review of the relationships among 3 of the most widely studied measures of children's phonological skills (phonemic awareness, rime awareness, and verbal short-term memory) and children's word reading skills. The review included both extreme group studies and correlational studies with unselected samples (235 studies were included, and 995 effect sizes were calculated). Results from extreme group comparisons indicated that children with dyslexia show a large deficit on phonemic awareness in relation to typically developing children of the same age (pooled effect size estimate: -1.37) and children matched on reading level (pooled effect size estimate: -0.57). There were significantly smaller group deficits on both rime awareness and verbal short-term memory (pooled effect size estimates: rime skills in relation to age-matched controls, -0.93, and reading-level controls, -0.37; verbal short-term memory skills in relation to age-matched controls, -0.71, and reading-level controls, -0.09). Analyses of studies of unselected samples showed that phonemic awareness was the strongest correlate of individual differences in word reading ability and that this effect remained reliable after controlling for variations in both verbal short-term memory and rime awareness. These findings support the pivotal role of phonemic awareness as a predictor of individual differences in reading development. We discuss whether such a relationship is a causal one and the implications of research in this area for current approaches to the teaching of reading and interventions for children with reading difficulties.
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
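To put numbers on the chapter's argument, the short sketch below computes the required per-group N for a two-sample t-test across a range of effect sizes (using statsmodels); required N falls steeply as the detectable effect size grows, which is why protecting and enlarging the effect size is as much a power strategy as recruiting more subjects.

```python
# Required per-group N at 80% power, alpha = .05, as the detectable
# standardized effect size (Cohen's d) grows.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for es in (0.2, 0.3, 0.5, 0.8):
    n = solver.solve_power(effect_size=es, alpha=0.05, power=0.8)
    print(f"d = {es}: {n:.0f} per group")
# roughly 394, 176, 64, and 26 per group: halving the detectable
# effect size quadruples the required sample.
```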
Recall of health warnings in smokeless tobacco ads
Truitt, L; Hamilton, W; Johnston, P; Bacani, C; Crawford, S; Hozik, L; Celebucki, C
2002-01-01
Design: Subjects examined two distracter ads and one of nine randomly assigned smokeless tobacco ads varying in health warning presence, size (8 to 18 point font), and contrast (low versus high)—including no health warning. They were then interviewed about ad content using recall and recognition questions. Subjects: A convenience sample of 895 English speaking males aged 16–24 years old who were intercepted at seven shopping malls throughout Massachusetts during May 2000. Main outcome measures: Proven aided recall, or recall of a health warning and correct recognition of the warning message among distracters, and false recall. Results: Controlling for covariates such as education, employment/student status, and Hispanic background, proven aided recall increased significantly with font size; doubling size from 10 to 20 point font would increase recall from 63% to 76%. Although not statistically significant, recall was somewhat better for high contrast warnings. Ten per cent of the sample mistakenly recalled the warning where none existed. Conclusions: As demonstrated by substantially greater recall among ads that included health warnings over ads that had none, health warnings retained their value to consumers despite years of exposure (that can produce false recall). Larger health warnings would enhance recall, and the proposed model can be used to estimate potential recall that affects communication, perceived health risk, and behaviour modification. PMID:12034984
Cell wall microstructure, pore size distribution and absolute density of hemp shiv
Lawrence, M.; Ansell, M. P.; Hussain, A.
2018-01-01
This paper, for the first time, fully characterizes the intrinsic physical parameters of hemp shiv including cell wall microstructure, pore size distribution and absolute density. Scanning electron microscopy revealed microstructural features similar to hardwoods. Confocal microscopy revealed three major layers in the cell wall: middle lamella, primary cell wall and secondary cell wall. Computed tomography improved the visualization of pore shape and pore connectivity in three dimensions. Mercury intrusion porosimetry (MIP) showed that the average accessible porosity was 76.67 ± 2.03% and pore size classes could be distinguished into micropores (3–10 nm) and macropores (0.1–1 µm and 20–80 µm). The absolute density was evaluated by helium pycnometry, MIP and Archimedes' methods. The results show that these methods can lead to misinterpretation of absolute density. The MIP method showed a realistic absolute density (1.45 g cm−3) consistent with the density of the known constituents, including lignin, cellulose and hemi-cellulose. However, helium pycnometry and Archimedes’ methods gave falsely low values owing to 10% of the volume being inaccessible pores, which require sample pretreatment in order to be filled by liquid or gas. This indicates that the determination of the cell wall density is strongly dependent on sample geometry and preparation. PMID:29765652
Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel
2018-02-09
New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, the sample mass, and the particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two sample masses (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables, water adsorption and temperature increase, were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced both response variables, while particle size and biochar type only influenced the temperature.
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and a test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the overall power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
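To make the correlation effect concrete, the following Monte Carlo sketch illustrates joint TOST power for two correlated endpoints. It is not the paper's exact power function (which is derived from the joint distribution of the sample mean and variance for crossover and parallel designs); it assumes a parallel two-group design, bivariate-normal endpoint differences with a common SD, a z-based TOST per endpoint, and illustrative parameter values throughout.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def joint_tost_power(n, true_diff, sd, rho, margin=np.log(1.25),
                     alpha=0.05, sims=200_000):
    """Monte Carlo power to conclude equivalence on BOTH endpoints.

    Parallel two-group design with n subjects per group; the two endpoint
    differences (e.g. log-AUC and log-Cmax) are taken to be bivariate
    normal with common SD `sd` and correlation `rho`.  Each endpoint is
    tested with a z-based TOST at level `alpha`."""
    se = sd * np.sqrt(2.0 / n)                     # SE of each difference in means
    cov = np.array([[1.0, rho], [rho, 1.0]]) * se**2
    est = rng.multivariate_normal(mean=true_diff, cov=cov, size=sims)
    zcrit = norm.ppf(1 - alpha)
    ok = (est + zcrit * se < margin) & (est - zcrit * se > -margin)
    return ok.all(axis=1).mean()

# The naive product-of-powers rule corresponds to rho = 0; positive
# correlation raises the joint power, so the naive sample size is too large:
for rho in (0.0, 0.5, 0.9):
    print(rho, round(joint_tost_power(n=24, true_diff=[0.05, 0.05],
                                      sd=0.25, rho=rho), 3))
```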
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
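The tutorial's central quantity is the design effect of the CRXO design. The sketch below implements the standard design effect for a two-period, cross-sectional CRXO with equal cluster-period sizes, 1 + (m - 1)ρ_wp - m·ρ_bp, applied to a simple two-proportion z-test. This is a simplified reading of that literature, not the paper's own formulae, and all input values are illustrative.

```python
import math
from scipy.stats import norm

def crxo_total_n(p0, p1, m, icc_wp, icc_bp, alpha=0.05, power=0.9):
    """Approximate total subjects for a two-intervention, two-period,
    cross-sectional CRXO trial with a binary outcome.

    Inflates the individually randomised two-group total by the design
    effect 1 + (m - 1)*icc_wp - m*icc_bp, where m is the number of
    subjects per cluster-period, icc_wp the within-period ICC, and
    icc_bp the between-period ICC.  Assumes equal cluster-period sizes
    and a z approximation for two proportions."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p0 + p1) / 2
    n_total_ind = 4 * (za + zb) ** 2 * pbar * (1 - pbar) / (p1 - p0) ** 2
    de = 1 + (m - 1) * icc_wp - m * icc_bp
    n_total = math.ceil(n_total_ind * de)
    n_clusters = math.ceil(n_total / (2 * m))   # each cluster gives 2 periods of m
    return n_total, n_clusters

# Illustrative inputs: 10% vs 8% mortality, 300 patients per ICU-period.
print(crxo_total_n(p0=0.10, p1=0.08, m=300, icc_wp=0.05, icc_bp=0.035))
```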
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials' susceptibility to ignition driven by exposure to external heat in an oxygen-enriched environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes of inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during testing, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol resulting from this evaluation is also presented. The final proposed improved protocol outlines an incremental-step method for determining optimal conditions, using increased sample sizes while observing test-system safety limits. The proposed improved test method increases confidence in results obtained with the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and of other conditions that may lead to false non-ignition results.
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
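As a minimal worked instance of such an a priori calculation (assuming a two-group comparison of a continuous outcome; the scenario and numbers are illustrative, not from the article), the statsmodels power API solves for the one missing component given the other four:

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample size for a two-group comparison of a continuous outcome:
# specify the standardised effect size (magnitude relative to variance),
# alpha, and the desired power, then solve for n per group.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d
                                   alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(round(n_per_group))  # ~64 per group for d = 0.5
```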
McRae, Jacqui M; Kirby, Nigel; Mertens, Haydyn D T; Kassara, Stella; Smith, Paul A
2014-07-23
The molecular size of wine tannins can influence astringency, and yet it has been unclear whether the standard methods for determining average tannin molecular weight (MW), including gel-permeation chromatography (GPC) and depolymerization reactions, are actually related to the size of the tannin under wine-like conditions. Small-angle X-ray scattering (SAXS) was therefore used to determine the molecular sizes and corresponding MWs of wine tannin samples from 3 and 7 year old Cabernet Sauvignon wines in a variety of wine-like matrices (5-15% and 100% ethanol; 0-200 mM NaCl; pH 3.0-4.0), and the results were compared with those from the standard methods. The SAXS results indicated that the tannin samples from the older wine were larger than those of the younger wine and that wine composition did not greatly affect tannin molecular size. The average tannin MWs as determined by GPC correlated strongly with the SAXS results, suggesting that this method does give a good indication of tannin molecular size under wine-like conditions. The MWs determined from the depolymerization reactions did not correlate as strongly with the SAXS results. To our knowledge, SAXS measurements have not previously been attempted for wine tannins.
Comparative tests of ectoparasite species richness in seabirds
Hughes, Joseph; Page, Roderic DM
2007-01-01
Background The diversity of parasites attacking a host varies substantially among different host species. Understanding the factors that explain these patterns of parasite diversity is critical to identifying the ecological principles underlying biodiversity. Seabirds (Charadriiformes, Pelecaniformes and Procellariiformes) and their ectoparasitic lice (Insecta: Phthiraptera) are ideal model groups in which to study correlates of parasite species richness. We evaluated the relative importance of morphological (body size, body weight, wingspan, bill length), life-history (longevity, clutch size), ecological (population size, geographical range) and behavioural (diving versus non-diving) variables as predictors of louse diversity across 413 seabird host species. Diversity was measured at the level of louse suborder, genus, and species, and uneven sampling of hosts was controlled for using literature citations as a proxy for sampling effort. Results The only variable consistently correlated with louse diversity was host population size, and to a lesser extent geographic range. Other variables such as clutch size, longevity, and morphological and behavioural variables including body mass showed inconsistent patterns dependent on the method of analysis. Conclusion The comparative analysis presented herein is (to our knowledge) the first to test correlates of parasite species richness in seabirds. We believe that the comparative data and phylogeny provide a valuable framework for testing future evolutionary hypotheses relating to the diversity and distribution of parasites on seabirds. PMID:18005412
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
An inexpensive and portable microvolumeter for rapid evaluation of biological samples.
Douglass, John K; Wcislo, William T
2010-08-01
We describe an improved microvolumeter (MVM) for rapidly measuring volumes of small biological samples, including live zooplankton, embryos, and small animals and organs. Portability and low cost make this instrument suitable for widespread use, including at remote field sites. Beginning with Archimedes' principle, which states that immersing an arbitrarily shaped sample in a fluid-filled container displaces an equivalent volume, we identified procedures that maximize measurement accuracy and repeatability across a broad range of absolute volumes. Crucial steps include matching the overall configuration to the size of the sample, using reflected light to monitor fluid levels precisely, and accounting for evaporation during measurements. The resulting precision is at least 100 times higher than in previous displacement-based methods. Volumes are obtained much faster than by traditional histological or confocal methods and without shrinkage artifacts due to fixation or dehydration. Calibrations using volume standards confirmed accurate measurements of volumes as small as 0.06 microL. We validated the feasibility of evaluating soft-tissue samples by comparing volumes of freshly dissected ant brains measured with the MVM and by confocal reconstruction.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
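A hedged sketch of the kind of adjustment described: in this line of work the approximate relative efficiency of unequal versus equal cluster sizes can be written RE ≈ 1 - CV²·λ(1 - λ) with λ = m̄ρ/(1 + (m̄ - 1)ρ), and the required number of clusters is inflated by 1/RE. This is a simplified reading, not the paper's exact second-order PQL results, and the parameter values are illustrative.

```python
def cluster_inflation(m_bar, icc, cv):
    """Approximate inflation in the number of clusters needed to offset
    varying cluster sizes in a cluster randomised trial:
        RE ~= 1 - cv**2 * lam * (1 - lam),
        lam = m_bar * icc / (1 + (m_bar - 1) * icc).
    Sketch of the approximation used in this line of work; exact results
    depend on the estimator (MQL/PQL) and the outcome model."""
    lam = m_bar * icc / (1 + (m_bar - 1) * icc)
    re = 1 - cv**2 * lam * (1 - lam)
    return 1 / re

print(round(cluster_inflation(m_bar=20, icc=0.05, cv=0.7), 3))  # ~1.14 (lam ~ 0.51)
print(round(1 / (1 - 0.7**2 / 4), 3))                           # worst case at lam = 0.5
```

At the worst case λ = 0.5 the inflation is 1/(1 - CV²/4), which for CV ≈ 0.7 is about 14 per cent, consistent with the rule of thumb quoted above.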
40 CFR 80.127 - Sample size guidelines.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency (continued), Air Programs (continued), Regulation of Fuels and Fuel Additives, Attest Engagements, § 80.127 Sample size guidelines. In performing the...
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
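For reference, the simplest approach the review starts from can be written in a few lines: compute the individually randomised sample size for a continuous outcome and inflate it by the design effect 1 + (m - 1)ρ. A minimal sketch assuming equal cluster sizes and a z approximation; the inputs are illustrative.

```python
import math
from scipy.stats import norm

def crt_sample_size(delta, sd, m, icc, alpha=0.05, power=0.8):
    """Simplest CRT calculation: individually randomised n per arm for a
    continuous outcome, inflated by the design effect 1 + (m - 1)*icc,
    assuming equal cluster sizes m and a z approximation."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_ind = 2 * (sd * (za + zb) / delta) ** 2      # per arm, individual randomisation
    n_arm = n_ind * (1 + (m - 1) * icc)            # per arm, cluster randomisation
    return math.ceil(n_arm), math.ceil(n_arm / m)  # subjects, clusters (per arm)

# Illustrative: standardised difference 0.3, 25 subjects per cluster, ICC 0.03.
print(crt_sample_size(delta=0.3, sd=1.0, m=25, icc=0.03))
```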
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qualheim, B.J.
This report presents the results of the geochemical reconnaissance sampling in the Kingman 1° x 2° quadrangle of the National Topographical Map Series (NTMS). Wet and dry sediment samples were collected throughout the 18,770-km² arid to semiarid area, and water samples at available streams, springs, and wells. Neutron activation analyses of uranium and trace elements and other measurements made in the field and laboratory are presented in tabular hardcopy and microfiche format. The report includes five full-size overlays for use with the Kingman NTMS 1:250,000 quadrangle. Water sampling sites, water-sample uranium concentrations, water-sample conductivity, sediment sampling sites, and sediment-sample total uranium and thorium concentrations are shown on the separate overlays. General geological and structural descriptions of the area are included, and known uranium occurrences in this quadrangle are delineated. Results of the reconnaissance are briefly discussed and related to rock types in the final section of the report. The results are suggestive of uranium mineralization in only two areas: the Cerbat Mountains and near some of the western intrusives.
Stackable differential mobility analyzer for aerosol measurement
Cheng, Meng-Dawn [Oak Ridge, TN; Chen, Da-Ren [Creve Coeur, MO
2007-05-08
A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at each of a plurality of different distances along its length. A volume between the first and the second electrode or grid, between the inlet or injection slit and a distal one of the plurality of sampling outlets, forms a classifying region, the first and second electrodes being charged to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and simultaneously classifies different size ranges of charged particles based on electric mobility.
Kristunas, Caroline A; Smith, Karen L; Gray, Laura J
2017-03-07
The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
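For comparison, the sketch below contrasts the standard equal-cluster-size design effect with a CV-adjusted design effect of the kind proposed for parallel CRTs, on which the paper's two candidate adjustments were based. This is illustrative only; as the simulations above found, carrying such adjustments over to SW-CRTs can considerably over-power the trial.

```python
def de_equal(m, icc):
    """Standard design effect for a parallel CRT with equal cluster sizes."""
    return 1 + (m - 1) * icc

def de_cv_adjusted(m_bar, icc, cv):
    """CV-adjusted design effect of the kind proposed for parallel CRTs
    with unequal cluster sizes (illustrative when applied to SW-CRTs;
    the simulations above suggest it is conservative there)."""
    return 1 + ((cv**2 + 1) * m_bar - 1) * icc

for cv in (0.0, 0.4, 0.8):
    print(cv, de_equal(20, 0.05), round(de_cv_adjusted(20, 0.05, cv), 2))
```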
Dumouchelle, D.H.; De Roche, Jeffrey T.
1991-01-01
Wright-Patterson Air Force Base, in southwestern Ohio, overlies a buried-valley aquifer. The U.S. Geological Survey installed 35 observation wells at 13 sites on the base from fall 1988 through spring 1990. Fourteen of the wells were completed in bedrock; the remaining wells were completed in unconsolidated sediments. Split-spoon and bedrock cores were collected from all of the bedrock wells. Shelby-tube samples were collected from four wells. The wells were drilled by either the cable-tool or rotary method. Data presented in this report include lithologic and natural-gamma logs and, for selected sediment samples, grain-size distributions and permeability. Final well-construction details, such as the total depth of the well, screened interval, and grouting details, also are presented.
Surface plasmon enhanced cell microscopy with blocked random spatial activation
NASA Astrophysics Data System (ADS)
Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun
2016-03-01
We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. Randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
Protection of obstetric dimensions in a small-bodied human sample.
Kurki, Helen K
2007-08-01
In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. (c) 2007 Wiley-Liss, Inc.
Steep discounting of delayed monetary and food rewards in obesity: a meta-analysis.
Amlung, M; Petker, T; Jackson, J; Balodis, I; MacKillop, J
2016-08-01
An increasing number of studies have investigated delay discounting (DD) in relation to obesity, but with mixed findings. This meta-analysis synthesized the literature on the relationship between monetary and food DD and obesity, with three objectives: (1) to characterize the relationship between DD and obesity in both case-control comparisons and continuous designs; (2) to examine potential moderators, including case-control v. continuous design, money v. food rewards, sample sex distribution, and sample age (younger versus older than 18 years); and (3) to evaluate publication bias. From 134 candidate articles, 39 independent investigations yielded 29 case-control and 30 continuous comparisons (total n = 10 278). Random-effects meta-analysis was conducted using Cohen's d as the effect size. Publication bias was evaluated using fail-safe N, Begg-Mazumdar and Egger tests, meta-regression of publication year and effect size, and imputation of missing studies. The primary analysis revealed a medium effect size across studies that was highly statistically significant (d = 0.43, p < 10^-14). None of the moderators examined yielded statistically significant differences, although notably larger effect sizes were found for studies with case-control designs, food rewards and child/adolescent samples. Limited evidence of publication bias was present, although the Begg-Mazumdar test and meta-regression suggested a slightly diminishing effect size over time. Steep DD of food and money appears to be a robust feature of obesity that is relatively consistent across the DD assessment methodologies and study designs examined. These findings are discussed in the context of research on DD in drug addiction, the neural bases of DD in obesity, and potential clinical applications.
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling-site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in rows and columns, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation against the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in rows and columns was infinite, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
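The arithmetic behind these numbers is a one-sample two-tailed test of the mean with the COV standing in for the standardised SD. The sketch below iterates the t-based formula and reproduces sample sizes in line with the quoted 24 (±15% error) and roughly 36-37 (±12% error); it is a reconstruction of the described calculation, not the authors' code.

```python
import math
from scipy.stats import norm, t

def n_for_ed50(cov, allowable_error, alpha=0.05, power=0.80):
    """Sample size so a one-sample two-tailed t-test can pin down the mean
    ED50 to within +/- allowable_error (as a fraction of the mean), given
    a coefficient of variation `cov`.  Starts from the z-based formula and
    refines with t quantiles."""
    n = math.ceil(((norm.ppf(1 - alpha / 2) + norm.ppf(power)) * cov
                   / allowable_error) ** 2)
    for _ in range(10):  # capped iteration; the update can oscillate by 1
        q = t.ppf(1 - alpha / 2, n - 1) + t.ppf(power, n - 1)
        n_new = math.ceil((q * cov / allowable_error) ** 2)
        if n_new == n:
            break
        n = n_new
    return n

print(n_for_ed50(0.25, 0.15))   # ~24, matching the abstract
print(n_for_ed50(0.25, 0.12))   # ~36-37
```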
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in literature. However, sample size method for such study design is sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate comparing with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
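Under complete observations, the abstract notes, the GEE formula reduces to the paired t-test calculation. That special case is a short computation (z approximation; the inputs below are illustrative):

```python
import math
from scipy.stats import norm

def paired_n(delta, sd, rho, alpha=0.05, power=0.80):
    """Pairs needed to detect a pre-post mean change `delta` when the two
    measurements share SD `sd` and correlation `rho`, so the within-pair
    difference has variance 2*sd**2*(1 - rho).  The complete-data special
    case the GEE formula reduces to; z approximation."""
    var_diff = 2 * sd**2 * (1 - rho)
    n = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 * var_diff / delta**2
    return math.ceil(n)

print(paired_n(delta=0.4, sd=1.0, rho=0.5))   # ~50 pairs for these inputs
```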
Constraining Ω0 with the Angular Size-Redshift Relation of Double-lobed Quasars in the FIRST Survey
NASA Astrophysics Data System (ADS)
Buchalter, Ari; Helfand, David J.; Becker, Robert H.; White, Richard L.
1998-02-01
In previous attempts to measure cosmological parameters from the angular size-redshift (θ-z) relation of double-lobed radio sources, the observed data have generally been consistent with a static Euclidean universe rather than with standard Friedmann models, and past authors have disagreed significantly as to what effects are responsible for this observation. These results and different interpretations may be due largely to a variety of selection effects and differences in the sample definitions destroying the integrity of the data sets, and inconsistencies in the analysis undermining the results. Using the VLA FIRST survey, we investigate the θ-z relation for a new sample of double-lobed quasars. We define a set of 103 sources, carefully addressing the various potential problems that, we believe, have compromised past work, including a robust definition of size and the completeness and homogeneity of the sample, and further devise a self-consistent method to assure accurate morphological classification and account for finite resolution effects in the analysis. Before focusing on cosmological constraints, we investigate the possible impact of correlations among the intrinsic properties of these sources over the entire assumed range of allowed cosmological parameter values. For all cases, we find apparent size evolution of the form l ~ (1 + z)c, with c ~ -0.8 +/- 0.4, which is found to arise mainly from a power-size correlation of the form l ~ Pβ (β ~ - 0.13 +/- 0.06) coupled with a power-redshift correlation. Intrinsic size evolution is consistent with zero. We also find that in all cases, a subsample with c ~ 0 can be defined, whose θ-z relation should therefore arise primarily from cosmological effects. These results are found to be independent of orientation effects, although other evidence indicates that orientation effects are present and consistent with predictions of the unified scheme for radio-loud active galactic nuclei. The above results are all confirmed by nonparametric analysis. Contrary to past work, we find that the observed θ-z relation for our sample is more consistent with standard Friedmann models than with a static Euclidean universe. Though the current data cannot distinguish with high significance between various Friedmann models, significant constraints on the cosmological parameters within a given model are obtained. In particular, we find that a flat, matter-dominated universe (Ω0 = 1), a flat universe with a cosmological constant, and an open universe all provide comparably good fits to the data, with the latter two models both yielding Ω0 ~ 0.35 with 1 σ ranges including values between ~0.25 and 1.0; the c ~ 0 subsamples yield values of Ω0 near unity in these models, though with even greater error ranges. We also examine the values of H0 implied by the data, using plausible assumptions about the intrinsic source sizes, and find these to be consistent with the currently accepted range of values. We determine the sample size needed to improve significantly the results and outline future strategies for such work.
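As background to the θ-z test, here is a minimal numeric sketch of the relation for a flat Friedmann model: θ(z) = l/d_A(z), with d_A the angular diameter distance obtained by integrating the Friedmann expansion rate. The parameter values (H0, Ωm, ΩΛ, intrinsic size) are illustrative, not the paper's fits; the point of the sketch is that θ(z) flattens at high z in Friedmann models, unlike the static Euclidean θ ∝ 1/z.

```python
import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 299792.458, 65.0          # km/s and km/s/Mpc (H0 is illustrative)
D_H = C_KMS / H0                      # Hubble distance in Mpc

def ang_size_mas(z, l_kpc=50.0, om=0.35, ol=0.65):
    """Angular size (milliarcsec) of a rod of intrinsic size l_kpc at
    redshift z in a flat Friedmann model: theta = l / d_A, where
    d_A = d_C / (1 + z) and d_C is the comoving distance."""
    E = lambda zz: np.sqrt(om * (1 + zz) ** 3 + ol)   # flat: om + ol = 1
    d_c = D_H * quad(lambda zz: 1.0 / E(zz), 0.0, z)[0]
    d_a = d_c / (1.0 + z)
    theta_rad = (l_kpc / 1000.0) / d_a                # l converted to Mpc
    return np.degrees(theta_rad) * 3.6e6              # degrees -> mas

for z in (0.5, 1.0, 2.0, 3.0):
    print(z, round(ang_size_mas(z), 1))
```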
Effect of Differential Item Functioning on Test Equating
ERIC Educational Resources Information Center
Kabasakal, Kübra Atalay; Kelecioglu, Hülya
2015-01-01
This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-23
... Request; ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery.'' AGENCY... precision requirements or power calculations that justify the proposed sample size, the expected response... Also include...
EFFECTS OF SIZE FRACTIONATED AMBIENT PM SAMPLES ON INDUCTION OF PULMONARY ALLERGY IN MICE
There is increasing evidence that exposure to certain air pollutants including ozone and diesel exhaust can enhance allergic sensitization to allergens. Previous work from our laboratory has shown that exposure to residual oil fly ash or its associated transition metals can ...
Neurobehavioral studies pose unique challenges for dose-response modeling, including small sample size and relatively large intra-subject variation, repeated measurements over time, multiple endpoints with both continuous and ordinal scales, and time dependence of risk characteri...
How Large Should a Statistical Sample Be?
ERIC Educational Resources Information Center
Menil, Violeta C.; Ye, Ruili
2012-01-01
This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
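The calculation tabulated in the article is the standard Cochran-style sample size for a proportion with a finite population correction. Re-expressed here in Python rather than the article's C++ (the formula and defaults below are the textbook ones, assumed rather than taken from the article):

```python
import math

def sample_size_proportion(N, e, z=1.96, p=0.5):
    """Sample size for estimating a population proportion to within margin
    of error e at the confidence level implied by z, with the finite
    population correction for population size N.  p = 0.5 is the most
    conservative (largest-variance) choice."""
    n0 = z**2 * p * (1 - p) / e**2               # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

for N in (500, 5_000, 50_000):
    print(N, sample_size_proportion(N, e=0.05))  # 218, 357, 382 or so
```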
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
Sample size, confidence, and contingency judgement.
Clément, Mélanie; Mercier, Pierre; Pastò, Luigi
2002-06-01
According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.
Methods to increase reproducibility in differential gene expression via meta-analysis
Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh
2017-01-01
Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis, and clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a 'silver standard' of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We showed relatively greater reproducibility with more-stringent effect size thresholds combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets, even when controlling for sample size. PMID:27634930
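A minimal sketch of the random-effects pooling step such meta-analyses build on (a generic DerSimonian-Laird implementation, not the paper's three specific models; the input numbers are made up for illustration):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with DerSimonian-Laird random effects:
    estimate the between-study variance tau2 from Cochran's Q, then use
    inverse-variance weights w_i = 1/(v_i + tau2)."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)        # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)     # method-of-moments tau^2
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

print(dersimonian_laird([0.30, 0.10, 0.45, 0.22], [0.02, 0.03, 0.05, 0.01]))
```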
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
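The sample-size model's signature prediction is easy to state computationally: if a fixed pool of S noisy samples is split over n items and an item's d′ scales as the square root of its allocation, then Σd′² is invariant across set sizes, while an attention-weighted split redistributes sensitivity without changing the total. A toy illustration with unit scaling (not a fit to the reported data):

```python
import numpy as np

def dprimes(total_samples, n_items, attn_weight=None):
    """Per-item sensitivity under the sample-size model: d' for an item is
    proportional to sqrt(samples allocated to it).  If attn_weight is set,
    one item grabs that share of the pool and the rest split the remainder
    (the attention-weighted variant)."""
    if attn_weight is None:
        alloc = np.full(n_items, total_samples / n_items)
    else:
        rest = total_samples * (1 - attn_weight) / (n_items - 1)
        alloc = np.r_[total_samples * attn_weight, np.full(n_items - 1, rest)]
    return np.sqrt(alloc)

for n in (2, 3, 4):
    d = dprimes(100, n)
    print(n, np.round(d, 2), round(float(np.sum(d**2)), 1))  # sum stays at 100
```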
NASA Astrophysics Data System (ADS)
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
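A sketch of the kind of nonparametric distributional comparison described, using the two-sample Kolmogorov-Smirnov test on per-particle aspect ratios. The data here are synthetic lognormal stand-ins (same scale parameter, different width, echoing the finding above), not the interlaboratory TEM measurements:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-particle aspect ratios measured from TEM
# images of two gold nanorod samples: similar scale, different width.
sample_a = rng.lognormal(mean=np.log(3.6), sigma=0.10, size=400)
sample_b = rng.lognormal(mean=np.log(3.6), sigma=0.18, size=400)

# The KS test compares full cumulative distributions, so it is sensitive
# to width differences even when the central (scale) parameters agree.
stat, p = ks_2samp(sample_a, sample_b)
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
```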
Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A
2014-05-01
Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre- and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
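The quoted savings follow from compound-symmetry algebra: relative to a single post measurement, the baseline-adjusted mean of k follow-ups has variance factor f(ρ) = (1 + (k - 1)ρ)/k - ρ², which is maximised (most conservative) at ρ = (k - 1)/(2k). The sketch below reproduces the 44%, 56%, and 61% reductions for k = 2, 3, 4:

```python
def conservative_variance_factor(k):
    """Variance of the baseline-adjusted mean of k follow-ups, relative to
    a single post measurement, under compound symmetry with correlation
    rho: f(rho) = (1 + (k-1)*rho)/k - rho**2.  The conservative rho
    maximises f; solving f'(rho) = 0 gives rho = (k-1)/(2k)."""
    rho = (k - 1) / (2 * k)
    return (1 + (k - 1) * rho) / k - rho**2

for k in (2, 3, 4):
    f = conservative_variance_factor(k)
    print(k, round(f, 4), f"sample size reduction ~{1 - f:.0%}")
    # k=2 -> 0.5625 (~44%), k=3 -> 0.4444 (~56%), k=4 -> 0.3906 (~61%)
```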
Synthesis of macroporous structures
Stein, Andreas; Holland, Brian T.; Blanford, Christopher F.; Yan, Hongwei
2004-01-20
The present application discloses a method of forming an inorganic macroporous material. In some embodiments, the method includes: providing a sample of organic polymer particles having a particle size distribution of no greater than about 10%; forming a colloidal crystal template of the sample of organic polymer particles, the colloidal crystal template including a plurality of organic polymer particles and interstitial spaces therebetween; adding an inorganic precursor composition including a noncolloidal inorganic precursor to the colloidal crystal template such that the precursor composition permeates the interstitial spaces between the organic polymer particles; converting the noncolloidal inorganic precursor to a hardened inorganic framework; and removing the colloidal crystal template from the hardened inorganic framework to form a macroporous material. Inorganic macroporous materials are also disclosed.
Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M
2012-04-01
Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. Samples sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power.
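The 'average power integrated over specified prior distributions' is what is often called assurance, and it is straightforward to approximate by Monte Carlo. The sketch below uses a generic event-driven approximation (the SE of a log hazard ratio is roughly 2/√events, the Schoenfeld approximation) with a purely illustrative prior; it is not the VIEW trial's outcome model or prior specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def power_events(n_events, hr, alpha=0.05):
    """Approximate power of a 1:1 event-driven comparison: the estimated
    log hazard ratio has SE ~ 2/sqrt(events)."""
    z = np.sqrt(n_events) * np.abs(np.log(hr)) / 2.0
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

# Assurance: average the power over a prior on the effect size.
# The prior below is illustrative only.
hr_prior = np.exp(rng.normal(loc=np.log(0.80), scale=0.05, size=100_000))
print("power at HR = 0.80:", round(float(power_events(800, 0.80)), 3))
print("average power     :", round(float(power_events(800, hr_prior).mean()), 3))
```

As the abstract's comparison of 92.7% versus 89.9% illustrates, averaging over a prior typically lowers the single-point power estimate, because power is concave in the effect size near its design value.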
Davenport, M.S.
1993-01-01
Water and bottom-sediment samples were collected at 26 sites in the 65-square-mile High Point Lake watershed area of Guilford County, North Carolina, from December 1988 through December 1989. Sampling locations included 10 stream sites, 8 lake sites, and 8 ground-water sites. Generally, six steady-flow samples were collected at each stream site and three storm samples were collected at five sites. Four lake samples and eight ground-water samples also were collected. Chemical analyses of stream and lake sediments and particle-size analyses of lake sediments were performed once during the study. Most stream and lake samples were analyzed for field characteristics, nutrients, major ions, trace elements, total organic carbon, and chemical-oxygen demand. Analyses were performed to detect concentrations of 149 selected organic compounds, including acid and base/neutral extractable and volatile constituents and carbamate, chlorophenoxy acid, triazine, organochlorine, and organophosphorus pesticides and herbicides. Selected lake samples were analyzed for all constituents listed in the Safe Drinking Water Act of 1986, including Giardia, Legionella, radiochemicals, asbestos, and viruses. Various chromatograms from organic analyses were submitted to computerized library searches. The results of these and all other analyses presented in this report are in tabular form.
Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S
2015-01-01
Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE), along with network size imputation, allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier, and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908
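The information SS-PSE exploits can be seen in a toy simulation. The sketch below is a simplified illustration, not the SS-PSE estimator itself: respondent-driven sampling is approximated as successive sampling without replacement with probability proportional to degree, and all population values are invented. Because high-degree members tend to be recruited early, reported network sizes drift downward over the recruitment order, and the speed of that decline carries information about the size of the hidden population.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented hidden population with a right-skewed degree distribution.
N_true, n_sample = 2000, 400
degrees = rng.lognormal(mean=1.5, sigma=0.6, size=N_true).round() + 1

# Successive sampling without replacement, proportional to degree:
# a stylized stand-in for recruitment order in respondent-driven sampling.
order = rng.choice(N_true, size=n_sample, replace=False,
                   p=degrees / degrees.sum())
sampled = degrees[order]

# Mean reported network size in successive quarters of the sample:
# a visible decline whose steepness depends on N_true.
for k, chunk in enumerate(np.array_split(sampled, 4), start=1):
    print(f"quarter {k}: mean degree = {chunk.mean():.2f}")
```

The actual SS-PSE method inverts this relationship in a Bayesian model, combining the observed decline with a prior on population size; the snippet only demonstrates the signal being leveraged.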
Granberg, Sarah; Dahlström, Jennie; Möller, Claes; Kähäri, Kim; Danermark, Berth
2014-02-01
To review the literature in order to identify outcome measures used in research on adults with hearing loss (HL) as part of the ICF Core Sets development project, and to describe the study and population characteristics of the reviewed studies. A systematic review methodology was applied using multiple databases. A comprehensive search was conducted and two search pools were created, pool I and pool II. The study population included adults (≥ 18 years of age) with HL and oral language as the primary mode of communication. A total of 122 studies were included. Outcome measures were distinguished by 'instrument type', and 10 types were identified. In total, 246 (pool I) and 122 (pool II) different measures were identified, and only approximately 20% were extracted twice or more. Most measures were related to speech recognition. Fifty-one different questionnaires were identified. Many studies used small sample sizes, and the sex of participants was not revealed in several studies. The low prevalence of identified measures reflects a lack of consensus regarding the optimal outcome measures to use in audiology. The implications of small sample sizes and of the lack of sex differentiation and description within the included articles are discussed.
Bayesian evaluation of effect size after replicating an original study
van Aert, Robbie C. M.; van Assen, Marcel A. L. M.
2017-01-01
The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. Both the original study and its replication were statistically significant for 36.1% of study pairs in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of the included studies, especially in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
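The core computation can be sketched in a few lines. The snippet below is a rough reimplementation in the spirit of the method, not the authors' R code: it assumes correlation effect sizes analyzed on the Fisher-z scale, snapshot values of rho = 0, 0.1, 0.3, and 0.5, equal prior model probabilities, a one-sided significance condition on the original study, and invented input data.

```python
import numpy as np
from scipy import stats

def snapshot_posteriors(r_orig, n_orig, r_rep, n_rep,
                        effects=(0.0, 0.1, 0.3, 0.5), alpha=0.05):
    """Posterior probabilities of zero/small/medium/large true correlations,
    conditioning on the original study being significant (one-sided):
    an illustrative sketch in the spirit of the snapshot hybrid method."""
    z_o, z_r = np.arctanh(r_orig), np.arctanh(r_rep)   # Fisher z transforms
    se_o, se_r = 1 / np.sqrt(n_orig - 3), 1 / np.sqrt(n_rep - 3)
    z_cut = stats.norm.ppf(1 - alpha) * se_o           # significance cutoff
    like = []
    for rho in effects:
        mu = np.arctanh(rho)
        # Original study: normal density truncated below at the cutoff,
        # which adjusts for the fact that only significant results appear.
        denom = 1 - stats.norm.cdf(z_cut, loc=mu, scale=se_o)
        l_orig = stats.norm.pdf(z_o, loc=mu, scale=se_o) / denom
        l_rep = stats.norm.pdf(z_r, loc=mu, scale=se_r)  # replication: untruncated
        like.append(l_orig * l_rep)
    like = np.array(like)
    return like / like.sum()   # equal priors -> normalized likelihoods

post = snapshot_posteriors(r_orig=0.30, n_orig=40, r_rep=0.05, n_rep=120)
for rho, p in zip((0.0, 0.1, 0.3, 0.5), post):
    print(f"rho = {rho}: posterior probability {p:.3f}")
```

With a just-significant original study and a near-zero replication, the posterior mass concentrates on the zero and small effects, which is the kind of evidence accumulation for a true null that significance testing alone cannot provide.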
Effect of roll hot press temperature on crystallite size of PVDF film
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra
2014-03-24
PVDF films were fabricated using a hot roll press. Samples were prepared at nine different temperatures in order to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. The samples were characterized by X-ray diffraction to obtain their diffraction patterns, and the crystallite size of each sample was then determined from its pattern using the Scherrer equation. The calculated crystallite sizes for samples processed at temperatures from 130 °C up to 170 °C increased from 7.2 nm up to 20.54 nm. These results show that increasing the temperature also increases the crystallite size of the sample: a higher temperature produces a higher degree of crystallization in the PVDF film, so the crystallite size increases as well. This indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, as previously studied by Nakagawa.
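The Scherrer step is a one-line calculation. Below is a minimal sketch assuming Cu K-alpha radiation (lambda = 0.15406 nm) and a shape factor K = 0.9; the peak position and width are invented values chosen to land near the abstract's lower crystallite size, since the actual inputs are not reported.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size (nm) from XRD peak broadening via the Scherrer
    equation: tau = K * lambda / (beta * cos(theta)), where beta is the
    peak full width at half maximum in radians and theta the Bragg angle."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical PVDF reflection near 2-theta = 20.6 deg with 1.1 deg FWHM
print(f"crystallite size: {scherrer_size(1.1, 20.6):.1f} nm")
```

Narrower peaks (smaller FWHM) yield larger crystallite sizes, which is why the sharpening of the diffraction peaks with rising roll temperature translates into the reported growth from 7.2 nm to 20.54 nm.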
The effect of precursor types on the magnetic properties of Y-type hexa-ferrite composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Chin Mo; Na, Eunhye; Kim, Ingyu
2015-05-07
With a magnetic composite containing uniform magnetic particles, we expect to realize good high-frequency soft magnetic properties. We produced needle-like goethite (α-FeOOH) nanoparticles with a nearly uniform diameter of 20 nm and length of 500 nm. Zn-doped Y-type hexa-ferrite samples were prepared by the solid state reaction method using the uniform goethite and non-uniform hematite (Fe{sub 2}O{sub 3}) with sizes of <1 μm, respectively. Micrographs observed by scanning electron microscopy show that more uniform hexagonal plates are formed in the ZYG sample (Zn-doped Y-type hexa-ferrite prepared with uniform goethite) than in the ZYH sample (Zn-doped Y-type hexa-ferrite prepared with non-uniform hematite). The permeability (μ′) and loss tangent (tan δ) at 2 GHz are 2.31 and 0.07 in the ZYG sample and 2.0 and 0.07 in the ZYH sample, respectively. We observe that permeability and loss tangent are strongly related to particle size and uniformity, based on nucleation, growth, and the two magnetizing mechanisms of spin rotation and domain wall motion. The complex permeability spectra can also be numerically separated into spin rotational and domain wall resonance components.
Breault, Robert F.; Cooke, Matthew G.; Merrill, Michael
2004-01-01
The U.S. Geological Survey, in cooperation with the Massachusetts Executive Office of Environmental Affairs Department of Fish and Game Riverways Program and the U.S. Environmental Protection Agency, studied sediment and water quality in the lower Neponset River, a tributary to Boston Harbor. Grab and core samples of sediment were tested for elements and organic compounds, including polycyclic aromatic hydrocarbons, organochlorine pesticides, and polychlorinated biphenyls. Physical properties of the sediment samples, including grain size, were also measured. Selected sediment-core samples were tested for reactive sulfides and metals by means of the toxicity characteristic leaching procedure, a test relevant to sediment disposal. Water quality with respect to polychlorinated biphenyl contamination was determined by testing samples collected by PISCES passive water-column samplers for polychlorinated biphenyl congeners. Total concentrations of polychlorinated biphenyls were calculated by congener and by Aroclor.
NASA Technical Reports Server (NTRS)
Whitson, Peggy A. (Inventor); Clift, Vaughan L. (Inventor)
1999-01-01
The present invention provides a method and apparatus for separating a blood sample having a volume of up to about 20 milliliters into cellular and acellular fractions. The apparatus includes a housing divided by a fibrous filter into a blood sample collection chamber, having a volume of at least about 1 milliliter, and a serum sample collection chamber. The fibrous filter has a pore size of less than about 3 microns and is coated with a mixture including between about 1 and 40 wt/vol% mannitol and between about 0.1 and 15 wt/vol% plasma fraction protein (or an animal or vegetable equivalent thereof). The coating causes the cellular fraction to be trapped intact on the fibrous filter by the small pores, while the acellular fraction passes through the filter for collection in unaltered form from the serum sample collection chamber.
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model of porous ceramics based on the movable cellular automaton method, a particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties, at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to build up the properties step by step from the lowest scale to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
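Passing properties between scale levels hinges on fitting a Weibull distribution to the strengths of the representative samples computed at each level. The paper does not describe its estimation procedure, so the sketch below uses the common median-rank regression approach with invented strength data as a stand-in for the simulated representative volumes.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_weibull(strengths):
    """Estimate the Weibull modulus m and scale sigma_0 from sample
    strengths by linear regression of ln(-ln(1 - F)) on ln(sigma),
    using median-rank plotting positions for F."""
    x = np.log(np.sort(strengths))
    n = len(strengths)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)                    # slope = Weibull modulus
    return m, np.exp(-c / m)                      # intercept -> sigma_0

# Stand-in for the strengths of representative volumes at one scale level
strengths = rng.weibull(a=8.0, size=50) * 120.0   # MPa, illustrative
m, s0 = fit_weibull(strengths)
print(f"Weibull modulus m = {m:.2f}, scale sigma_0 = {s0:.1f} MPa")
```

The fitted pair (m, sigma_0) summarizes the scatter of strengths at one level and is what feeds the effective-property description used at the next, coarser level.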
Assessment of sampling stability in ecological applications of discriminant analysis
Williams, B.K.; Titus, K.
1988-01-01
A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. The simulations indicated that stable loadings require group sample sizes of roughly three times the number of variables or more. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.
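The kind of simulation described can be reproduced in miniature. The sketch below is not the authors' factorial design; it invents a simple two-group setting with five variables and a fixed mean separation, and tracks how the spread of the normalized discriminant coefficients across repeated samples shrinks as the group sample size grows past the three-per-variable mark.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def loading_sd(n_per_group, n_vars=5, delta=1.0, reps=200):
    """Mean SD of the discriminant coefficients across repeated samples:
    a crude measure of the sampling stability of the loadings."""
    coefs = []
    for _ in range(reps):
        X0 = rng.normal(0.0, 1.0, (n_per_group, n_vars))
        X1 = rng.normal(delta / np.sqrt(n_vars), 1.0, (n_per_group, n_vars))
        X = np.vstack([X0, X1])
        y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]
        w = LinearDiscriminantAnalysis().fit(X, y).coef_.ravel()
        if w.sum() < 0:                  # fix arbitrary sign of the axis
            w = -w
        coefs.append(w / np.linalg.norm(w))   # remove arbitrary scale
    return np.vstack(coefs).std(axis=0).mean()

for n in (10, 15, 25, 50):   # group sizes of 2x to 10x the 5 variables
    print(f"n per group = {n:3d}: mean coefficient SD = {loading_sd(n):.3f}")
```

The coefficient scatter drops steadily with group size, illustrating why a minimum of about three observations per variable in each group is a sensible floor rather than a target.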
Evaluating change in bruise colorimetry and the effect of subject characteristics over time.
Scafide, Katherine R N; Sheridan, Daniel J; Campbell, Jacquelyn; Deleon, Valerie B; Hayat, Matthew J
2013-09-01
Forensic clinicians are routinely asked to estimate the age of cutaneous bruises. Unfortunately, existing research on noninvasive methods to date bruises has mostly been limited to relatively small, homogeneous samples or cross-sectional designs. The purpose of this prospective, foundational study was to examine change in bruise colorimetry over time and to evaluate the effects of bruise size, skin color, sex, and local subcutaneous fat on that change. Bruises were created by controlled application of a paintball pellet to 103 adult, healthy volunteers. Daily colorimetry measures were obtained for four consecutive days using the Minolta Chroma Meter®. The sample was nearly equal by sex and skin color (light, medium, dark). Analysis included general linear mixed modeling (GLMM). Change in bruise colorimetry over time was significant for all three color parameters (L*a*b*), the most notable changes being the decrease in red (a*) and the increase in yellow (b*) starting at 24 h. Skin color was a significant predictor for all three colorimetry values, but sex and subcutaneous fat levels were not. Bruise size was a significant predictor and moderator and may have accounted for the lack of effect of sex or subcutaneous fat. The study results demonstrated the ability to model the change in bruise colorimetry over time in a diverse sample of healthy adults. Multiple factors, including skin color and bruise size, must be considered when assessing bruise color in relation to its age. This study supports the need for further research to build the science toward more accurate bruise age estimations.
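The analysis structure described here, repeated daily measures nested within subjects, maps directly onto a linear mixed model with subject-level random intercepts. The snippet below is a schematic sketch on simulated stand-in data (all values are invented), not the study's actual analysis, showing how such a model could be fit with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated stand-in for the design: 103 subjects, one bruise each,
# yellow colorimetry (b*) measured daily for four days.
rows = []
for sid in range(103):
    skin = rng.choice(["light", "medium", "dark"])
    u = rng.normal(0, 2.0)                         # subject random intercept
    base = {"light": 14.0, "medium": 12.0, "dark": 9.0}[skin]
    for day in range(4):
        b_star = base + u + 1.5 * day + rng.normal(0, 1.0)  # yellow rises
        rows.append({"subject": sid, "skin": skin, "day": day,
                     "b_star": b_star})
df = pd.DataFrame(rows)

# Linear mixed model: fixed effects for day and skin colour, random
# intercept per subject (analogous in spirit to the study's GLMM).
model = smf.mixedlm("b_star ~ day + C(skin)", df, groups=df["subject"])
print(model.fit().summary())
```

The random intercept absorbs stable subject-to-subject differences, so the fixed effect for day isolates the within-subject colorimetry trajectory that bruise-age estimation depends on.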
ROC curves in clinical chemistry: uses, misuses, and possible solutions.
Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H
2004-07-01
ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
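One simple remedy for the missing confidence intervals is a patient-level bootstrap of the ROC area. The sketch below uses invented analyte data and is one reasonable approach among several (the article itself surveys software and alternatives); it resamples patients with replacement, keeping each patient's test result paired with their disease status.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)

# Illustrative analyte values: 150 controls, 60 diseased cases shifted up.
y = np.r_[np.zeros(150, dtype=int), np.ones(60, dtype=int)]
x = np.r_[rng.normal(1.0, 1.0, 150), rng.normal(1.8, 1.0, 60)]

auc = roc_auc_score(y, x)

# Percentile bootstrap CI for the ROC area: resample patients with
# replacement so the (result, status) pairing stays intact.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:     # a resample needs both classes
        continue
    boot.append(roc_auc_score(y[idx], x[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```

Reporting the interval alongside the point estimate makes clear how much of an apparently impressive ROC area could be sampling noise, which is precisely the gap the review identifies in over a third of the surveyed studies.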