Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. Objectives: to explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
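To make the power bands above concrete, here is a minimal sketch of the underlying calculation using statsmodels (the SMD values, alpha, and power are the conventional settings the review assesses; the code is illustrative, not the authors'):

```python
# Per-arm sample size for a two-sample t-test to detect a given
# standardized mean difference (SMD) at 80% power, two-sided alpha 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for smd in (0.3, 0.5, 0.8):
    n1 = analysis.solve_power(effect_size=smd, alpha=0.05, power=0.80, ratio=1.0)
    print(f"SMD = {smd}: n per arm ~ {n1:.0f}, total ~ {2 * n1:.0f}")
```

Under these settings, detecting an SMD of 0.3 requires roughly 350 participants in total, which puts the review's average trial size of 153 in context.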
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Behbahani, R.M.
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size, and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the particle size, pore size, pore volume, crystallinity, and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • Increasing temperature enhanced the textural properties of ZIF-8 samples. • Decreasing the MeIM/Zn²⁺ ratio enhanced the textural properties of ZIF-8 samples.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
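A Bonferroni-corrected normal-approximation sketch reproduces the quoted figures (the paper provides an Excel calculator; this Python stand-in assumes a family-wise alpha of 0.05 and 80% power):

```python
# How the required sample size scales with the number of tests m under
# a Bonferroni correction; reproduces the 13% and 70% figures above.
from scipy.stats import norm

def n_scale(m, alpha=0.05, power=0.80):
    """Sample size multiplier, up to the common effect/variance term."""
    z_a = norm.isf(alpha / (2 * m))   # Bonferroni-adjusted critical value
    z_b = norm.isf(1 - power)
    return (z_a + z_b) ** 2

print(f"10 tests vs 1:    +{100 * (n_scale(10) / n_scale(1) - 1):.0f}%")
print(f"1e7 vs 1e6 tests: +{100 * (n_scale(1e7) / n_scale(1e6) - 1):.0f}%")
```

The multiplier grows with the square of the critical values, which is why going from one million to ten million tests costs far less than going from one test to ten.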
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m²). Detection probability (the probability of capturing ≥1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
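A miniature version of this kind of simulation can be written directly in Python rather than a GIS; the density, plot area, and corer size below are illustrative assumptions:

```python
# Simulate core sampling of randomly placed benthic items and estimate
# density from n circular cores; repeat to get bias and CV.
import numpy as np

rng = np.random.default_rng(1)
true_density = 1000            # items per m^2 (assumed)
area = 10.0                    # 10 m^2 square plot (assumed)
pts = rng.uniform(0, np.sqrt(area), size=(int(true_density * area), 2))

def core_estimate(n_cores, core_area_cm2):
    r = np.sqrt(core_area_cm2 / 1e4 / np.pi)          # core radius in m
    centres = rng.uniform(r, np.sqrt(area) - r, size=(n_cores, 2))
    counts = [np.sum(np.hypot(*(pts - c).T) <= r) for c in centres]
    return np.mean(counts) / (np.pi * r ** 2)         # items per m^2

ests = [core_estimate(n_cores=20, core_area_cm2=50) for _ in range(200)]
print(f"mean = {np.mean(ests):.0f}/m^2, CV = {np.std(ests) / np.mean(ests):.2f}")
```

Sweeping `n_cores` and `core_area_cm2` over a grid lets one explore the bias/precision trade-offs the study reports.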
Effect of roll hot press temperature on crystallite size of PVDF film
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra
2014-03-24
PVDF films were fabricated using a hot roll press. Samples were prepared at nine different temperatures to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. Diffraction patterns were obtained by X-ray diffraction characterization, and crystallite sizes were then calculated from these patterns using the Scherrer equation. From the experimental results and calculations, the crystallite size of the samples increased from 7.2 nm to 20.54 nm as the temperature increased from 130 °C to 170 °C. These results show that increasing temperature also increases the crystallite size of the sample. This occurs because higher temperatures produce a higher degree of crystallization in the PVDF film, so the crystallite size also increases. This condition indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, as previously studied by Nakagawa.
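The Scherrer calculation mentioned above is compact enough to sketch; the wavelength assumes a Cu K-α source, and the peak values are illustrative, not taken from the paper:

```python
# Crystallite size from XRD peak broadening via the Scherrer equation
# D = K * lambda / (beta * cos(theta)).
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from a peak FWHM in degrees 2-theta."""
    beta = math.radians(fwhm_deg)             # FWHM in radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# e.g. a PVDF peak near 2-theta = 20.6 degrees (illustrative values)
print(f"{scherrer_size_nm(fwhm_deg=0.45, two_theta_deg=20.6):.1f} nm")
```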
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
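The Jensen's-inequality effect is easy to demonstrate with a toy two-stage matrix (the vital rates and matrix structure below are invented for illustration, not the study's):

```python
# Bias in the dominant eigenvalue (lambda) when a survival rate is
# estimated from n individuals; bias shrinks as n grows.
import numpy as np

rng = np.random.default_rng(0)
s_true, f_true = 0.5, 1.2            # assumed juvenile survival, fecundity

def lam(s, f):
    A = np.array([[0.0, f],          # fecundity on the top row
                  [s,   0.6]])       # juvenile survival s, adult survival 0.6
    return np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius

true_lambda = lam(s_true, f_true)
for n in (10, 50, 250, 1000):        # individuals used to estimate survival
    s_hat = rng.binomial(n, s_true, size=5000) / n
    bias = np.mean([lam(s, f_true) for s in s_hat]) - true_lambda
    print(f"n={n:>4}: mean bias in lambda = {bias:+.4f}")
```

Because lambda is a concave function of survival here, averaging over noisy survival estimates pulls the mean below the true value, and the bias shrinks as the number of sampled individuals grows.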
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually yield diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, this study seeks (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, with increased evenness resulting in increased minimal sample sizes. Sample sizes as small as 58 individuals are thus sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., a small university, or limited time to conduct the research), statistically viable results can still be obtained with less of an investment.
NASA Astrophysics Data System (ADS)
Atapour, Hadi; Mortazavi, Ali
2018-04-01
The effects of textural characteristics, especially grain size, on the index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Index rock properties, including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs), were determined. Experimental results showed that grain size has significant effects on the index properties of weakly solidified sandstones. The porosity of the samples is inversely related to grain size and decreases linearly as grain size increases. In contrast, a direct relationship was observed between grain size and dry bulk density: bulk density increased with increasing median grain size. Furthermore, the point load strength index and SHV of the samples increased as a result of grain size increase. These observations are indirectly related to the porosity decrease as a function of median grain size.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
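A minimal sketch of the blinded re-estimation step follows; the design values (delta, sigma, pilot size) are illustrative assumptions, not the paper's:

```python
# Blinded sample size re-estimation: pool the internal pilot without
# unblinding, compute the one-sample (lumped) variance, and plug it into
# the standard two-sample formula.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
delta, sigma, n_pilot = 5.0, 10.0, 40          # assumed design values

# Internal pilot: both arms pooled, treatment labels ignored (blinded).
pilot = np.concatenate([rng.normal(0, sigma, n_pilot // 2),
                        rng.normal(delta, sigma, n_pilot // 2)])
s2_blinded = np.var(pilot, ddof=1)   # overstates sigma^2 by about delta^2/4

z = norm.isf(0.025) + norm.isf(0.20)           # alpha = 0.05 two-sided, 80% power
n_per_arm = int(np.ceil(2 * s2_blinded * z ** 2 / delta ** 2))
print(f"blinded SD = {np.sqrt(s2_blinded):.1f}, re-estimated n per arm = {n_per_arm}")
```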
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes in performance, sales, markets, risks, social relations, or public opinions constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases, when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
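The parametric-bootstrap power calculation can be sketched as follows; the mean-to-variance relationship is an assumed stand-in for the empirical one the authors fitted, so the printed power will not reproduce the paper's exact figures:

```python
# Parametric-bootstrap power for detecting a mean Se concentration above
# a management threshold, with gamma-distributed fish tissue values.
import numpy as np
from scipy.stats import gamma, ttest_1samp

rng = np.random.default_rng(42)

def power(n_fish, threshold, true_mean, alpha=0.05, n_boot=2000):
    var = 0.25 * true_mean ** 2          # assumed mean-to-variance relation
    shape, scale = true_mean ** 2 / var, var / true_mean
    rejections = 0
    for _ in range(n_boot):
        x = gamma.rvs(shape, scale=scale, size=n_fish, random_state=rng)
        t, p = ttest_1samp(x, popmean=threshold, alternative='greater')
        rejections += (p < alpha)
    return rejections / n_boot

print(power(n_fish=8, threshold=4, true_mean=5))   # ~1 mg/kg above threshold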
Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran
2010-05-01
Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and edge density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already-mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte Carlo simulation was applied to study the performance of different designs. Random and systematic sampling were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of the Shannon's diversity estimator was shown to decrease as sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived, showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
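As a toy version of the Monte Carlo design described above, the following sketch point-samples a synthetic land-cover grid (the class proportions and grid size are assumptions) and tracks the RMSE of the Shannon diversity estimate:

```python
# Estimate Shannon's diversity of a mapped landscape by point sampling;
# RMSE against the full-map value shrinks as sample size grows.
import numpy as np

rng = np.random.default_rng(3)
cover = rng.choice(7, size=(500, 500),
                   p=[.35, .25, .15, .10, .08, .05, .02])  # 7 cover classes

def shannon(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

H_true = shannon(cover.ravel())
for n in (50, 200, 800):                       # number of sample points
    ests = [shannon(cover[rng.integers(0, 500, n), rng.integers(0, 500, n)])
            for _ in range(500)]
    rmse = np.sqrt(np.mean((np.array(ests) - H_true) ** 2))
    print(f"n={n:>3}: RMSE = {rmse:.4f}")
```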
Jorgenson, Andrew K; Clark, Brett
2013-01-01
This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, and 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.
Controlled synthesis and luminescence properties of CaMoO4:Eu3+ microcrystals
NASA Astrophysics Data System (ADS)
Xie, Ying; Ma, Siming; Wang, Yu; Xu, Mai; Lu, Chengxi; Xiao, Linjiu; Deng, Shuguang
2018-03-01
Pure tetragonal-phase Ca0.9MoO4:0.1Eu3+ (CaMoO4:Eu3+) microcrystals with varying particle sizes were prepared via a co-deposition in water/oil (w/o) phase method. The particle sizes of the as-prepared samples were controlled by calcination temperature and calcination time, and the crystallinity of the samples improves with increasing particle size. The luminescence properties of the CaMoO4:Eu3+ microcrystals were studied as a function of particle size. The results reveal that the intensity of the emission spectra of the CaMoO4:Eu3+ samples increases with increasing particle size, and the two are closely correlated. The luminescence lifetime is likewise size-dependent, decreasing from 0.637 ms to 0.447 ms as the particle size increases from 0.12 μm to 1.79 μm. This study not only provides information on the size-dependent luminescence properties of CaMoO4:Eu3+ but also gives a reference for potential applications in high-voltage electric porcelain materials.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the approach utilized, the species richness estimates obtained depend on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum-likelihood-based and rarefaction-curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near-full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
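The extrapolation idea can be sketched by fitting an asymptotic curve to richness estimates at increasing library sizes; the Michaelis-Menten form used here is an assumption, not the authors' exact fitting procedure, and the data points are invented:

```python
# Fit a saturating richness curve to estimates at several library sizes
# and read off the asymptote as a sample size-unbiased richness value.
import numpy as np
from scipy.optimize import curve_fit

lib_size = np.array([500, 1000, 2000, 4000, 8000, 13001])
richness = np.array([3200, 5600, 8600, 11400, 13600, 15009])  # illustrative

def mm(n, s_max, k):                 # Michaelis-Menten saturating curve
    return s_max * n / (k + n)

(s_max, k), _ = curve_fit(mm, lib_size, richness, p0=(20000, 5000))
print(f"estimated asymptotic richness ~ {s_max:.0f} species")
```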
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design, and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This is especially true for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value-of-information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be sufficient under the assumption of no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
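Wiener deconvolution, the estimation step named above, can be sketched in a few lines; the pulse shape, delay, noise level, and regularization constant are invented for illustration:

```python
# Wiener deconvolution: estimate a system response from a measured
# echo and a reference pulse, H = Y X* / (|X|^2 + lambda).
import numpy as np

def wiener_deconvolve(measured, reference, lam=1e-2):
    X, Y = np.fft.rfft(reference), np.fft.rfft(measured)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + lam)
    return np.fft.irfft(H, n=len(measured))

t = np.linspace(0, 1e-5, 1024)                       # 10 us window
ref = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 2e-6) / 5e-7) ** 2)
echo = 0.4 * np.roll(ref, 120) \
       + 0.02 * np.random.default_rng(0).normal(size=t.size)
uar = wiener_deconvolve(echo, ref)
print(f"peak response at sample {np.argmax(np.abs(uar))} (true delay: 120)")
```

The regularization term `lam` plays the role of a noise-to-signal ratio and keeps the division stable at frequencies where the reference pulse has little energy.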
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
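The sample size effect described here can be sketched with a simple learning-curve experiment; the synthetic data, feature count, and ridge penalty below are assumptions, not the HCP pipeline:

```python
# Cross-validated prediction accuracy vs. sample size for two of the
# compared algorithms (OLS and ridge) on wide synthetic features.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_total, n_feat = 700, 300                  # HCP-like cohort, wide features
X = rng.normal(size=(n_total, n_feat))
w = rng.normal(size=n_feat) * (rng.random(n_feat) < 0.1)   # sparse signal
y = X @ w + rng.normal(scale=2.0, size=n_total)

for n in (20, 50, 100, 300, 700):           # sub-sampled cohort sizes
    idx = rng.choice(n_total, n, replace=False)
    for name, model in [("OLS", LinearRegression()),
                        ("ridge", Ridge(alpha=10.0))]:
        r2 = cross_val_score(model, X[idx], y[idx], cv=5, scoring="r2").mean()
        print(f"n={n:>3} {name:>5}: CV R^2 = {r2:+.2f}")
```

With far fewer subjects than features, unregularized OLS collapses while ridge degrades gracefully, and both improve steadily as the cohort grows, mirroring the pattern the study reports.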
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample, and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
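For the common case of comparing two means, the determinants listed above combine into the standard formula, sketched here with illustrative numbers:

```python
# n per group = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z = norm.isf(alpha / 2) + norm.isf(1 - power)
    return 2 * (sigma * z / delta) ** 2

# e.g. detect a 10-unit difference with SD 20 (a medium effect, d = 0.5)
print(f"n per group ~ {n_per_group(delta=10, sigma=20):.0f}")   # ~63
```

Halving the detectable difference quadruples the required sample size, which is why the choice of effect size dominates the calculation.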
Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon
2016-01-01
Introduction Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies, stock market prices, to modern online tools such as TripAdvisor or Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as the major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874
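The resampling design described above can be sketched with simulated scorers standing in for the original 91 (the latent-quality model and noise level are assumptions):

```python
# Stability of a crowd's ranking as scorer sample size grows: resample
# scorers with replacement, track top-20 overlap and rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
quality = rng.normal(size=205)                     # latent idea quality
scores = quality + rng.normal(scale=1.5, size=(91, 205))  # 91 noisy scorers

full_mean = scores.mean(axis=0)
top20 = set(np.argsort(-full_mean)[:20])

for n in (15, 30, 45, 55, 90):
    overlap, rho = [], []
    for _ in range(500):
        sub = scores[rng.integers(0, 91, size=n)].mean(axis=0)
        overlap.append(len(top20 & set(np.argsort(-sub)[:20])))
        rho.append(spearmanr(full_mean, sub)[0])
    print(f"n={n:>2}: median top-20 overlap = {np.median(overlap):.0f}/20, "
          f"median rank corr = {np.median(rho):.2f}")
```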
Hydroxyapatite coatings containing Zn and Si on Ti-6Al-4V alloy by plasma electrolytic oxidation
NASA Astrophysics Data System (ADS)
Hwang, In-Jo; Choe, Han-Cheol
2018-02-01
In this study, hydroxyapatite coatings containing Zn and Si on Ti-6Al-4V alloy, prepared by plasma electrolytic oxidation, were investigated using various experimental instruments. The pore size depends on the electrolyte concentration, and the particle size and the number of pores increase in both the surface and pore regions. For the Zn/Si samples, the pore size was larger than that of the Zn samples. The maximum pore size decreased and the minimum pore size increased up to 10Zn/Si, and Zn and Si affect the formation of pore shapes. As the Zn ion concentration increases, the particle size tends to increase and the number of particles on the surface part decreases, whereas the size and number of particles on the pore part increase. Zn is mainly detected at the pore part, and Si is mainly detected at the surface part. The crystallite size of anatase increased with Zn ion concentration, whereas it decreased when Si ions were added.
Simulation analyses of space use: Home range estimates, variability, and sample size
Bekoff, Marc; Mech, L. David
1984-01-01
Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data are still needed.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of SD, the 70% UCL of SD, and the 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
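A small simulation along the lines of the study's design illustrates the problem (the population SD of 44 and the medium effect are taken from the abstract; the pilot size of 15 is an assumption):

```python
# How often a trial is underpowered when a pilot's sample SD is plugged
# into the two-group sample size formula.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
sigma, delta = 44.0, 0.5 * 44.0            # population SD, medium effect
z = norm.isf(0.025) + norm.isf(0.20)       # alpha = 0.05, planned power 80%

def actual_power(n_per_arm):
    return norm.sf(norm.isf(0.025) - delta / (sigma * np.sqrt(2 / n_per_arm)))

under = 0
for _ in range(10_000):
    s = np.std(rng.normal(0, sigma, size=15), ddof=1)   # one pilot sample
    n = int(np.ceil(2 * (s * z / delta) ** 2))          # planned n per arm
    under += actual_power(n) < 0.80
print(f"{100 * under / 10_000:.0f}% of trials underpowered")  # roughly half
```

Because the sample SD underestimates the population SD more often than not, roughly half of the simulated trials end up below their planned power, echoing the study's finding.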
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size
Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa
2016-01-01
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913
Size Effect on the Mechanical Properties of CF Winding Composite
NASA Astrophysics Data System (ADS)
Cui, Yuqing; Yin, Zhongwei
2017-12-01
The mechanical properties of filament winding composites are usually tested with NOL ring samples, but few studies have examined the effect of sample size on the measured mechanical properties. In this research, NOL ring samples of varying winding composite thickness, diameter, and geometry were prepared to investigate the size effect on the mechanical strength of carbon fiber (CF) winding composites. T700, T1000, M40, and M50 carbon fibers were adopted for the winding composite, with an epoxy resin matrix. Test results show that the tensile strength and ILSS of the composites decrease monotonically as thickness increases from 1 mm to 4 mm, while the mechanical strength of the composite samples increases monotonically as diameter increases from 100 mm to 189 mm. The mechanical strength of composite samples with two flat sides is higher than that of cyclic annular samples.
ERIC Educational Resources Information Center
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.
2013-01-01
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10–20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
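An iterative one-sample, two-tailed t-based calculation along the lines the authors describe reproduces their figures (a sketch, not the authors' code):

```python
# Smallest n with n >= ((t_{alpha/2,n-1} + t_{beta,n-1}) * COV / error)^2.
from scipy.stats import t

def n_required(cov=0.25, error=0.15, alpha=0.05, power=0.80):
    n = 5
    while True:
        crit = t.isf(alpha / 2, n - 1) + t.isf(1 - power, n - 1)
        if n >= (crit * cov / error) ** 2:
            return n
        n += 1

print(n_required())                  # 24 subjects for a +/-15% error
print(n_required(error=0.12))        # 37 subjects for a +/-12% error
```

Using the t distribution rather than the normal approximation slightly inflates the required n, which is what makes the small-sample answers (24 and 37) come out as quoted.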
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are larger sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a larger number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land, and grassland. Only 15.7% of the woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by the SNLS in the secondary forest, shrub land, and grassland, respectively. The increase in number of species with sampled area confirmed power-law relationships for the forest stand, the LNSS, and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection if samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. Increased sample size is also needed to obtain an increased sample area if the number of samples is to remain relatively low.
Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing
2017-01-01
Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11–15. PMID:28934362
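The stability criterion used here (average intraspecific distance flattening out as more individuals are added) can be reproduced by random subsampling of a pairwise distance matrix. A minimal sketch with a toy symmetric matrix; the names and values are hypothetical:

```python
import itertools
import random

def mean_pairwise_distance(dist, ids):
    pairs = list(itertools.combinations(sorted(ids), 2))
    return sum(dist[(a, b)] for a, b in pairs) / len(pairs)

def saturation_curve(dist, all_ids, reps=200, seed=0):
    """Mean intraspecific distance, averaged over random subsamples, as a
    function of sample size; the curve flattens once most haplotype
    diversity has been captured."""
    rng = random.Random(seed)
    return {n: sum(mean_pairwise_distance(dist, rng.sample(all_ids, n))
                   for _ in range(reps)) / reps
            for n in range(2, len(all_ids) + 1)}

# toy distance matrix over 6 individuals, keyed by sorted index pairs
ids = list(range(6))
gen = random.Random(1)
dist = {(a, b): gen.uniform(0.0, 0.02) for a, b in itertools.combinations(ids, 2)}
print(saturation_curve(dist, ids))
```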
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
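The motivating problem is that one fixed sample size cannot give uniform power over a range of plausible effects. A small illustration under a normal approximation with unit SD; the per-arm n and effect sizes are arbitrary:

```python
from scipy import stats

def power_two_arm(n_per_arm, delta, alpha=0.05):
    """Two-sample power, normal approximation, SD = 1."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    ncp = delta * (n_per_arm / 2) ** 0.5
    return 1 - stats.norm.cdf(z_crit - ncp)

# one fixed design evaluated over several plausible effect sizes
for delta in (0.2, 0.3, 0.4, 0.5):
    print(delta, round(power_two_arm(200, delta), 2))
# roughly 0.52 at delta = 0.2 but ~1.00 at delta = 0.5: far from robust
```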
A standardized sampling protocol for channel catfish in prairie streams
Vokoun, Jason C.; Rabeni, Charles F.
2001-01-01
Three alternative gears—an AC electrofishing raft, bankpoles, and a 15-hoop-net set—were used in a standardized manner to sample channel catfish Ictalurus punctatus in three prairie streams of varying size in three seasons. We compared these gears as to time required per sample, size selectivity, mean catch per unit effort (CPUE) among months, mean CPUE within months, effect of fluctuating stream stage, and sensitivity to population size. According to these comparisons, the 15-hoop-net set used during stable water levels in October had the most desirable characteristics. Using our catch data, we estimated the precision of CPUE and size structure by varying sample sizes for the 15-hoop-net set. We recommend that 11–15 repetitions of the 15-hoop-net set be used for most management activities. This standardized basic unit of effort will increase the precision of estimates and allow better comparisons among samples as well as increased confidence in management decisions.
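The kind of precision analysis behind the 11-15 repetition recommendation can be approximated by bootstrapping catch-per-set data. A minimal sketch with invented catches, not the study's data:

```python
import random

def cpue_relative_se(catches, n_sets, reps=5000, seed=1):
    """Bootstrap the relative standard error of mean CPUE when n_sets
    hoop-net sets are resampled from the observed catches."""
    rng = random.Random(seed)
    means = []
    for _ in range(reps):
        sample = [rng.choice(catches) for _ in range(n_sets)]
        means.append(sum(sample) / n_sets)
    grand = sum(means) / reps
    sd = (sum((m - grand) ** 2 for m in means) / reps) ** 0.5
    return sd / grand

catches = [0, 2, 5, 1, 3, 0, 7, 2, 4, 1, 6, 3]  # hypothetical fish per set
for n in (5, 11, 15):
    print(n, round(cpue_relative_se(catches, n), 2))  # precision improves with n
```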
Qualitative Meta-Analysis on the Hospital Task: Implications for Research
ERIC Educational Resources Information Center
Noll, Jennifer; Sharma, Sashi
2014-01-01
The "law of large numbers" indicates that as sample size increases, sample statistics become less variable and more closely estimate their corresponding population parameters. Different research studies investigating how people consider sample size when evaluating the reliability of a sample statistic have found a wide range of…
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green-body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, its concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
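The diminishing-returns argument can be made concrete by dividing projected value (here crudely proxied by statistical power) by total cost. A sketch under a normal approximation and hypothetical linear costs:

```python
from scipy import stats

def power_two_arm(n_per_arm, delta=0.5, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_crit - delta * (n_per_arm / 2) ** 0.5)

# hypothetical cost model: 50 units fixed plus 1 unit per subject (two arms)
for n in (10, 20, 40, 80, 160):
    cost = 50 + 2 * n
    print(n, round(power_two_arm(n) / cost, 4))
# value per unit cost peaks at a moderate n and then declines
```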
Characterization of the enhancement effect of Na2CO3 on the sulfur capture capacity of limestones.
Laursen, Karin; Kern, Arnt A; Grace, John R; Lim, C Jim
2003-08-15
It has been known for a long time that certain additives (e.g., NaCl, CaCl2, Na2CO3, Fe2O3) can increase the sulfur dioxide capture-capacity of limestones. In a recent study we demonstrated that very small amounts of Na2CO3 can be very beneficial for producing sorbents of very high sorption capacities. This paper explores what contributes to these significant increases. Mercury porosimetry measurements of calcined limestone samples reveal a change in the pore-size from 0.04-0.2 microm in untreated samples to 2-10 microm in samples treated with Na2CO3--a pore-size more favorable for penetration of sulfur into the particles. The change in pore-size facilitates reaction with lime grains throughout the whole particle without rapid plugging of pores, avoiding premature change from a fast chemical reaction to a slow solid-state diffusion controlled process, as seen for untreated samples. Calcination in a thermogravimetric reactor showed that Na2CO3 increased the rate of calcination of CaCO3 to CaO, an effect which was slightly larger at 825 degrees C than at 900 degrees C. Peak broadening analysis of powder X-ray diffraction data of the raw, calcined, and sulfated samples revealed an unaffected calcite size (approximately 125-170 nm) but a significant increase in the crystallite size for lime (approximately 60-90 nm to approximately 250-300 nm) and less for anhydrite (approximately 125-150 nm to approximately 225-250 nm). The increase in the crystallite and pore-size of the treated limestones is attributed to an increase in ionic mobility in the crystal lattice due to formation of vacancies in the crystals when Ca is partly replaced by Na.
Page, G P; Amos, C I; Boerwinkle, E
1998-04-01
We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as (1-2θ)^4.
Single and simultaneous binary mergers in Wright-Fisher genealogies.
Melfi, Andrew; Viswanath, Divakar
2018-05-01
The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ε), ε>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. A single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ε), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.
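The single-binary-merger claim is straightforward to check numerically for one generation. A minimal simulation sketch in which n sampled lineages pick parents uniformly from N haploids (the parameters are illustrative; the paper's regime is n of order N^(1/3)):

```python
import random
from collections import Counter

def merger_profile(N, n, reps=10_000, seed=0):
    """Classify one Wright-Fisher generation back in time for n lineages."""
    rng = random.Random(seed)
    profile = Counter()
    for _ in range(reps):
        hits = Counter(rng.randrange(N) for _ in range(n))
        merged = [c for c in hits.values() if c >= 2]
        if any(c >= 3 for c in merged):
            profile["triple or larger"] += 1
        elif len(merged) > 1:
            profile["simultaneous binary"] += 1
        elif len(merged) == 1:
            profile["single binary"] += 1
        else:
            profile["no merger"] += 1
    return profile

print(merger_profile(N=10**6, n=100))  # n ~ N**(1/3): rarely more than one merger
```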
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
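A toy version of the 'basic' two-stage idea shows how sequential sampling cuts the average sample size. The stage size, threshold and stopping margin below are hypothetical, not the Welfare Quality rules evaluated in the paper:

```python
import random

def two_stage_scheme(true_prev, n_stage=30, threshold=0.20, margin=0.05,
                     reps=10_000, seed=0):
    """Stop after stage 1 when the estimate is clearly above or below the
    threshold; otherwise pool a second stage of the same size.
    Returns (average sample size, proportion of farms classified as failing)."""
    rng = random.Random(seed)
    total_n = fails = 0
    for _ in range(reps):
        lame = sum(rng.random() < true_prev for _ in range(n_stage))
        p, n_used = lame / n_stage, n_stage
        if abs(p - threshold) <= margin:       # too close to call: stage 2
            lame += sum(rng.random() < true_prev for _ in range(n_stage))
            n_used = 2 * n_stage
            p = lame / n_used
        total_n += n_used
        fails += p > threshold
    return total_n / reps, fails / reps

print(two_stage_scheme(true_prev=0.25))  # average n well below the fixed 60
```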
NASA Astrophysics Data System (ADS)
Jamil, Farinaa Md; Sulaiman, Mohd Ali; Ibrahim, Suhaina Mohd; Masrom, Abdul Kadir; Yahya, Muhd Zu Azhan
2017-12-01
A series of mesoporous carbon samples was synthesized using a silica template, SBA-15, with two different pore sizes. An impregnation method was applied using glucose as a precursor for conversion into carbon. Appropriate carbonization and silica-removal processes were carried out to produce a series of mesoporous carbons with different pore sizes and surface areas. The mesoporous carbon samples were then assembled as electrodes, and their performance was tested using cyclic voltammetry and impedance spectroscopy to study the effect of ion transport into pores of several sizes on the electric double-layer capacitor (EDLC) system. 6M KOH was used as the electrolyte at scan rates of 10, 20, 30 and 50 mVs-1. The results showed that the pore size of the carbon increased as the pore size of the template increased, and the specific capacitance improved with increasing carbon pore size.
Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites
NASA Astrophysics Data System (ADS)
Herbold, E. B.; Cai, J.; Benson, D. J.; Nesterenko, V. F.
2007-12-01
Recent investigations of the dynamic compressive strength of cold isostatically pressed composites of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) powders show significant differences depending on the size of metallic particles. The addition of W increases the density and changes the overall strength of the sample depending on the size of W particles. To investigate relatively large deformations, multi-material Eulerian and arbitrary Lagrangian-Eulerian methods, which have the ability to efficiently handle the formation of free surfaces, were used. The calculations indicate that the increased sample strength with fine metallic particles is due to the dynamic formation of force chains. This phenomenon occurs for samples with a higher porosity of the PTFE matrix compared to samples with larger particle size of W and a higher density PTFE matrix.
Sample size, confidence, and contingency judgement.
Clément, Mélanie; Mercier, Pierre; Pastò, Luigi
2002-06-01
According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.
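For reference, the contingency delta P used in such studies is the difference between the outcome probability with and without the cue, computed from a 2x2 table; a minimal sketch with made-up counts:

```python
def delta_p(a, b, c, d):
    """delta P = P(outcome | cue) - P(outcome | no cue), from cell counts
    a: cue & outcome, b: cue & no outcome, c: no cue & outcome, d: neither."""
    return a / (a + b) - c / (c + d)

print(delta_p(16, 4, 8, 12))  # 0.4
```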
Effects of crystallite size on the structure and magnetism of ferrihydrite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoming; Zhu, Mengqiang; Koopal, Luuk K.
2015-12-15
The structure and magnetic properties of nano-sized (1.6 to 4.4 nm) ferrihydrite samples are systematically investigated through a combination of X-ray diffraction (XRD), X-ray pair distribution function (PDF), X-ray absorption spectroscopy (XAS) and magnetic analyses. The XRD, PDF and Fe K-edge XAS data of the ferrihydrite samples are all fitted well with the Michel ferrihydrite model, indicating similar local-, medium- and long-range ordered structures. PDF and XAS fitting results indicate that, with increasing crystallite size, the average coordination numbers of Fe–Fe and the unit cell parameter c increase, while Fe2 and Fe3 vacancies and the unit cell parameter a decrease. Mössbauer results indicate that the surface layer is relatively disordered, which might have been caused by the random distribution of Fe vacancies. These results support Hiemstra's surface-depletion model in terms of the location of disorder and the variations of Fe2 and Fe3 occupancies with size. Magnetic data indicate that the ferrihydrite samples show antiferromagnetism superimposed with a ferromagnetic-like moment at lower temperatures (100 K and 10 K), but ferrihydrite is paramagnetic at room temperature. In addition, both the magnetization and coercivity decrease with increasing ferrihydrite crystallite size due to strong surface effects in fine-grained ferrihydrites. Smaller ferrihydrite samples show less magnetic hyperfine splitting and a lower unblocking temperature (T{sub B}) than larger samples. The dependence of magnetic properties on grain size for nano-sized ferrihydrite provides a practical way to determine the crystallite size of ferrihydrite quantitatively in natural environments or artificial systems.
Industrial Application of Valuable Materials Generated from PLK Rock-A Bauxite Mining Waste
NASA Astrophysics Data System (ADS)
Swain, Ranjita; Routray, Sunita; Mohapatra, Abhisek; Ranjan Patra, Biswa
2018-03-01
PLK rock was classified into two products after selective grinding to a particular size fraction. The PLK rock was ground to below 45 μm and then classified with a hydrocyclone. The ground product was classified into different sizes by varying the apex and vortex finder diameters, with a pressure gauge attached for pressure measurement. The production of fines increased with increasing vortex finder diameter. To increase the feed capacity of the hydrocyclone, a vortex finder diameter of 11.1 mm and a spigot diameter of 8.0 mm were taken as the optimum condition for recovery of fines from the PLK rock sample. The overflow sample contains 5.39% iron oxide (Fe2O3) with 0.97% TiO2, and the underflow sample contains 1.87% Fe2O3 with 2.39% TiO2. The cut point, or separation size, of the overflow sample is 25 μm, and the efficiency of separation, the so-called imperfection I, corresponds to a size of 6 μm. In this study, the iron oxide content in the underflow sample is less than 2%, which makes it suitable for refractory applications. The overflow sample is very fine and can also serve as a raw material for the ceramic industry as well as for cosmetic products.
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
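The second rule has a closed form when total cost is linear in n: minimizing (c0 + c1·n)/sqrt(n) gives n* = c0/c1. A minimal sketch with hypothetical costs:

```python
def n_star_sqrt_rule(fixed_cost, per_subject_cost):
    """Minimize (c0 + c1*n)/sqrt(n); setting the derivative to zero
    yields n* = c0/c1 under a linear cost model."""
    return fixed_cost / per_subject_cost

# hypothetical budget: 50,000 fixed and 500 per subject
print(n_star_sqrt_rule(50_000, 500))  # 100.0 subjects
```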
Obesity and Body Size Preferences of Jordanian Women
ERIC Educational Resources Information Center
Madanat, Hala; Hawks, Steven R.; Angeles, Heidi N.
2011-01-01
The nutrition transition is associated with increased obesity rates and increased desire to be thin. This study evaluates the relationship between actual body size and desired body size among a representative sample of 800 Jordanian women. Using Stunkard's body silhouettes, women were asked to identify their current and ideal body sizes, healthy…
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-04-25
To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
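Event rates per 1000 patient-days of the kind compared here come with exact Poisson intervals. A sketch using illustrative counts consistent with the reported small-sample rate, not the study's raw data:

```python
from scipy import stats

def rate_per_1000(events, patient_days, level=0.95):
    """Adverse-event rate per 1000 patient-days with an exact (Garwood)
    Poisson confidence interval."""
    a = 1 - level
    lo = stats.chi2.ppf(a / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = stats.chi2.ppf(1 - a / 2, 2 * (events + 1)) / 2
    k = 1000 / patient_days
    return events * k, lo * k, hi * k

print(rate_per_1000(events=272, patient_days=10_000))  # ~27.2 (24.1, 30.6)
```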
Thermal conductivity of graphene mediated by strain and size
Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang; ...
2016-06-09
Based on first-principles calculations and full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelength smaller than 10 nm contribute 80% of the intrinsic room-temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. k of graphene can be tuned within a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering k of graphene by varying strain and size.
NASA Astrophysics Data System (ADS)
Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco
2010-03-01
Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy’s regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
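Chao2 and the first-order jackknife are simple functions of incidence counts across sampling units. A minimal sketch with toy presence sets (the bias-corrected Chao2 form is assumed):

```python
from collections import Counter

def incidence_counts(units):
    return Counter(sp for unit in units for sp in set(unit))

def chao2(units):
    """Bias-corrected Chao2: S_obs + q1*(q1 - 1) / (2*(q2 + 1))."""
    counts = incidence_counts(units)
    q1 = sum(1 for v in counts.values() if v == 1)  # species in one unit
    q2 = sum(1 for v in counts.values() if v == 2)  # species in two units
    return len(counts) + q1 * (q1 - 1) / (2 * (q2 + 1))

def jackknife1(units):
    """First-order jackknife: S_obs + q1*(m - 1)/m over m sampling units."""
    counts = incidence_counts(units)
    q1 = sum(1 for v in counts.values() if v == 1)
    m = len(units)
    return len(counts) + q1 * (m - 1) / m

units = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}]
print(chao2(units), jackknife1(units))  # 4.5 and ~5.33
```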
Meta-analysis of genome-wide association from genomic prediction models
USDA-ARS?s Scientific Manuscript database
A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
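The cost of optimism is easy to quantify: a trial sized for a large effect is underpowered when the true effect is smaller. A sketch under a normal approximation; the numbers are illustrative:

```python
from scipy import stats

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sided test."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

def realized_power(n, delta_true, sd, alpha=0.05):
    z_a = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_a - delta_true / (sd * (2 / n) ** 0.5))

n = n_per_arm(delta=10, sd=20)             # planned around a 10-point effect
print(round(n))                            # ~63 per arm
print(round(realized_power(n, 8, 20), 2))  # ~0.61 if the true effect is only 8
```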
Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup
NASA Astrophysics Data System (ADS)
Braun, A.; Neukum, C.; Azzam, R.
2011-12-01
The accelerating production and application of engineered nanoparticles is causing concerns regarding their release and fate in the environment. For assessing the risk posed to drinking water resources it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocities. The analysis of nanoparticles implies several challenges, such as detection and characterization and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing the concentration and size distribution of the sample. The advantages of this technique are the ability to analyze complex environmental samples, such as the effluent of column experiments including soil components, and the gentle sample treatment. To optimize sample preparation and to get a first idea of the aggregation behavior in soil solutions, sedimentation experiments investigated the effect of ionic strength, sample concentration and addition of a surfactant on particle or aggregate size and temporal dispersion stability. In general, the samples are more stable the lower the particle concentration. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability increases with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates between 100-200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size. The specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased. The minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness. (3) samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
Integrated investigation of the mixed origin of lunar sample 72161,11
NASA Technical Reports Server (NTRS)
Basu, A.; Des Marais, D. J.; Hayes, J. M.; Meinschein, W. G.
1975-01-01
The comminution-agglutination model and the solar-wind implantation-retention model are used to postulate the origins of the particulate components of lunar sample (72161,11), a submillimeter fraction of a surface sample for the dark mantle regolith at LRV-3. Grain-size analysis was performed by wet sieving with liquid argon, and analyses for CO2, CO, CH4, and H2 were carried out by stepwise pyrolysis in a helium atmosphere. The results indicate that the present sample is from a mature regolith, but the agglutinate content is only 30% in the particle-size range between 90 and 177 microns, indicating an apparent departure from steady state. Analyses of the carbon, methane, and hydrogen concentrations in size fractions larger than 149 microns show that the volume-correlated component of these species increases with increased grain size. It is suggested that the observed increase can be explained in terms of mixing of a dominant local population of coarser agglutinates having high carbon and hydrogen concentrations with an imported population of finer agglutinates relatively poor in carbon and hydrogen.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is given by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
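The error-rate comparison described above can be approximated with a direct simulation. A minimal sketch using a two-sample t-test and an arbitrary 'strong' effect of 2 SD; the authors' precise definitions of effect intensity differ:

```python
import numpy as np
from scipy import stats

def error_rates(n, effect, reps=5000, alpha=0.05, seed=0):
    """Monte Carlo Type I and Type II error rates of a two-sample t-test."""
    rng = np.random.default_rng(seed)
    type1 = type2 = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        unexposed = rng.normal(0.0, 1.0, n)   # no real effect
        exposed = rng.normal(effect, 1.0, n)  # real effect present
        type1 += stats.ttest_ind(control, unexposed).pvalue < alpha
        type2 += stats.ttest_ind(control, exposed).pvalue >= alpha
    return type1 / reps, type2 / reps

for n in (3, 6, 9):
    print(n, error_rates(n, effect=2.0))  # Type II error shrinks as n grows
```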
Woodman, N.; Croft, D.A.
2005-01-01
Our study of mammalian remains excavated in the 1940s from McGrew Cave, north of Copan, Honduras, yielded an assemblage of 29 taxa that probably accumulated predominantly as the result of predation by owls. Among the taxa present are three species of small-eared shrews, genus Cryptotis. One species, Cryptotis merriami, is relatively rare among the fossil remains. The other two shrews, Cryptotis goodwini and Cryptotis orophila, are abundant and exhibit morphometrical variation distinguishing them from modern populations. Fossils of C. goodwini are distinctly and consistently smaller than modern members of the species. To quantify the size differences, we derived common measures of body size for fossil C. goodwini using regression models based on modern samples of shrews in the Cryptotis mexicana-group. Estimated mean length of head and body for the fossil sample is 72-79 mm, and estimated mean mass is 7.6-9.6 g. These numbers indicate that the fossil sample averaged 6-14% smaller in head and body length and 39-52% less in mass than the modern sample and that increases of 6-17% in head and body length and 65-108% in mass occurred to achieve the mean body size of the modern sample. Conservative estimates of fresh (wet) food intake based on mass indicate that such a size increase would require a 37-58% increase in daily food consumption. In contrast to C. goodwini, fossil C. orophila from the cave is not different in mean body size from modern samples. The fossil sample does, however, show slightly greater variation in size than is currently present throughout the modern geographical distribution of the taxon. Moreover, variation in some other dental and mandibular characters is more constrained, exhibiting a more direct relationship to overall size. Our study of these species indicates that North American shrews have not all been static in size through time, as suggested by some previous work with fossil soricids. Lack of stratigraphic control within the site and our failure to obtain reliable radiometric dates on remains restrict our opportunities to place the site in a firm temporal context. However, the morphometrical differences we document for fossil C. orophila and C. goodwini show them to be distinct from modern populations of these shrews. Some other species of fossil mammals from McGrew Cave exhibit distinct size changes of the magnitudes experienced by many northern North American and some Mexican mammals during the transition from late glacial to Holocene environmental conditions, and it is likely that at least some of the remains from the cave are late Pleistocene in age. One curious factor is that, whereas most mainland mammals that exhibit large-scale size shifts during the late glacial/postglacial transition experienced dwarfing, C. goodwini increased in size. The lack of clinal variation in modern C. goodwini supports the hypothesis that size evolution can result from local selection rather than from cline translocation. Models of size change in mammals indicate that increases in size, such as that observed for C. goodwini, are a likely consequence of increased availability of resources and, thereby, a relaxation of selection during critical times of the year.
Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test
NASA Astrophysics Data System (ADS)
Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke
Polymer insulators have been used worldwide because of superior properties such as light weight, high mechanical strength and good hydrophobicity, as compared with porcelain insulators. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz AD board or a 100 MHz digital oscilloscope and separated into three components, conductive current, corona discharge current and dry-band arc discharge current, by means of FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of leakage current and of each component current increased. In particular, the cumulative charge and the arc length of dry-band arc discharge increased remarkably with increasing gap length.
Zafra, C A; Temprano, J; Tejero, I
2011-07-01
The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact on drainage systems and receiving waters, and to refine the design of prevention systems. This paper presents data regarding the sediment collected on road surfaces in the city of Torrelavega (northern Spain) during a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m(-2)), particle size distribution (63-2800 microm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase as the particle diameter decreases (exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 microm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg(-1), respectively (average traffic density: 3800 vehicles day(-1)). As the residence time of the sediment increases, the concentration increases, whereas the ratio of the concentration between the different size fractions decreases. The concentration across the road diminishes as the distance between the roadway and the sampling site increases; as the distance increases, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and these are associated with particle sizes <125 microm.
NASA Astrophysics Data System (ADS)
Kumar, S.; Aggarwal, S. G.; Fu, P. Q.; Kang, M.; Sarangi, B.; Sinha, D.; Kotnala, R. K.
2017-06-01
During March 20-22, 2012, Delhi experienced a massive dust storm which originated in the Middle East. Size-segregated sampling of these dust aerosols was performed using a nine-stage Andersen sampler; five sets of samples were collected: before the dust storm (BDS), dust-storm days 1 to 3 (DS1 to DS3) and after the dust storm (ADS). Sugars (mono- and disaccharides, sugar alcohols and anhydro-sugars) were determined using the GC-MS technique. It was observed that at the onset of the dust storm, the total suspended particulate matter (TSPM, sum of all stages) concentration in the DS1 sample increased by >2.5 fold compared to that of the BDS sample. Interestingly, fine particulate matter (sum of stages with cutoff size <2.1 μm) loading in DS1 also increased by >2.5 fold compared to that of the BDS sample. Sugars analyzed in DS1 coarse-mode (sum of stages with cutoff size >2.1 μm) samples showed a considerable increase (1.7-2.8 fold) compared to that of other samples. It was further observed that monosaccharide, disaccharide and sugar-alcohol concentrations were enhanced in giant (>9.0 μm) particles in DS1 samples as compared to other samples. On the other hand, anhydro-sugars comprised 13-27% of sugars in coarse-mode particles and were mostly found in fine mode, constituting 66-85% of sugars in all the sample types. Trehalose showed an enhanced (2-4 fold) concentration in DS1 aerosol samples in both coarse (62.80 ng/m3) and fine (8.57 ng/m3) modes. This increase in trehalose content in both coarse and fine modes suggests an origin in the transported desert dust and supports its candidature as an organic tracer for desert dust entrainment. Further, levoglucosan-to-mannosan (L/M) ratios, which have been used to infer the type of biomass burning influencing aerosols, are found to be size dependent in these samples. These ratios are higher for fine-mode particles and hence should be used with caution when interpreting sources using this tool.
Quantitative Reflectance Spectra of Solid Powders as a Function of Particle Size
Myers, Tanya L.; Brauer, Carolyn S.; Su, Yin-Fong; ...
2015-05-19
We have recently developed vetted methods for obtaining quantitative infrared directional-hemispherical reflectance spectra using a commercial integrating sphere. In this paper, the effects of particle size on the spectral properties are analyzed for several samples such as ammonium sulfate, calcium carbonate, and sodium sulfate as well as one organic compound, lactose. We prepared multiple size fractions for each sample and confirmed the mean sizes using optical microscopy. Most species displayed a wide range of spectral behavior depending on the mean particle size. General trends of reflectance vs. particle size are observed such as increased albedo for smaller particles: for most wavelengths, the reflectivity drops with increased size, sometimes displaying a factor of 4 or more drop in reflectivity along with a loss of spectral contrast. In the longwave infrared, several species with symmetric anions or cations exhibited reststrahlen features whose amplitude was nearly invariant with particle size, at least for intermediate- and large-sized sample fractions; that is, > ~150 microns. Trends of other types of bands (Christiansen minima, transparency features) are also investigated as well as quantitative analysis of the observed relationship between reflectance vs. particle diameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asgari, H., E-mail: hamed.asgari@usask.ca; Odeshi, A.G.; Szpunar, J.A.
2015-08-15
The effects of grain size on the dynamic deformation behavior of rolled AZ31B alloy at high strain rates were investigated. Rolled AZ31B alloy samples with grain sizes of 6, 18 and 37 μm were subjected to shock loading tests using a Split Hopkinson Pressure Bar at room temperature and at a strain rate of 1100 s{sup -1}. It was found that a double-peak basal texture formed in the shock-loaded samples. The strength and ductility of the alloy under high strain-rate compressive loading increased with decreasing grain size. However, twinning fraction and strain-hardening rate were found to decrease with decreasing grain size. In addition, orientation imaging microscopy showed a higher contribution of double and contraction twins in the deformation process of the coarse-grained samples. Using transmission electron microscopy, pyramidal dislocations were detected in the shock-loaded sample, proving the activation of the pyramidal slip system under dynamic impact loading. - Highlights: • A double-peak basal texture developed in all shock-loaded samples. • Both strength and ductility increased with decreasing grain size. • Twinning fraction and strain-hardening rate decreased with decreasing grain size. • ‘g.b’ analysis confirmed the presence of dislocations in the shock-loaded alloy.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
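Conclusion (3) refers to stratified sampling with optimal allocation, where stratum sample sizes follow the Neyman rule, n_h proportional to N_h times S_h. A minimal sketch with hypothetical depth strata:

```python
def neyman_allocation(total_n, strata_sizes, strata_sds):
    """Neyman optimal allocation: n_h proportional to N_h * S_h."""
    weights = [N * s for N, s in zip(strata_sizes, strata_sds)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# hypothetical equal-size depth strata; surface layers vary most
print(neyman_allocation(30, strata_sizes=[1, 1, 1], strata_sds=[4.0, 2.5, 1.0]))
# -> [16, 10, 4]: more samples go to the high-variance surface layer
```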
Howard, C.; Frazer, D.; Lupinacci, A.; ...
2015-09-30
Here, micropillar compression testing was implemented on Equal Channel Angular Pressed copper samples ranging from 200 nm to 10 µm in side length in order to measure the mechanical properties: yield strength, the first load drop during plastic deformation (at which there was a stress decrease with increasing strain), work hardening, and the strain-hardening exponent. Several micropillars containing multiple grains were investigated in a 200 nm grain sample. The effective pillar diameter to grain size ratios, D/d, were measured to be between 1.9 and 27.2. Specimens having D/d ratios between 0.2 and 5 were investigated in a second sample that was annealed at 200 °C for 2 h with an average grain size of 1.3 µm. No yield strength or elastic modulus size effects were observed in specimens in the 200 nm grain size sample. However, work hardening increases with a decrease in critical ratios, and first stress drops occur at much lower stresses for specimens with D/d ratios less than 5. For comparison, bulk tensile testing of both samples was performed, and the yield strength values of all micropillar compression tests for the 200 nm grained sample are in good agreement with the yield strength values of the tensile tests.
Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi
2016-01-01
Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette, while ionization efficiency increased with decreasing solvent flow rate. Our results indicate that a reduced sampling area is compatible with efficient ionization when a nanopipette is used. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated. PMID:28101441
Effect of laser irradiation on surface hardness and structural parameters of 7178 aluminium alloy
NASA Astrophysics Data System (ADS)
Maryam, Siddra; Bashir, Farooq
2018-04-01
Aluminium 7178 samples were prepared and irradiated with an Nd:YAG laser. The surfaces of the exposed samples were investigated using optical microscopy, which revealed that surface morphology changes drastically as a function of the number of laser shots; the micrographs show that the laser heat-affected area grows as the number of laser pulses increases. Structural and mechanical properties were further studied using XRD and Vickers hardness testing. The XRD study shows an increasing trend in grain size with increasing number of laser shots, while hardness first increases and then decreases gradually as a function of the laser shots. Grain size was observed to have no pronounced effect on hardness. The hardness profile shows a decreasing trend with increasing linear distance from the boundary of the laser heat-affected area.
Richman, Julie D.; Livi, Kenneth J.T.; Geyh, Alison S.
2011-01-01
Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was −0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected. PMID:21625364
Grain size effect on the electrical and magneto-transport properties of nanosized Pr0.67Sr0.33MnO3
NASA Astrophysics Data System (ADS)
Ng, S. W.; Lim, K. P.; Halim, S. A.; Jumiah, H.
2018-06-01
In this study, nanosized Pr0.67Sr0.33MnO3 was synthesized via a sol-gel method followed by heat treatment at 600-1000 °C in 100 °C intervals. The structure, surface morphology, electrical, magneto-transport and magnetic properties of the samples were investigated. Rietveld refinements of X-ray diffraction patterns confirm that a single-phase orthorhombic crystal structure with space group Pnma (62) is formed at 600 °C. A strong dependence of surface morphology, electrical and magneto-transport properties on grain size is observed in this manganite system. Both grain size and crystallite size increase with sintering temperature due to the congregation effect. Upon increasing grain size, the paramagnetic-ferromagnetic transition temperature increases from 278 K to 295 K. The resistivity drops and the metal-insulator transition temperature shifts from 184 K to 248 K with increasing grain size, due to grain growth and the reduction of grain boundaries. Below the metal-insulator transition temperature, the samples fit well to a combination of resistivity contributions from grain or domain boundaries, electron-electron scattering and electron-phonon interaction. The resistivity data above the metal-insulator transition temperature are well described by small polaron hopping and variable range hopping models. The negative magnetoresistance also increases with larger grain size; the highest %MR of −26% is observed for the sample sintered at 1000 °C (245 nm).
Sampling guidelines for oral fluid-based surveys of group-housed animals.
Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J
2017-09-01
Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
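The barn-level detection question these simulations address can be sketched in a few lines, assuming independent pens, perfect specificity, and an illustrative test sensitivity; none of the parameter values below come from the study:

```python
def detection_probability(n_samples, prevalence, sensitivity=0.95):
    """Probability that at least one of n randomly sampled pens tests
    positive, assuming independent pens and perfect specificity."""
    p_positive = prevalence * sensitivity   # chance a sampled pen tests positive
    return 1 - (1 - p_positive) ** n_samples

# Detection probability rises with sample size at a fixed pen-level prevalence.
for n in (5, 10, 20):
    print(n, round(detection_probability(n, prevalence=0.10), 3))
```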
NASA Astrophysics Data System (ADS)
Maaz, K.; Karim, S.; Mumtaz, A.; Hasanain, S. K.; Liu, J.; Duan, J. L.
2009-06-01
Magnetic nanoparticles of nickel ferrite (NiFe2O4) have been synthesized by a co-precipitation route using stable ferric and nickel salts, with sodium hydroxide as the precipitating agent and oleic acid as the surfactant. X-ray diffraction (XRD) and transmission electron microscopy (TEM) analyses confirmed the formation of single-phase nickel ferrite nanoparticles in the range 8-28 nm, depending on the annealing temperature of the samples during synthesis. The particle size (d) was observed to increase linearly with annealing temperature, while the coercivity goes through a maximum with particle size, peaking at ~11 nm and then decreasing for larger particles. Typical blocking effects were observed below ~225 K for all the prepared samples. The superparamagnetic blocking temperature (TB) was found to increase with increasing particle size, which has been attributed to the increased effective anisotropy energy of the nanoparticles. The saturation moment of all the samples was found to be much below the bulk value of nickel ferrite, which has been attributed to disordered surface spins or a dead/inert layer in these nanoparticles.
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
Size-selective separation of polydisperse gold nanoparticles in supercritical ethane.
Williams, Dylan P; Satherley, John
2009-04-09
The aim of this study was to use supercritical ethane to selectively disperse alkanethiol-stabilized gold nanoparticles of one size from a polydisperse sample in order to recover a monodisperse fraction of the nanoparticles. A disperse sample of metal nanoparticles with diameters in the range of 1-5 nm was prepared using established techniques and further purified by Soxhlet extraction. The purified sample was subjected to supercritical ethane at a temperature of 318 K in the pressure range 50-276 bar. Particles were characterized by UV-vis absorption spectroscopy, TEM, and MALDI-TOF mass spectrometry. The results show that the dispersibility of the nanoparticles increases with increasing pressure; this effect is most pronounced for smaller nanoparticles. At the highest pressure investigated, a sample of the particles was effectively stripped of all the smaller particles, leaving a monodisperse sample. The relationship between dispersibility and supercritical fluid density for two different size samples of alkanethiol-stabilized gold nanoparticles was considered using the Chrastil chemical equilibrium model.
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
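For flavour, here is a sketch of the widely cited Hsieh-type approximation for this design; it is not the authors' proposed modification (which employs Schouten's unequal-variance t-test formula), the covariate is assumed standardized, and the parameter values are illustrative:

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf

def lr_sample_size(p1, beta_star, alpha=0.05, power=0.8):
    """Hsieh-style approximation for simple logistic regression with one
    standard-normal covariate. p1 is the event probability at the covariate
    mean (note: not the overall population prevalence, the distinction the
    abstract highlights); beta_star is the log odds ratio per 1 SD."""
    za, zb = z(1 - alpha / 2), z(power)
    return ceil((za + zb) ** 2 / (p1 * (1 - p1) * beta_star ** 2))

print(lr_sample_size(p1=0.5, beta_star=0.405))  # OR ~ 1.5 per SD -> n = 192
```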
Albasan, Hasan; Lulich, Jody P; Osborne, Carl A; Lekcharoensuk, Chalermpol; Ulrich, Lisa K; Carpenter, Kathleen A
2003-01-15
To determine effects of storage temperature and time on pH and specific gravity of and number and size of crystals in urine samples from dogs and cats. Randomized complete block design. 31 dogs and 8 cats. Aliquots of each urine sample were analyzed within 60 minutes of collection or after storage at room or refrigeration temperatures (20 vs 6 degrees C [68 vs 43 degrees F]) for 6 or 24 hours. Crystals formed in samples from 11 of 39 (28%) animals. Calcium oxalate (CaOx) crystals formed in vitro in samples from 1 cat and 8 dogs. Magnesium ammonium phosphate (MAP) crystals formed in vitro in samples from 2 dogs. Compared with aliquots stored at room temperature, refrigeration increased the number and size of crystals that formed in vitro; however, the increase in number and size of MAP crystals in stored urine samples was not significant. Increased storage time and decreased storage temperature were associated with a significant increase in number of CaOx crystals formed. Greater numbers of crystals formed in urine aliquots stored for 24 hours than in aliquots stored for 6 hours. Storage time and temperature did not have a significant effect on pH or specific gravity. Urine samples should be analyzed within 60 minutes of collection to minimize temperature- and time-dependent effects on in vitro crystal formation. Presence of crystals observed in stored samples should be validated by reevaluation of fresh urine.
Jalava, Pasi I; Salonen, Raimo O; Hälinen, Arja I; Penttinen, Piia; Pennanen, Arto S; Sillanpää, Markus; Sandell, Erik; Hillamo, Risto; Hirvonen, Maija-Riitta
2006-09-15
The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in the mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized for identification of different emission sources. The particulate mass concentration in the accumulation size range (PM(1-0.2)) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM(10-2.5)), intermodal size range (PM(2.5-1)), PM(1-0.2) and ultrafine (PM(0.2)) particles to cause cytokine production (TNFalpha, IL-6, MIP-2) reduced along with smaller particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.
NASA Astrophysics Data System (ADS)
Dong, Xufeng; Guan, Xinchun; Ou, Jinping
2009-03-01
In the past ten years there have been several investigations of the effects of particle size on the magnetostrictive properties of polymer-bonded Terfenol-D composites, but they did not reach agreement. To resolve the conflict, Terfenol-D/unsaturated polyester resin composite samples were prepared from Tb0.3Dy0.7Fe2 powder with 20% volume fraction in six particle-size ranges (30-53, 53-150, 150-300, 300-450, 450-500 and 30-500 μm), and their magnetostrictive properties were tested. The results indicate that the 53-150 μm distribution presents the largest static and dynamic magnetostriction among the five monodispersed distributions, but the 30-500 μm (polydispersed) distribution shows an even larger response. This indicates that particle size is a double-edged sword for the magnetostrictive properties of magnetostrictive composites. The existence of an optimal particle size for polymer-bonded Terfenol-D of composition Tb0.3Dy0.7Fe2 results from competition between the positive and negative effects of increasing particle size: at small particle sizes, voids and the demagnetization effect decrease significantly with increasing particle size, which increases the magnetostriction; at larger particle sizes, the percentage of single-crystal particles and the packing density become increasingly smaller with increasing particle size, which decreases the magnetostriction. The reasons earlier studies obtained different results are analyzed.
Willan, Andrew R
2016-07-05
The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study, two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the first issue is how this will affect the sample size requirements. The sample size required to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue, a Bayesian decision model is proposed that combines information regarding the between-treatment difference in the probability of preterm birth from PS3 with data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study relating preterm birth to perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that the pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Simulation and sensitivity analysis are a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
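The between-center variation described (Beta-distributed Standard Arm risk with mean 0.3 and coefficient of variation 0.3) can be reproduced with a method-of-moments conversion; a minimal sketch, with the number of centres chosen arbitrarily:

```python
import random

def beta_params(mean, cv):
    """Convert a (mean, coefficient of variation) pair into Beta(a, b)
    parameters by the method of moments."""
    var = (cv * mean) ** 2
    common = mean * (1 - mean) / var - 1   # equals a + b
    return mean * common, (1 - mean) * common

a, b = beta_params(0.3, 0.3)                # a ~ 7.48, b ~ 17.45
centres = [random.betavariate(a, b) for _ in range(20)]  # per-centre risks
print(round(a, 2), round(b, 2), sum(centres) / len(centres))  # mean near 0.3
```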
Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J
2009-06-01
The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in classification certainty with increasing sample size could be seen through the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
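The resampling assessment of precision (D = SE/mean) can be sketched as follows; the per-plant counts are fabricated for illustration and the function name is ours, not the authors':

```python
import random
import statistics

# Illustrative per-plant counts of standard insects from one scouted field.
field_counts = [0, 0, 1, 2, 0, 3, 1, 0, 5, 2, 1, 0, 0, 4, 1, 2, 0, 1, 3, 0] * 5

def achieved_precision(counts, n_plants, reps=2000, target_d=0.3):
    """Fraction of resampled fixed-size plans achieving D = SE/mean <= target."""
    hits = 0
    for _ in range(reps):
        sample = random.choices(counts, k=n_plants)   # resample with replacement
        mean = statistics.fmean(sample)
        if mean == 0:
            continue  # precision is undefined when no insects are found
        se = statistics.stdev(sample) / n_plants ** 0.5
        hits += (se / mean) <= target_d
    return hits / reps

for n in (10, 20, 45):
    print(n, achieved_precision(field_counts, n))
```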
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariantly lead to poor precision of estimates for the current SCR; in this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariantly increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
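A sketch of the data-generating model described, assuming the standard reverse catalytic formulation dP/dt = λ(1−P) − ρP with a single drop in λ before the survey; all rates and ages below are illustrative, and the exact simulation in the paper may differ:

```python
import math
import random

def seroprevalence(age, scr_old, scr_new, rho, changepoint):
    """Reverse catalytic model dP/dt = scr*(1-P) - rho*P, with the
    seroconversion rate dropping from scr_old to scr_new `changepoint`
    years before the survey."""
    def evolve(p0, scr, t):
        p_inf = scr / (scr + rho)                       # equilibrium prevalence
        return p_inf + (p0 - p_inf) * math.exp(-(scr + rho) * t)
    if age <= changepoint:
        return evolve(0.0, scr_new, age)                # born after the change
    return evolve(evolve(0.0, scr_old, age - changepoint), scr_new, changepoint)

# Simulate one cross-sectional survey of individuals aged 1-40.
n = 600
ages = [random.randint(1, 40) for _ in range(n)]
sero = [random.random() < seroprevalence(a, 0.05, 0.01, 0.005, 10) for a in ages]
print(sum(sero) / n)   # observed seroprevalence under the reduced-SCR scenario
```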
The impact of multiple endpoint dependency on Q and I(2) in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I(2) in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I(2) values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
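For reference, the univariate fixed-effects quantities under study, Cochran's Q and Higgins' I², can be computed as below; this is a generic sketch, not the authors' code, and the effect sizes are made up:

```python
def q_and_i2(effects, variances):
    """Cochran's Q and I^2 for a fixed-effects univariate meta-analysis
    with inverse-variance weights."""
    w = [1 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, 100 * i2

print(q_and_i2([0.30, 0.45, 0.12, 0.60], [0.02, 0.03, 0.025, 0.04]))
```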
NASA Astrophysics Data System (ADS)
Bolon, Bruce T.; Haugen, M. A.; Abin-Fuentes, A.; Deneen, J.; Carter, C. B.; Leighton, C.
2007-02-01
We have used ferromagnet/antiferromagnet/ferromagnet trilayers and ferromagnet/antiferromagnet multilayers to probe the grain size dependence of exchange bias in polycrystalline Co/Fe50Mn50. X-ray diffraction and transmission electron microscopy show that the Fe50Mn50 (FeMn) grain size increases with increasing FeMn thickness in the Co(30 Å)/FeMn system. Hence, in Co(30 Å)/FeMn(tAF Å)/Co(30 Å) trilayers the two Co layers sample different FeMn grain sizes at the two antiferromagnet/ferromagnet interfaces. For FeMn thicknesses above 100 Å, where simple bilayers have a thickness-independent exchange bias, we are therefore able to deduce the influence of FeMn grain size on the exchange bias and coercivity (and their temperature dependence) simply by measuring trilayer and multilayer samples with varying FeMn thicknesses. This can be done while maintaining the (111) orientation, and with little variation in interface roughness. Increasing the average grain size from 90 to 135 Å results in a fourfold decrease in exchange bias, following an inverse grain size dependence. We interpret the results as being due to a decrease in uncompensated spin density with increasing antiferromagnet grain size, further evidence for the importance of defect-generated uncompensated spins.
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy depends on the relationship among sample size, type I and II error probabilities, and the coefficients of...
NASA Astrophysics Data System (ADS)
Verma, Narendra Kumar; Patel, Sandeep Kumar Singh; Kumar, Dinesh; Singh, Chandra Bhal; Singh, Akhilesh Kumar
2018-05-01
We have investigated the effect of sintering temperature on the densification behaviour, grain size, and structural and dielectric properties of BaTiO3 ceramics prepared by high-energy ball milling. Powder X-ray diffraction reveals a tetragonal structure with space group P4mm for all the samples. The samples were sintered at five different temperatures (900 °C, 1000 °C, 1100 °C, 1200 °C and 1300 °C). Density increased with increasing sintering temperature, reaching up to 97% at 1300 °C, and grain growth was observed with increasing sintering temperature. Impedance analyses of the sintered samples were performed at various temperatures. An increase in dielectric constant and Curie temperature is observed with increasing sintering temperature.
Image analysis of representative food structures: application of the bootstrap method.
Ramírez, Cristian; Germain, Juan C; Aguilera, José M
2009-08-01
Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For the simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between CV(image) and CV(Bn). For example, to obtain CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
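The bootstrap idea translates directly to code: repeatedly place sub-areas of a given size on the image and track the coefficient of variation of the measured element fraction. A dependency-free sketch on a synthetic binary image (the image, threshold, and window sizes are illustrative, not the paper's data):

```python
import random
import statistics

random.seed(1)
W = 400
# Synthetic binary "micrograph": 1 = element (e.g. cell cavity), 0 = background.
image = [[1 if random.random() < 0.15 else 0 for _ in range(W)] for _ in range(W)]

def bootstrap_cv(image, window, reps=200):
    """CV of the area fraction measured over `reps` randomly placed square
    sub-areas of side `window` (the CV(Bn) of the paper)."""
    fracs = []
    for _ in range(reps):
        x = random.randrange(0, W - window)
        y = random.randrange(0, W - window)
        area = sum(sum(row[x:x + window]) for row in image[y:y + window])
        fracs.append(area / window ** 2)
    return statistics.stdev(fracs) / statistics.fmean(fracs)

for window in (25, 50, 100, 200):
    print(window, round(bootstrap_cv(image, window), 3))  # CV falls as area grows
```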
NASA Astrophysics Data System (ADS)
Ranjbar, M.; Ghazi, M. E.; Izadifard, M.
2018-06-01
In this paper we investigate the effect of annealing temperature on the structure, morphology, and dielectric and magnetic properties of sol-gel-synthesized multiferroic BiFeO3 nanoparticles. X-ray diffraction revealed that all the samples have a rhombohedrally distorted perovskite structure and that the purest BFO phase is obtained in the sample annealed at 800 °C. Field emission scanning electron microscopy (FESEM) revealed that increasing the annealing temperature increases the particle size; a decrease in dielectric constant was also observed with increasing annealing temperature. Vibrating sample magnetometry (VSM) confirmed that samples annealed at 500-700 °C, with particle sizes below BFO's spiral spin structure length, have well-saturated M-H curves and show ferromagnetic behavior.
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must make a decision between prioritizing overall representativeness or precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample for all strata; and proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, from a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error and the dispersion index of stratum-specific sampling errors would measure the inequality in precision distribution. Identification of a balanced and efficient strategy was based on comparing those both dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were equal sample for each stratum; proportional to the logarithm, to the cubic root, to square root; and that proportional to the stratum population, respectively. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
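The five allocation rules compared can be written as one function parameterized by a transform of stratum population; a minimal sketch with invented stratum populations and a preset total of 2000:

```python
import math

def allocate(total_n, stratum_pops, transform):
    """Distribute a preset total sample proportionally to a transform of
    each stratum's population size."""
    weights = [transform(p) for p in stratum_pops]
    s = sum(weights)
    return [round(total_n * w / s) for w in weights]

pops = [500_000, 120_000, 30_000, 8_000]   # illustrative stratum populations
rules = {
    "proportional": lambda p: p,
    "equal":        lambda p: 1,
    "log":          math.log,
    "cubic root":   lambda p: p ** (1 / 3),
    "square root":  math.sqrt,
}
for name, f in rules.items():
    print(f"{name:12s}", allocate(2000, pops, f))
```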
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
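The simplest approach the abstract refers to, inflating an individually randomized sample size by the design effect DEFF = 1 + (m − 1)ρ, looks like the sketch below; the helper name and the numbers are ours, for illustration:

```python
import math

def cluster_trial_size(n_individual, cluster_size, icc):
    """Inflate the total sample size of an individually randomized two-arm
    trial by the design effect for equal cluster sizes m."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * deff)
    clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return n_total, clusters_per_arm

# E.g. 128 participants needed in total under individual randomization,
# clusters of 20, ICC = 0.05 -> DEFF = 1.95 -> 250 participants, 7 clusters/arm.
print(cluster_trial_size(128, 20, 0.05))
```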
Hattori, Shohei; Savarino, Joel; Kamezaki, Kazuki; Ishino, Sakiko; Dyckmans, Jens; Fujinawa, Tamaki; Caillon, Nicolas; Barbero, Albane; Mukotaka, Arata; Toyoda, Sakae; Well, Reinhard; Yoshida, Naohiro
2016-12-30
Triple oxygen and nitrogen isotope ratios in nitrate are powerful tools for assessing atmospheric nitrate formation pathways and their contribution to ecosystems. N2O decomposition using microwave-induced plasma (MIP) has been used only for measurements of oxygen isotopes to date, but it is also possible to measure nitrogen isotopes during the same analytical run. The main improvements to a previous system are (i) automated distribution of nitrate to the bacterial medium, (ii) N2O separation by gas chromatography before N2O decomposition using the MIP, (iii) use of a corundum tube for microwave discharge, and (iv) development of an automated system for isotopic measurements. Three nitrate standards with sample sizes of 60, 80, 100, and 120 nmol were measured to investigate the sample size dependence of the isotope measurements. The δ17O, δ18O, and Δ17O values increased with increasing sample size, although the δ15N value showed no significant size dependency. Different calibration slopes and intercepts were obtained with different sample amounts. The slopes and intercepts of the regression lines were dependent on sample size, indicating that the extent of oxygen exchange is also dependent on sample size. The sample-size-dependent slopes and intercepts were fitted using natural log (ln) regression curves, so that slopes and intercepts can be estimated for corrections at any sample size. When using 100 nmol samples, the standard deviations of residuals from the regression lines for this system were 0.5‰, 0.3‰, and 0.1‰ for the δ18O, Δ17O, and δ15N values, respectively, results that are not inferior to those from other systems using gold tube or gold wire. An automated system was developed to measure triple oxygen and nitrogen isotopes in nitrate using N2O decomposition by MIP. This system enables us to measure both triple oxygen and nitrogen isotopes in nitrate with comparable precision and sample throughput (23 min per sample on average), and minimal manual treatment. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gholizadeh, Ahmad
2018-04-01
In the present work, the influence of different sintering atmospheres and temperatures on the physical properties of Cu0.5Zn0.5Fe2O4 nanoparticles, including the redistribution of Zn2+ and Fe3+ ions, the oxidation of Fe atoms in the lattice, crystallite sizes, IR bands, saturation magnetization and magnetic core sizes, has been investigated. Fitting of the XRD patterns using the Fullprof program, together with FT-IR measurements, shows the formation of a cubic structure with no impurity phase in any of the samples. The unit cell parameter of the samples sintered in air and inert atmospheres tends to decrease with sintering temperature, but increases for the samples sintered under a carbon monoxide atmosphere. The magnetization curves versus applied magnetic field indicate different behaviour for the samples sintered at 700 °C with respect to the samples sintered at 300 °C. The saturation magnetization also increases with sintering temperature, reaching a maximum of 61.68 emu/g in the sample sintered under a reducing atmosphere at 600 °C. The magnetic particle size distributions of the samples have been calculated by fitting the M-H curves with a size-distributed Langevin function. The results obtained from the XRD and FTIR measurements suggest that the magnetic core size has the dominant effect on the variation of the saturation magnetization of the samples.
Influence of temperature and aging time on HA synthesized by the hydrothermal method.
Kothapalli, C R; Wei, M; Legeros, R Z; Shaw, M T
2005-05-01
The influence of temperature and aging time on the morphology and mechanical properties of nano-sized hydroxyapatite (HA) synthesized by a hydrothermal method is reported here. The pre-mixed reactants were poured into a stirred autoclave and reacted at temperatures of 25-250 degrees C for 2-10 h. The HA powders thus obtained were examined using X-ray diffraction (XRD), high-resolution field emission scanning electron microscopy (FESEM) and a particle size analyzer. The aspect ratio of the particles increased with the reaction temperature. The length of the HA particles increased with reaction temperature below 170 degrees C, but decreased when the temperature was raised above 170 degrees C. Agglomerates of HA particles formed during synthesis, and their sizes were strongly dependent on reaction temperature: as the reaction temperature increased, the agglomerate size decreased (p = 0.008). The density of discs pressed from these samples reached 85-90% of the theoretical density after sintering at 1200 degrees C for 1 h. No decomposition to other calcium phosphates was detected at this sintering temperature. A correlation existed (p = 0.05) between the agglomerate sizes of HA particles synthesized at various conditions and their sintered densities: with increasing agglomerate size, the sintered density of the HA compact decreased. Both the sintered density and flexural strength increased with increasing aging time and reaction temperature. A maximum flexural strength of 78 MPa was observed for the samples synthesized at 170 degrees C for 5 h, the predicted average at these conditions being 65 MPa. These samples attained an average sintered density of 88%.
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and numerical methods; the COMSOL Multiphysics 5.0 software was used for the numerical simulations. The effects of microwave frequency, power and sample size on the temperature distribution are examined. The effect of frequency is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. Microwave heating efficiency is highest at 2450 MHz, although more uniform temperature distributions are obtained at other frequencies. The influence of microwave power is also marked: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is small. Sample size likewise matters: the smaller the sample, the more uniform the temperature distribution, but the lower the microwave heating efficiency. The results can serve as references for research on heating rubber materials by microwave technology.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
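The reported sample sizes appear to follow from the success-run relation n = ln(1 − C)/ln(R) at 95% confidence; a short sketch reproducing the 299, 59, and 29 quoted in the abstract:

```python
import math

def success_run_n(confidence, reliability):
    """Success-run sample size: smallest n with reliability**n <= 1 - confidence,
    i.e. n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for risk, r in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
    print(risk, success_run_n(0.95, r))   # -> 299, 59, 29 as in the abstract
```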
Rasch fit statistics and sample size considerations for polytomous data.
Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael
2008-05-29
Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.
NASA Astrophysics Data System (ADS)
Rida, A.; Makke, A.; Rouhaud, E.; Micoulaut, M.
2017-10-01
We use molecular dynamics simulations to study the mechanical properties of columnar nanocrystalline copper with mean grain sizes between 8.91 nm and 24 nm. The samples were generated using a melting-cooling method and subjected to uniaxial tensile tests. The results reveal a critical mean grain size between 16 and 20 nm at which the conventional Hall-Petch tendency inverts, illustrated by an increase of the flow stress with increasing mean grain size. This transition is caused by a shift of the deformation mechanism from dislocations to a combination of grain boundary sliding and dislocations. Moreover, the effect of temperature on the mechanical properties of nanocrystalline copper has been investigated; the results show a decrease of the flow stress and Young's modulus as the temperature increases.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar to monitoring progress towards the Sustainable Development Goals is investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of many population-level health and development indicators in these surveys remains unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
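Why required sample size grows as prevalence declines is easy to see when precision is specified relative to the prevalence itself; a sketch using the standard prevalence-survey formula with a design effect (the inputs are illustrative and this is not the paper's Bayesian model):

```python
import math

def survey_sample_size(prevalence, rel_precision=0.2, deff=2.0, z=1.96):
    """Cluster-survey size for estimating a prevalence p within
    +/- rel_precision * p (95% CI), inflated by a design effect."""
    d = rel_precision * prevalence
    return math.ceil(deff * z ** 2 * prevalence * (1 - prevalence) / d ** 2)

for p in (0.30, 0.10, 0.05, 0.02):
    print(p, survey_sample_size(p))   # required n grows as prevalence falls
```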
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites
NASA Astrophysics Data System (ADS)
Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali
2007-06-01
Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten and aluminum powders show significant differences depending on the size of metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation due to the complexity of the microstructure, relatively large deformations and the ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes even at larger porosity in comparison with samples with large grain size and larger density.
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
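The size-bias mechanism is easy to demonstrate by simulation: if detection probability rises with cluster size, the sampled mean cluster size exceeds the population mean. A sketch with an invented size distribution and detection model, not the paper's:

```python
import random

random.seed(0)

# Population of clusters; sizes 1-10 with a right-skewed distribution.
population = random.choices(range(1, 11),
                            weights=[30, 22, 15, 10, 8, 6, 4, 3, 1, 1], k=5000)

def detect(size, base=0.3):
    """Detection probability increases with cluster size (illustrative form:
    each of `size` individuals is independently seen with probability base)."""
    return random.random() < 1 - (1 - base) ** size

observed = [s for s in population if detect(s)]
print(sum(population) / len(population))   # true mean cluster size
print(sum(observed) / len(observed))       # sample mean: biased upward
```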
NASA Astrophysics Data System (ADS)
Davis, Cabell S.; Wiebe, Peter H.
1985-01-01
Macrozooplankton size structure and taxonomic composition in warm-core ring 82B was examined from a time series (March, April, June) of ring center MOCNESS (1 m) samples. Size distributions of 15 major taxonomic groups were determined from length measurements digitized from silhouette photographs of the samples. Silhouette digitization allows rapid quantification of zooplankton size structure and taxonomic composition. Length/weight regressions, determined for each taxon, were used to partition the biomass (displacement volumes) of each sample among the major taxonomic groups. Zooplankton taxonomic composition and size structure varied with depth and appeared to coincide with the hydrographic structure of the ring. In March and April, within the thermostad region of the ring, smaller herbivorous/omnivorous zooplankton, including copepods, crustacean larvae, and euphausiids, were dominant, whereas below this region, larger carnivores, such as medusae, ctenophores, fish, and decapods, dominated. Copepods were generally dominant in most samples above 500 m. Total macrozooplankton abundance and biomass increased between March and April, primarily because of increases in herbivorous taxa, including copepods, crustacean larvae, and larvaceans. A marked increase in total macrozooplankton abundance and biomass between April and June was characterized by an equally dramatic shift from smaller herbivores (1.0-3.0 mm) in April to large herbivores (5.0-6.0 mm) and carnivores (>15 mm) in June. Species identifications made directly from the samples suggest that changes in trophic structure resulted from seeding-type immigration and subsequent in situ population growth of Slope Water zooplankton species.
Lindenfors, P; Tullberg, B S
2006-07-01
The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite of the expected effect of sample size; normally, an increased sample size increases the chance of finding a correlation. The situation where the problem occurs is when co-variation between the two continuous characters analysed is clumped in clades; e.g. when some phylogenetically conservative factors affect both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove the exact variation that diffuses the evolutionary correlation between the two characters - the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.
Stucke, Kathrin; Kieser, Meinhard
2012-12-10
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
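For orientation, the following is a minimal sketch of the standard normal-approximation sample size for the retention-of-effect hypothesis in this three-arm design. This is the simpler textbook formula, not the paper's exact correlation-aware method, and all numerical values are illustrative:

```python
from scipy.stats import norm

def three_arm_total_n(mu_e, mu_r, mu_p, sigma, theta,
                      w_e, w_r, w_p, alpha=0.025, power=0.8):
    """Normal-approximation total sample size for the retention-of-effect
    test H0: mu_E - theta*mu_R - (1-theta)*mu_P <= 0 (larger = better).
    w_* are allocation fractions summing to 1; theta is the fraction of
    the reference effect to be retained (assumed values throughout)."""
    delta = mu_e - theta * mu_r - (1.0 - theta) * mu_p  # effect under H1
    z = norm.ppf(1.0 - alpha) + norm.ppf(power)
    var_factor = 1.0 / w_e + theta**2 / w_r + (1.0 - theta)**2 / w_p
    return (z / delta) ** 2 * sigma**2 * var_factor

# Illustrative values: retain 50% of the reference effect, unequal allocation.
N = three_arm_total_n(mu_e=10, mu_r=10, mu_p=5, sigma=8, theta=0.5,
                      w_e=0.4, w_r=0.4, w_p=0.2)
print(f"Total N (before rounding per group): {N:.0f}")
```

Consistent with the abstract, shifting allocation toward the active arms (here 40/40/20) reduces the required total relative to equal assignment.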
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been noted in previous studies, but no critical investigation of these issues has been reported in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias, and determined the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as the number of samples increased from 1 to 15, the acceptance ratio of both techniques increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
NASA Astrophysics Data System (ADS)
Mockford, T.; Zobeck, T. M.; Lee, J. A.; Gill, T. E.; Dominguez, M. A.; Peinado, P.
2012-12-01
Understanding the controls of mineral dust emissions and their particle size distributions during wind-erosion events is critical, as dust particles play a significant role in shaping the earth's climate. It has been suggested that emission rates and particle size distributions are independent of soil chemistry and soil texture. In this study, 45 samples of wind-erodible surface soils from the Southern High Plains and Chihuahuan Desert regions of Texas, New Mexico, Colorado and Chihuahua were analyzed by the Lubbock Dust Generation, Analysis and Sampling System (LDGASS) and a Beckman-Coulter particle multisizer. The LDGASS created dust emissions in a controlled laboratory setting using a rotating arm which allows particle collisions. The emitted dust was transferred to a chamber where particulate matter concentration was recorded using a DataRam and a MiniVol filter, and dust particle size distribution was recorded using a GRIMM particle analyzer. Particle size distributions were also determined from samples deposited on the MiniVol filters using a Beckman-Coulter particle multisizer. Soil textures of source samples ranged from sands and sandy loams to clays and silts. Initial results suggest that total dust emissions increased with increasing soil clay and silt content and decreased with increasing sand content. Particle size distribution analysis showed a similar relationship; soils with high silt content produced the widest range of dust particle sizes and the smallest dust particles, while sand grains appeared to produce the largest dust particles. Chemical control of dust emissions by calcium carbonate content will also be discussed.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat in Chayegang marshland near Henghu farm in the Poyang Lake region was selected as the experimental field, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
Atomistic origin of size effects in fatigue behavior of metallic glasses
NASA Astrophysics Data System (ADS)
Sha, Zhendong; Wong, Wei Hin; Pei, Qingxiang; Branicio, Paulo Sergio; Liu, Zishun; Wang, Tiejun; Guo, Tianfu; Gao, Huajian
2017-07-01
While many experiments and simulations on metallic glasses (MGs) have focused on their tensile ductility under monotonic loading, the fatigue mechanisms of MGs under cyclic loading still remain largely elusive. Here we perform molecular dynamics (MD) and finite element simulations of tension-compression fatigue tests in MGs to elucidate their fatigue mechanisms with focus on the sample size effect. Shear band (SB) thickening is found to be the inherent fatigue mechanism for nanoscale MGs. The difference in fatigue mechanisms between macroscopic and nanoscale MGs originates from whether the SB forms partially or fully through the cross-section of the specimen. Furthermore, a qualitative investigation of the sample size effect suggests that small sample size increases the fatigue life while large sample size promotes cyclic softening and necking. Our observations on the size-dependent fatigue behavior can be rationalized by the Gurson model and the concept of surface tension of the nanovoids. The present study sheds light on the fatigue mechanisms of MGs and can be useful in interpreting previous experimental results.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
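A rough sketch of the quantity described follows: one plausible formulation of a dissimilarity-based pseudo standard error, with the exact definition and the double-resampling uncertainty procedure deferred to the original paper. The formulation, metric, and data here are assumptions for illustration:

```python
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    """Pseudo multivariate standard error: sqrt(V/n), with V a pseudo
    variance built from the sum of squared pairwise dissimilarities
    (one common formulation; assumed here, see the paper for details)."""
    n = data.shape[0]
    d2 = pdist(data, metric=metric) ** 2
    ss = d2.sum() / n          # pseudo total sum of squares
    v = ss / (n - 1)           # pseudo variance
    return np.sqrt(v / n)

rng = np.random.default_rng(1)
community = rng.poisson(5.0, size=(40, 12)).astype(float)  # 40 samples x 12 taxa
for n in (5, 10, 20, 40):
    print(n, round(mult_se(community[:n]), 4))
# MultSE typically declines and levels off as n grows, suggesting a
# point of diminishing returns for added sampling effort.
```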
Sample size and power considerations in network meta-analysis
2012-01-01
Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study incorporating historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. Performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
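As a simplified illustration of exact Poisson-based sample size determination (a single-cohort version, not the paper's two-group hybrid design with historical controls; rates are illustrative):

```python
from scipy.stats import poisson

def exact_poisson_n(rate0, rate1, alpha=0.05, power=0.8, n_max=200_000):
    """Smallest cohort size n such that a one-sided exact Poisson test of
    H0: rate = rate0 against H1: rate = rate1 (> rate0) reaches the target
    power. Rates are events per person (or per person-year)."""
    for n in range(1, n_max):
        mu0, mu1 = n * rate0, n * rate1
        # Smallest critical count c with P(X >= c | mu0) <= alpha.
        c = int(poisson.ppf(1.0 - alpha, mu0)) + 1
        while poisson.sf(c - 1, mu0) > alpha:
            c += 1
        if poisson.sf(c - 1, mu1) >= power:
            return n, c  # required n and rejection threshold
    raise ValueError("n_max exceeded")

# Illustrative rare-event rates: 1 vs 3 events per 1,000 person-years.
print(exact_poisson_n(0.001, 0.003))
```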
Optimal number of features as a function of sample size for various classification rules.
Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R
2005-04-15
Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study is considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/ e-dougherty@ee.tamu.edu.
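A small simulation in the same spirit (LDA only, a Gaussian model, and illustrative parameters) reproduces the peaking phenomenon, where designed-classifier error falls and then rises as features are added at a fixed small sample size:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_half, n_test_half, max_d, reps = 10, 500, 25, 100  # 20 training samples total

mean_err = []
for d in range(1, max_d + 1):
    shift = 1.0 / np.sqrt(np.arange(1, d + 1))  # later features carry less signal
    errs = []
    for _ in range(reps):
        Xtr = np.vstack([rng.normal(0, 1, (n_half, d)),
                         rng.normal(0, 1, (n_half, d)) + shift])
        ytr = np.repeat([0, 1], n_half)
        Xte = np.vstack([rng.normal(0, 1, (n_test_half, d)),
                         rng.normal(0, 1, (n_test_half, d)) + shift])
        yte = np.repeat([0, 1], n_test_half)
        model = LinearDiscriminantAnalysis().fit(Xtr, ytr)
        errs.append(float((model.predict(Xte) != yte).mean()))
    mean_err.append(np.mean(errs))

# Test error typically falls and then rises ("peaks") as d grows for fixed n.
print(f"Minimum estimated error at d = {int(np.argmin(mean_err)) + 1}")
```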
Increased accuracy of batch fecundity estimates using oocyte stage ratios in Plectropomus leopardus.
Carter, A B; Williams, A J; Russ, G R
2009-08-01
Using the ratio of the number of migratory nuclei to hydrated oocytes to estimate batch fecundity of common coral trout Plectropomus leopardus increases the time over which samples can be collected and, therefore, increases the sample size available and reduces biases in batch fecundity estimates.
Yu, Jiaguo; Qi, Lifang; Cheng, Bei; Zhao, Xiufeng
2008-12-30
Tungsten trioxide hollow microspheres were prepared by immersing SrWO4 microspheres in a concentrated HNO3 solution and then calcining at different temperatures. The prepared tungsten oxide samples were characterized by X-ray diffraction, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, differential thermal analysis-thermogravimetry, UV-visible spectrophotometry, scanning electron microscopy, and N2 adsorption/desorption measurements. The photocatalytic activity of the samples was evaluated by photocatalytic decolorization of rhodamine B aqueous solution under visible-light irradiation. It was found that with increasing calcination temperature, the average crystallite size and average pore size increased, whereas the Brunauer-Emmett-Teller (BET) specific surface area decreased. Pore volume and porosity, however, first increased and then decreased. Increasing calcination temperature also changed the surface morphology of the hollow microspheres. The un-calcined and 300 degrees C-calcined samples showed higher photocatalytic activity than the other samples. At 400 degrees C, the photocatalytic activity decreased greatly due to the decrease in specific surface area. At 500 degrees C, the photocatalytic activity of the samples increased again due to the junction effect of the two phases.
Chen, Hua-xing; Tang, Hong-ming; Duan, Ming; Liu, Yi-gang; Liu, Min; Zhao, Feng
2015-01-01
In this study, the effects of gravitational settling time, temperature, speed and time of centrifugation, flocculant type and dosage, and bubble size and gas amount were investigated. The results show that simply increasing settling time and temperature is of no use for oil-water separation of the three wastewater samples. As far as oil-water separation efficiency is concerned, increasing centrifugal speed and centrifugal time is highly effective for the L sample, has some effect on the J sample, but is ineffective for the S sample. The flocculants are highly effective for the S and L samples, and the oil-water separation efficiency increases with the concentration of inorganic cationic flocculants. There exist critical reagent concentrations for the organic cationic and nonionic flocculants, above or below which the treatment efficiency decreases. Flotation is an effective approach for oil-water separation of polymer-containing wastewater from the three oilfields. The oil-water separation efficiency can be enhanced by increasing flotation agent concentration, flotation time and gas amount, and by decreasing bubble size.
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Calibrating the Ordovician Radiation of marine life: implications for Phanerozoic diversity trends
NASA Technical Reports Server (NTRS)
Miller, A. I.; Foote, M.
1996-01-01
It has long been suspected that trends in global marine biodiversity calibrated for the Phanerozoic may be affected by sampling problems. However, this possibility has not been evaluated definitively, and raw diversity trends are generally accepted at face value in macroevolutionary investigations. Here, we analyze a global-scale sample of fossil occurrences that allows us to determine directly the effects of sample size on the calibration of what is generally thought to be among the most significant global biodiversity increases in the history of life: the Ordovician Radiation. Utilizing a composite database that includes trilobites, brachiopods, and three classes of molluscs, we conduct rarefaction analyses to demonstrate that the diversification trajectory for the Radiation was considerably different than suggested by raw diversity time-series. Our analyses suggest that a substantial portion of the increase recognized in raw diversity depictions for the last three Ordovician epochs (the Llandeilian, Caradocian, and Ashgillian) is a consequence of increased sample size of the preserved and catalogued fossil record. We also use biometric data for a global sample of Ordovician trilobites, along with methods of measuring morphological diversity that are not biased by sample size, to show that morphological diversification in this major clade had leveled off by the Llanvirnian. The discordance between raw diversity depictions and more robust taxonomic and morphological diversity metrics suggests that sampling effects may strongly influence our perception of biodiversity trends throughout the Phanerozoic.
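Rarefaction of taxon counts follows the classical hypergeometric formula; a minimal sketch (illustrative counts, not the paper's data):

```python
import numpy as np
from scipy.special import gammaln

def rarefied_richness(counts, n):
    """Expected number of taxa in a random subsample of n occurrences
    (classical Hurlbert/Sanders rarefaction)."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()

    def log_comb(a, b):  # log of the binomial coefficient C(a, b)
        return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

    p_absent = np.zeros_like(counts)
    can_miss = (N - counts) >= n  # taxon could be entirely absent from subsample
    p_absent[can_miss] = np.exp(log_comb(N - counts[can_miss], n) - log_comb(N, n))
    return float((1.0 - p_absent).sum())

# Illustrative abundance vector (10 taxa, 127 occurrences in total):
abundant = [50, 30, 20, 10, 5, 5, 3, 2, 1, 1]
print(round(rarefied_richness(abundant, 30), 2))   # expected richness at n = 30
print(round(rarefied_richness(abundant, 100), 2))  # larger samples recover more taxa
```

Comparing epochs at a common subsample size n, rather than at raw sample size, is what separates genuine diversification from apparent increases driven by a better-sampled record.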
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
Grindability and combustion behavior of coal and torrefied biomass blends.
Gil, M V; García, R; Pevida, C; Rubiera, F
2015-09-01
Biomass samples (pine, black poplar and chestnut woodchips) were torrefied to improve their grindability before being combusted in blends with coal. Torrefaction temperatures between 240 and 300 °C and residence times between 11 and 43 min were studied. The grindability of the torrefied biomass, evaluated from the particle size distribution of the ground sample, significantly improved compared to raw biomass. Higher temperatures increased the proportion of smaller-sized particles after grinding. Torrefied chestnut woodchips (280 °C, 22 min) showed the best grinding properties. This sample was blended with coal (5-55 wt.% biomass). The addition of torrefied biomass to coal up to 15 wt.% did not significantly increase the proportion of large-sized particles after grinding. No relevant differences in the burnout value were detected between the coal and coal/torrefied biomass blends due to the high reactivity of the coal. NO and SO2 emissions decreased as the percentage of torrefied biomass in the blend with coal increased. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effects of normalization on quantitative traits in association test
2009-01-01
Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
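A minimal sketch of the rank-based transformation evaluated above (a Blom-type rank-based inverse normal transform; the offset choice is a common convention, assumed here):

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_normal(x, offset=0.375):
    """Rank-based inverse normal transformation (Blom offset by default).
    Maps a trait to approximate normality while preserving rank order."""
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x)  # ties receive average ranks
    quantiles = (ranks - offset) / (len(x) - 2.0 * offset + 1.0)
    return norm.ppf(quantiles)

rng = np.random.default_rng(7)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # heavily right-skewed
z = rank_inverse_normal(skewed)
print(round(float(np.mean(z)), 3), round(float(np.std(z)), 3))  # ~0, ~1
```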
Enhancing the Damping Behavior of Dilute Zn-0.3Al Alloy by Equal Channel Angular Pressing
NASA Astrophysics Data System (ADS)
Demirtas, M.; Atli, K. C.; Yanar, H.; Purcek, G.
2017-06-01
The effect of grain size on the damping capacity of a dilute Zn-0.3Al alloy was investigated. It was found that there was a critical strain value (≈1 × 10⁻⁴) below and above which damping of Zn-0.3Al showed dynamic and static/dynamic hysteresis behavior, respectively. In the dynamic hysteresis region, damping resulted from viscous sliding of phase/grain boundaries, and decreasing grain size increased the damping capacity. While the quenched sample with 100 to 250 µm grain size showed very limited damping capacity with a loss factor tanδ of less than 0.007, decreasing grain size down to 2 µm by equal channel angular pressing (ECAP) increased tanδ to 0.100 in this region. Dynamic recrystallization due to microplasticity at the sample surface was proposed as the damping mechanism for the first time in the region where the alloy showed the combined aspects of dynamic and static hysteresis damping. In this region, tanδ increased with increasing strain amplitude, and ECAPed sample showed a tanδ value of 0.256 at a strain amplitude of 2 × 10⁻³, the highest recorded so far in the damping capacity-related studies on ZA alloys.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique used to enable data analytics on large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented by classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% survival over four years, then sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation
NASA Astrophysics Data System (ADS)
Luo, L.; Cheng, Z.
2016-12-01
In-situ measurement of PM2.5 physical and chemical properties is an important approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed by numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with flowrate of 20.0 L·min-1, bore size of 4 mm and length of 1.0 m was 89.6%. However, the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with flowrate of 1.0 L·min-1, bore size of 4 mm and length of 10.0 m is 86.7%, increasing to 99.2% at a length of 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with flowrate of 20.0 L·min-1, bore size of 4 mm and curvature angle of 90°. Keeping the ratio of flowrate (L·min-1) to bore size (mm) below 1.4 maintains laminar air flow in the tube, which is beneficial for decreasing PM2.5 transport loss. For the target of PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with length less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min-1, and bore sizes larger than 12 mm for flowrates of 16.7 or 20.0 L·min-1. For horizontal sampling tubes, tube length is decided by the ratio of flowrate to bore size. Meanwhile, it is suggested to decrease the number of bends in tubes with turbulent flow.
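The laminar-flow guideline can be checked with a Reynolds-number estimate; a sketch assuming a standard kinematic viscosity of air and the usual Re < ~2300 laminar threshold:

```python
import math

def reynolds_number(flow_l_min: float, bore_mm: float,
                    nu_air: float = 1.5e-5) -> float:
    """Reynolds number for air flow in a circular tube:
    Re = 4Q / (pi * D * nu). nu_air is the kinematic viscosity of air
    in m^2/s (~20 C, an assumed standard value)."""
    q = flow_l_min / 1000.0 / 60.0   # L/min -> m^3/s
    d = bore_mm / 1000.0             # mm -> m
    return 4.0 * q / (math.pi * d * nu_air)

for flow, bore in [(20.0, 4.0), (20.0, 14.0), (5.0, 4.0)]:
    re = reynolds_number(flow, bore)
    regime = "laminar" if re < 2300 else "turbulent"
    print(f"{flow:5.1f} L/min, {bore:4.1f} mm bore: Re = {re:6.0f} ({regime})")
# Flow/bore ratios below ~1.4 (L/min per mm) give Re around 2000 or less,
# inside the laminar regime, consistent with the guideline above.
```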
NASA Astrophysics Data System (ADS)
Ratnawulan, Fauzi, Ahmad; AE, Sukma Hayati
2017-08-01
Copper oxide powder was prepared from copper ore from South Solok, Indonesia. The samples were dried and calcined for an hour at temperatures of 145 °C, 300 °C, 850 °C and 1000 °C. Phase transformation and crystallite size of the calcined powders have been investigated as a function of calcination temperature by room-temperature X-ray diffraction (XRD). Tenorite (CuO) was successfully obtained. With increasing calcination temperature, the material transformed from malachite, Cu2(CO3)(OH)2, to the tenorite phase (CuO), and the crystallite size of the prepared samples increased from 36 nm to 76 nm.
Phase Composition, Crystallite Size and Physical Properties of B2O3-added Forsterite Nano-ceramics
NASA Astrophysics Data System (ADS)
Pratapa, S.; Chairunnisa, A.; Nurbaiti, U.; Handoko, W. D.
2018-05-01
This study aimed to determine the effect of B2O3 addition on the phase composition, crystallite size and dielectric properties of forsterite (Mg2SiO4) nano-ceramics. It utilized purified silica sand from Tanah Laut, South Kalimantan as the source of (amorphous) silica and a magnesium oxide (MgO) powder. They were thoroughly mixed and milled prior to calcination. The addition of 1, 2, 3, and 4 wt% B2O3 to the calcined powder was done before uniaxial pressing and then sintering at 950 °C for 4 h. The phase composition and forsterite crystallite size, the microstructure and the dielectric constant of the sintered samples were characterized using an X-ray diffractometer (XRD), scanning electron microscope (SEM) and vector network analyzer (VNA), respectively. Results showed that all samples contained forsterite, periclase (MgO) and protoenstatite (MgSiO3) with different weight fractions and forsterite crystallite sizes. In general, the weight fraction and crystallite size of forsterite increased with increasing B2O3 addition, reaching 99 wt% and 164 nm in the 4%-added sample. Furthermore, the SEM images showed that the average grain size became slightly larger and the ceramics slightly denser as more B2O3 was added. These results are in accordance with density measurements using the Archimedes method, which showed that the 4%-added ceramic exhibited an apparent density of 1.845 g/cm3, while the 1%-added ceramic exhibited 1.681 g/cm3. We also found that the higher the density, the higher the average dielectric constant: 4.6 for the 1%-added sample and 6.4 for the 4%-added sample.
Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne
2017-02-28
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
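For grounding, a minimal constant-parameter zero-inflated Poisson fit by maximum likelihood (the paper's model additionally lets the parameters vary smoothly with time; all values here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(Y=0) = pi + (1-pi)e^{-lam}; P(Y=k) = (1-pi) Pois(k; lam) for k > 0."""
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return -np.where(y == 0, ll_zero, ll_pos).sum()

rng = np.random.default_rng(3)
n, true_pi, true_lam = 500, 0.4, 2.5
y = np.where(rng.random(n) < true_pi, 0, rng.poisson(true_lam, n))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"pi_hat = {pi_hat:.3f}, lam_hat = {np.exp(fit.x[1]):.3f}")
```

The two components separate the probability of abstinence (the zero-inflation part) from the quantity of use among users (the Poisson part), which is the distinction the abstract's power comparison refers to.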
Exact tests using two correlated binomial variables in contemporary cancer clinical trials.
Yu, Jihnhee; Kepner, James L; Iyer, Renuka
2009-12-01
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method for testing cytostatic cancer treatments using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.
A field instrument for quantitative determination of beryllium by activation analysis
Vaughn, William W.; Wilson, E.E.; Ohm, J.M.
1960-01-01
A low-cost instrument has been developed for quantitative determinations of beryllium in the field by activation analysis. The instrument makes use of the gamma-neutron reaction between gammas emitted by an artificially radioactive source (Sb124) and beryllium as it occurs in nature. The instrument and power source are mounted in a panel-type vehicle. Samples are prepared by hand-crushing the rock to approximately ?-inch mesh size and smaller. Sample volumes are kept constant by means of a standard measuring cup. Instrument calibration, made by using standards of known BeO content, indicates the analyses are reproducible and accurate to within ±0.25 percent BeO in the range from 1 to 20 percent BeO with a sample counting time of 5 minutes. Sensitivity of the instrument may be increased somewhat by increasing the source size, the sample size, or by enlarging the cross-sectional area of the neutron-sensitive phosphor normal to the neutron flux.
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz
2013-01-01
Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationship between variables, and sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables, and sample size, simulations were carried out. Conditions were considered in which sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 and 100; and in which the relationship between variables is quite high, medium, or quite low. Results: Average classification accuracies from simulations repeated 1000 times for each condition of the trial plan are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is suitable when the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values. PMID:25207065
Boonna, Sureeporn; Tongta, Sunanta
2018-07-01
Structural transformation of crystallized debranched cassava starch prepared by temperature cycling (TC) treatment and then subjected to annealing (ANN), heat-moisture treatment (HMT) and dual hydrothermal treatments of ANN and HMT was investigated. The relative crystallinity, lateral crystal size, melting temperature and resistant starch (RS) content increased for all hydrothermally treated samples, but the slowly digestible starch (SDS) content decreased. The RS content followed the order: HMT → ANN > HMT > ANN → HMT > ANN > TC. The HMT → ANN sample showed a larger lateral crystal size with more homogeneity, whereas the ANN → HMT sample had a smaller lateral crystal size with a higher melting temperature. After cooking at 50% moisture, an increased RS content was observed, particularly for the ANN → HMT sample. These results suggest that structural changes of crystallized debranched starch during hydrothermal treatments depend on initial crystalline characteristics and treatment sequences, influencing thermal stability, enzyme digestibility, and cooking stability. Copyright © 2018 Elsevier Ltd. All rights reserved.
Diet of land birds along an elevational gradient in Papua New Guinea.
Sam, Katerina; Koane, Bonny; Jeppy, Samuel; Sykorova, Jana; Novotny, Vojtech
2017-03-09
Food preferences and exploitation are crucial to many aspects of avian ecology and are of increasing importance as we progress in our understanding of community ecology. We studied birds and their feeding specialization in the Central Range of Papua New Guinea, at eight study sites along a complete (200 to 3700 m a.s.l.) rainforest elevational gradient. The relative species richness and abundance increased with increasing elevation for insect- and nectar-eating birds, and decreased with elevation for fruit-eating birds. Using tartar emetic, we induced 999 individuals from 99 bird species to regurgitate their stomach contents and studied these food samples. The proportion of arthropods in food samples increased with increasing elevation at the expense of plant material. The body size of arthropods eaten by birds decreased with increasing elevation, reflecting the parallel elevational trend in the body size of arthropods available in the forest understory. The body size of insectivorous birds was significantly positively correlated with the body size of the arthropods they ate. Coleoptera were the most exploited arthropods, followed by Araneae, Hymenoptera, and Lepidoptera. Selectivity indexes showed that most of the arthropod taxa were taken opportunistically, reflecting the spatial patterns in arthropod abundance to which the birds were exposed.
Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang
2017-11-15
In this study, single particle mass spectra signatures of particles emitted by both a coal burning boiler and a biomass burning boiler were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of BBB (biomass burning boiler sample) and CBB (coal burning boiler sample) differ: BBB peaks at a smaller size and CBB at a larger size. Mass spectra signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. In conclusion, the BBB sample mostly consists of OC and EC containing particles, and a small fraction of K-rich particles, in the size range of 0.2-0.5 μm. In the 0.5-1.0 μm range, the BBB sample consists of EC, OC, K-rich and Al_Silicate containing particles, while the CBB sample consists of EC and ECOC containing particles, with Al_Silicate (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) containing particles accounting for higher fractions as size increases. The similarity of single particle mass spectrum signatures between the two samples was studied by analyzing the dot product; results indicated that some of the single particle mass spectra of the two samples in the same size range are similar, which poses a challenge for future source apportionment using single particle aerosol mass spectrometry. Results of this study provide physicochemical information on important sources contributing to particle pollution and will support source apportionment activities. Copyright © 2017. Published by Elsevier B.V.
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsule benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup model of an ANCOVA analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.
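The nuisance-parameter-driven re-estimation step can be illustrated with the standard two-arm normal-approximation formula (a generic sketch, not PRIMO's pre-specified procedure; all values are illustrative):

```python
import math
from scipy.stats import norm

def reestimated_n_per_arm(sd_interim, delta, alpha=0.05, power=0.9):
    """Sample size re-estimation for a two-arm comparison of means:
    plug the interim SD estimate (the nuisance parameter) into the
    standard normal-approximation formula."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return math.ceil(2.0 * (z * sd_interim / delta) ** 2)

# Planned with SD = 10; interim data suggest more variability (SD = 14).
print(reestimated_n_per_arm(10.0, delta=5.0))  # original assumption
print(reestimated_n_per_arm(14.0, delta=5.0))  # re-estimated, larger n
```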
Structural elucidation and magnetic behavior evaluation of Cu-Cr doped BaCo-X hexagonal ferrites
NASA Astrophysics Data System (ADS)
Azhar Khan, Muhammad; Hussain, Farhat; Rashid, Muhammad; Mahmood, Asif; Ramay, Shahid M.; Majeed, Abdul
2018-04-01
Ba2-xCuxCo2CryFe28-yO46 (x = 0.0, 0.1, 0.2, 0.3, 0.4; y = 0.0, 0.2, 0.4, 0.6, 0.8) X-type hexagonal ferrites were synthesized via the micro-emulsion route. The techniques applied to characterize the prepared samples were X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), dielectric measurements and vibrating sample magnetometry (VSM). The structural parameters, i.e. lattice constants (a, c), cell volume (V), X-ray density, bulk density and crystallite size of all the prepared samples, were obtained from XRD analysis. The lattice parameters 'a' and 'c' increase from 5.875 Å to 5.934 Å and 83.367 Å to 83.990 Å, respectively. The crystallite size of the investigated samples lies in the range of 28-32 nm. The magnetic properties of all samples were determined by vibrating sample magnetometer (VSM) analysis. Coercivity (Hc) increased with increasing doping content. The coercivity of all prepared samples is inversely related to the crystallite size, which suggests that the materials are super-paramagnetic. The dielectric parameters, i.e. dielectric constant, dielectric loss and tangent loss, were obtained in the frequency range of 1 MHz-3 GHz and followed the Maxwell-Wagner model. Significant variation of the dielectric parameters is observed with increasing frequency. The maximum Q value is obtained at ∼2 GHz, which makes these materials suitable for high frequency multilayer chip inductors.
Variation of phytoplankton assemblages along the Mozambique coast as revealed by HPLC and microscopy
NASA Astrophysics Data System (ADS)
Sá, C.; Leal, M. C.; Silva, A.; Nordez, S.; André, E.; Paula, J.; Brotas, V.
2013-05-01
This study is an integrated overview of pigment and microscopic analysis of phytoplankton communities throughout the Mozambican coast. Collected samples revealed notable patterns of phytoplankton occurrence and distribution, with community structure changing between regions and sample depth. Pigment data showed Delagoa Bight, Sofala Bank and Angoche as the most productive regions throughout the sampled area. In general, micro-sized phytoplankton, particularly diatoms, were important contributors to biomass both at surface and sub-surface maximum (SSM) samples, although were almost absent in the northern stations. In contrast, nano- and pico-sized phytoplankton revealed opposing patterns. Picophytoplankton were most abundant at surface, as opposed to nanophytoplankton, which were more abundant at the SSM. Microphytoplankton were associated with cooler southern water masses, while picophytoplankton were related to warmer northern water masses. Nanophytoplankton were found to increase their contribution to biomass with increasing SSM. Microscopy information on the genera and species level revealed the diatoms Chaetoceros spp., Proboscia alata, Pseudo-nitzschia spp., Cylindrotheca closterium and Hemiaulus haukii as the most abundant taxa of the micro-sized phytoplankton. Discosphaera tubifera and Emiliania huxleyi were the most abundant coccolithophores, nano-sized phytoplankton.
Microstructural Evaluation of Forging Parameters for Superalloy Disks
NASA Technical Reports Server (NTRS)
Falsey, John R.
2004-01-01
Forgings of nickel base superalloy were formed under several different strain rates and forging temperatures. Samples were taken from each forging condition to determine the ASTM grain size and the as-large-as (ALA) grain size. The specimens were mounted in bakelite, polished, and etched, and then optical microscopy was used to determine grain size. The specimens' ASTM grain sizes from each forging condition were plotted against strain rate, forging temperature, and presoak time. Grain sizes increased with increasing forging temperature. Grain sizes also increased with decreasing strain rates and increasing forging presoak time. The ALA grain size was determined for each forging condition using the ASTM standard method. Each ALA was compared with the ASTM grain size of the corresponding forging condition to determine whether the grain sizes were uniform. The forging condition with a strain rate of .03/sec and supersolvus heat treatment produced nonuniform grains, indicated by critical grain growth. Other anomalies are noted as well.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
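A prospective calculation of the kind recommended, sizing a study from a minimum biologically significant effect rather than an observed one, can be sketched with the normal approximation (illustrative values):

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sample comparison of means at standardized
    effect size d (Cohen's d), two-sided alpha, normal approximation."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return math.ceil(2.0 * (z / effect_size) ** 2)

# Choose d a priori as the minimum biologically significant effect:
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: n per group ~ {n_per_group(d)}")
```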
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
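The sensitivity of fitted allometric parameters to sample size can be reproduced with a small simulation (log-log OLS on a synthetic height-crown radius power law; all parameter values are assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed "true" allometry: height = a * crown_radius^b with lognormal scatter.
a_true, b_true, sigma = 8.0, 0.6, 0.3

def fit_power_law(n):
    r = rng.uniform(0.5, 6.0, n)
    h = a_true * r**b_true * rng.lognormal(0.0, sigma, n)
    # Ordinary least squares on the log-log scale.
    slope, intercept = np.polyfit(np.log(r), np.log(h), 1)
    return np.exp(intercept), slope

for n in (15, 50, 500, 5000):
    fits = np.array([fit_power_law(n) for _ in range(300)])
    print(f"n = {n:5d}: a spread = {fits[:, 0].std():.2f}, "
          f"b spread = {fits[:, 1].std():.3f}")
# Parameter spread shrinks with sample size; small-sample fits can land
# far from the true (a, b), propagating error into biomass estimates.
```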
Sampling bee communities using pan traps: alternative methods increase sample size
USDA-ARS's Scientific Manuscript database
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most such studies are designed to control ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
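A minimal sketch of the first calculator described above, assuming the reversible catalytic model SP(a) = λ/(λ+ρ)·(1 − e^(−(λ+ρ)a)) evaluated at a single representative age; all numerical values (SP, SRR, age, CI half-width) are illustrative assumptions, not values from the study.

```python
# Sketch of the first calculator: choose n for a target precision on
# seroprevalence (SP), then map the SP confidence limits to seroconversion
# rates (SCR, lambda) by inverting the reversible catalytic model
#   SP(a) = lambda/(lambda+rho) * (1 - exp(-(lambda+rho)*a))
# at a representative age a, for a known seroreversion rate (SRR, rho).
# All numeric values below are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def n_for_proportion(p, half_width, alpha=0.05):
    """Normal-approximation sample size for a proportion CI of given half-width."""
    z = norm.ppf(1 - alpha / 2)
    return int(np.ceil(z**2 * p * (1 - p) / half_width**2))

def scr_from_sp(sp, rho, age):
    """Invert the reversible catalytic model for lambda at a given age."""
    f = lambda lam: lam / (lam + rho) * (1 - np.exp(-(lam + rho) * age)) - sp
    return brentq(f, 1e-8, 10.0)

sp_hat, rho, age = 0.20, 0.01, 15.0   # assumed SP, known SRR, representative age
n = n_for_proportion(sp_hat, half_width=0.05)
z = norm.ppf(0.975)
se = np.sqrt(sp_hat * (1 - sp_hat) / n)
lo, hi = sp_hat - z * se, sp_hat + z * se
print(f"n = {n}, SCR CI ~ ({scr_from_sp(lo, rho, age):.4f}, {scr_from_sp(hi, rho, age):.4f})")
```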
Measurement of tortuosity in aluminum foams using airborne ultrasound.
Le, Lawrence H; Zhang, Chan; Ta, Dean; Lou, Edmond
2010-01-01
The slow compressional wave in air-saturated aluminum foams was studied by means of the ultrasonic transverse-transmission method over a frequency range from 0.2 MHz to 0.8 MHz. The samples investigated have three different cell sizes, or pores per inch (5, 10 and 20 ppi), and each size has three aluminum volume fractions (5%, 8% and 12% AVF). Phase velocities show minor dispersion at low frequencies but remain constant above 0.7 MHz. Pulse broadening and amplitude attenuation are evident and increase with increasing ppi. Attenuation increases considerably with AVF for 20 ppi foams. Tortuosity ranges from 1.003 to 1.032 and increases with AVF and ppi. However, the increase of tortuosity with AVF is very small for the 10 and 20 ppi samples.
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors increase, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing (blurring) of features at and below the footprint size, and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
Effect of annealing temperature on the size and magnetic properties of CoFe2O4 nanoparticle
NASA Astrophysics Data System (ADS)
Sunny, Annrose; Akshay, V. R.; Vasundhara, M.
2018-05-01
CoFe2O4 (CFO) nanoparticles (NPs) were synthesized by the sol-gel method and annealed at 400, 600 and 800 °C for 4 h. The crystal structure and morphology of the NPs were investigated by XRD and TEM analysis. X-ray diffraction shows that all samples are well formed and adopt a cubic structure with the Fd-3m space group. The particles are polygonal in morphology, and particle size increases with annealing temperature, from 20 nm at 400 °C to 30 nm at 600 °C and 70 nm at 800 °C. The magnetic properties of the NPs were investigated using VSM; the Curie temperatures for the 400, 600 and 800 °C annealing temperatures are 762 K, 780 K and 769 K, respectively. The Ms of the 600 °C sample is 80 emu/g. The 400 and 800 °C samples show lower Ms values, attributable to poor crystallinity and exaggerated grain growth at the respective temperatures. The coercivity varies linearly with the particle size of the material: the highest coercivity is obtained for the 400 °C sample and the lowest for the 800 °C sample.
Sample size in psychological research over the past 30 years.
Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B
2011-04-01
The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
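The inflation mechanism can be sketched by Monte Carlo: under H0, a worst-case experimenter observes the stage-1 z-statistic and then picks the stage-2 sample size (within assumed limits) that maximizes the conditional rejection probability of the naive fixed-sample z-test. This is a simplified illustration of the setting studied above, not the paper's exact calculation; all design constants are assumptions.

```python
# Monte Carlo sketch of worst-case type I error inflation under sample size
# adaptation. Variance is known; the final analysis naively pools both stages
# into one fixed-sample z-statistic. Design constants are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.025                       # one-sided nominal level
z_crit = norm.ppf(1 - alpha)
n1 = 50                             # stage-1 size per arm (assumed)
n2_grid = np.arange(5, 501, 5)      # allowed stage-2 sizes per arm (assumed)

n_sim = 50_000
z1 = rng.standard_normal(n_sim)     # stage-1 z-statistic under H0

# Naive pooled statistic: Z = (sqrt(n1)*z1 + sqrt(n2)*z2) / sqrt(n1 + n2),
# with z2 ~ N(0,1) independent. Conditional rejection probability given z1:
thresh = (z_crit * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1[:, None]) / np.sqrt(n2_grid)
cond_err = 1 - norm.cdf(thresh)

max_type1 = cond_err.max(axis=1).mean()   # adversary picks the best n2 each time
print(f"Maximum type I error rate: {max_type1:.4f} (nominal {alpha})")
```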
How Methodological Features Affect Effect Sizes in Education
ERIC Educational Resources Information Center
Cheung, Alan; Slavin, Robert
2016-01-01
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities.
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
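The authors provide R functions; the following Python sketch merely illustrates the quantity, using the standard PERMANOVA identity SS = (1/n)·Σ d² over all pairs, pseudo variance V = SS/(n − 1), and MultSE = √(V/n). The Bray-Curtis metric and the simulated abundance matrix are assumptions for illustration only.

```python
# Minimal sketch of the MultSE quantity described above.
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    """Pseudo multivariate standard error for an (n samples x p species) matrix."""
    n = data.shape[0]
    d = pdist(data, metric=metric)   # condensed vector of pairwise dissimilarities
    ss = (d ** 2).sum() / n          # total sum of squares in dissimilarity space
    v = ss / (n - 1)                 # pseudo variance
    return np.sqrt(v / n)

rng = np.random.default_rng(1)
community = rng.poisson(lam=3.0, size=(20, 50))   # simulated abundance matrix
print(f"MultSE = {mult_se(community):.4f}")
```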
Godri, Krystal J.; Harrison, Roy M.; Evans, Tim; Baker, Timothy; Dunster, Christina; Mudway, Ian S.; Kelly, Frank J.
2011-01-01
As the incidence of respiratory and allergic symptoms has been reported to be increased in children attending schools in close proximity to busy roads, it was hypothesised that PM from roadside schools would display enhanced oxidative potential (OP). Two consecutive one-week air quality monitoring campaigns were conducted at seven school sampling sites, reflecting roadside and urban background in London. Chemical characteristics of size-fractionated particulate matter (PM) samples were related to the capacity to drive biological oxidation reactions in a synthetic respiratory tract lining fluid. Contrary to the hypothesised contrasts in particulate OP between school site types, no robust size-fractionated differences in OP were identified, due to high temporal variability in concentrations of PM components over the one-week sampling campaigns. For OP assessed both by ascorbate (OPAA m−3) and glutathione (OPGSH m−3) depletion, the highest OP per cubic metre of air was in the largest size fraction, PM1.9–10.2. However, when expressed per unit mass of particles, OPAA µg−1 showed no significant dependence upon particle size, while OPGSH µg−1 had a tendency to increase with increasing particle size, paralleling increased concentrations of Fe, Ba and Cu. The two OP metrics were not significantly correlated with one another, suggesting that the glutathione and ascorbate depletion assays respond to different components of the particles. Ascorbate depletion per unit mass did not show the same dependence as for GSH, and it is possible that other trace metals (Zn, Ni, V) or organic components which are enriched in the finer particle fractions, or the greater surface area of smaller particles, counter-balance the redox activity of Fe, Ba and Cu in the coarse particles. Further work with longer-term sampling and a larger suite of analytes is advised in order to better elucidate the determinants of oxidative potential, and to more fully explore the contrasts between site types. PMID:21818283
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Of all the sampling strategies, reducing the number of sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
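The resampling procedure can be sketched as follows: for each sample size, draw many random subsets of sites, estimate the hillslope mean SWC, and summarize accuracy (RE) and uncertainty (CV) across replicates. The SWC values below are synthetic stand-ins for the field measurements.

```python
# Subsampling analysis in the spirit of the study above. SWC data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
swc = rng.normal(loc=0.30, scale=0.05, size=77)   # synthetic SWC at 77 sites
true_mean = swc.mean()

n_rep = 3000                                       # replicates per sample size (as in the study)
for k in (6, 12, 24, 48, 72):
    means = np.array([rng.choice(swc, size=k, replace=False).mean()
                      for _ in range(n_rep)])
    re = np.mean(np.abs(means - true_mean) / true_mean) * 100   # mean relative error, %
    cv = means.std(ddof=1) / means.mean() * 100                 # coefficient of variation, %
    print(f"k={k:3d}: RE={re:5.2f}%  CV={cv:5.2f}%")
```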
Erfani, Maryam; Saion, Elias; Soltani, Nayereh; Hashim, Mansor; Wan Abdullah, Wan Saffiey B.; Navasery, Manizheh
2012-01-01
Calcium borate nanoparticles have been synthesized by a thermal treatment method via facile co-precipitation. The effects of annealing temperature and annealing time on the crystal structure, particle size, size distribution and thermal stability of the nanoparticles were investigated. The formation of the calcium borate compound was characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), and thermogravimetry (TGA). The XRD patterns revealed that co-precipitated samples annealed at 700 °C for 3 h remained amorphous, and transformation into a crystalline structure occurred only after 5 h of annealing. It was found that the samples annealed at 900 °C are mostly metaborate (CaB2O4) nanoparticles, while tetraborate (CaB4O7) nanoparticles were observed only at 970 °C, as confirmed by FTIR. The TEM images indicated that the average particle size increases with increasing annealing time and temperature. TGA analysis confirmed the thermal stability of the samples annealed at higher temperatures. PMID:23203073
Effects of grain size on the properties of bulk nanocrystalline Co-Ni alloys
NASA Astrophysics Data System (ADS)
Qiao, Gui-Ying; Xiao, Fu-Ren
2017-08-01
Bulk nanocrystalline Co78Ni22 alloys with grain sizes ranging from 5 nm to 35 nm were prepared by high-speed jet electrodeposition (HSJED) and annealing. Microhardness and magnetic properties of these alloys were investigated by microhardness tester and vibrating sample magnetometer. Effects of grain size on these characteristics were also discussed. Results show that the microhardness of nanocrystalline Co78Ni22 alloys increases following a d^(-1/2) power law with decreasing grain size d. This phenomenon fits the Hall-Petch law when the grain size ranges from 5 nm to 35 nm. However, coercivity Hc increases following a 1/d power law with increasing grain size when the grain size ranges from 5 nm to 15.9 nm. Coercivity Hc decreases again for grain sizes above 16.6 nm, according to the d^6 power law.
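For illustration, the Hall-Petch relation cited above, H = H0 + k·d^(-1/2), is linear in d^(-1/2), so a straight-line fit recovers H0 and k. The data and constants below are synthetic, not values from the paper.

```python
# Hall-Petch fit: microhardness H = H0 + k * d**(-1/2) plotted against
# d**(-1/2) is a straight line; a linear fit recovers H0 and k.
import numpy as np

d = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 35.0])   # grain size, nm (synthetic)
h = 2.0 + 4.5 * d ** -0.5 + np.random.default_rng(3).normal(0, 0.02, d.size)  # GPa

x = d ** -0.5
k_fit, h0_fit = np.polyfit(x, h, 1)   # slope = k, intercept = H0
print(f"H0 ~ {h0_fit:.2f} GPa, k ~ {k_fit:.2f} GPa*nm^1/2")
```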
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and with 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM), to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area, between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
Dynamic properties of cluster glass in La0.25Ca0.75MnO3 nanoparticles
NASA Astrophysics Data System (ADS)
Huang, X. H.; Ding, J. F.; Jiang, Z. L.; Yin, Y. W.; Yu, Q. X.; Li, X. G.
2009-10-01
The dynamic magnetic properties of cluster glass in La0.25Ca0.75MnO3 nanoparticles with average particle sizes ranging from 40 to 1000 nm have been investigated by measuring the frequency and dc magnetic field (H) dependencies of the ac susceptibility. The frequency-dependent Tf, the freezing temperature of the ferromagnetic clusters determined by the peak in the real part of the ac susceptibility χ' versus T curve with H = 0, is fitted to a power law. The relaxation time constant τ0 decreases as the particle size increases from 40 to 350 nm, which indicates a decrease in the size of the clusters at the surface of the nanoparticles. The relationship between H and Tf(H) deviates from the De Almeida-Thouless-type phase boundary at relatively high fields for the samples with sizes ranging from 40 to 350 nm. Moreover, for the samples with particle sizes of 40 and 100 nm, τ0 increases with increasing H, which indicates increasing cluster size and may be ascribed to the competition between the influence of H and the local anisotropy field in the shell spins. All these results may give rise to new insight into the behavior of the cluster glass state in nanosized antiferromagnetic charge-ordered perovskite manganites.
Estrada, Nicolas; Oquendo, W F
2017-10-01
This article presents a numerical study of the effects of grain size distribution (GSD) on the microstructure of two-dimensional packings of frictionless disks. The GSD is described by a power law with two parameters controlling the size span and the shape of the distribution. First, several samples are built for each combination of these parameters. Then, by means of contact dynamics simulations, the samples are densified in oedometric conditions and sheared in a simple shear configuration. The microstructure is analyzed in terms of packing fraction, local ordering, connectivity, and force transmission properties. It is shown that the microstructure is markedly affected by both the size span and the shape of the GSD. These findings confirm recent observations regarding the size span of the GSD and extend previous works by describing the effects of the GSD shape. Specifically, we find that if the GSD shape is varied by increasing the proportion of small grains by a certain amount, it is possible to increase the packing fraction, increase coordination, and decrease the proportion of floating particles. Thus, by carefully controlling the GSD shape, it is possible to obtain systems that are denser and better connected, probably increasing the system's robustness and optimizing important strength properties such as stiffness, cohesion, and fragmentation susceptibility.
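A two-parameter power-law GSD of the kind described can be sampled by inverse-CDF transformation; the sketch below draws grain diameters from pdf ∝ d^(−a) truncated to [dmin, dmax] and shows how the shape parameter shifts mass toward small grains. Parameter values are illustrative.

```python
# Inverse-CDF sampling of a truncated power-law grain size distribution:
# one parameter for the size span (dmin, dmax) and one for the shape (a).
import numpy as np

def sample_gsd(n, dmin, dmax, a, rng):
    """Draw n grain diameters from pdf ~ d**(-a) on [dmin, dmax], a != 1."""
    u = rng.random(n)
    lo, hi = dmin ** (1 - a), dmax ** (1 - a)
    return (lo + u * (hi - lo)) ** (1 / (1 - a))

rng = np.random.default_rng(7)
for a in (0.5, 2.0, 3.0):                       # larger a => more small grains
    d = sample_gsd(100_000, dmin=0.1, dmax=1.0, a=a, rng=rng)
    frac_small = np.mean(d < 0.2)
    print(f"a={a}: fraction of grains below 0.2 = {frac_small:.2f}")
```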
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaur, Manpreet, E-mail: manpreet.kaur@thapar.edu; Singh, Gaganjot; Bimbraw, Keshav
Nanostructured titania was successfully synthesized by hydrolysis of an alkoxide at calcination temperatures of 500 °C, 600 °C and 700 °C. As the calcination temperature increases, alcohol-washed samples show less rutile content than water-washed samples. Morphology and particle size were determined by field emission scanning electron microscopy (FESEM), while thermogravimetric-differential scanning calorimetry (TG-DSC) was used to determine thermal stability. Alcohol-washed samples underwent 30% weight loss, whereas 16% was observed in water-washed samples. The mean particle sizes were found to increase from 37 nm to 100.9 nm and from 35.3 nm to 55.2 nm for water- and alcohol-washed samples, respectively. Hydrolysis of alkoxide was shown to be an effective means to prepare thermally stable titania by using alcohol-washed samples as a precursor.
Rosenberger, Amanda E.; Dunham, Jason B.
2005-01-01
Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
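The bias mechanism can be illustrated with the standard two-pass removal estimator, N̂ = C1²/(C1 − C2); the simulation below, with assumed capture probabilities, reproduces the underestimation that occurs when efficiency declines on the second pass.

```python
# Two-pass removal estimator (Zippin/Seber form): with constant capture
# probability p, N_hat = C1**2 / (C1 - C2) and p_hat = 1 - C2/C1. The
# simulation shows the underestimate that results when second-pass efficiency
# declines. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, p1 = 200, 0.5
n_sim = 10_000

for p2 in (0.5, 0.35):                 # equal vs declining second-pass efficiency
    est = []
    for _ in range(n_sim):
        c1 = rng.binomial(N, p1)       # first-pass catch
        c2 = rng.binomial(N - c1, p2)  # second-pass catch from the remainder
        if c1 > c2:                    # estimator undefined otherwise
            est.append(c1 ** 2 / (c1 - c2))
    print(f"p2={p2}: mean removal estimate = {np.mean(est):.0f} (true N = {N})")
```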
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashtar, M.; Munir, A.; Anis-ur-Rehman, M.
2016-07-15
Graphical abstract: Variation of AC conductivity (σ{sub AC}) as a function of the natural log of angular frequency (lnω) for Ni{sub 0.5}Zn{sub 0.5}Fe{sub 2-x}Cr{sub x}O{sub 4} nanoferrites at room temperature. - Highlights: • Cr doped mixed Ni-Zn ferrites were successfully synthesized by a newly developed WOWS sol-gel technique. • The specific surface area and specific surface area to volume ratio increased with decrease in particle size. • Resonance peaks appeared in the dielectric loss graphs, shifting towards low frequency with the increase in Cr concentration. • The prepared samples have the lowest values of the dielectric constant. • The dielectric constant was observed to be inversely proportional to the square root of the AC resistivity. - Abstract: Cr{sup 3+} doped Ni-Zn nanoferrite samples with composition Ni{sub 0.5}Zn{sub 0.5}Fe{sub 2-x}Cr{sub x}O{sub 4} (x = 0.1, 0.2, 0.3, 0.4) were synthesized by the With-Out Water and Surfactant (WOWS) sol-gel technique. The structural, morphological and dielectric properties of the samples were investigated. The lattice constant, crystallite size, theoretical density and porosity of each sample were obtained from X-ray diffraction (XRD) data. The specific surface area and specific surface area to volume ratio increased with the decrease in the size of the Cr{sup 3+} doped Ni-Zn ferrite nanoparticles as the concentration of Cr{sup 3+} increased. The SEM analysis revealed that the particles were of nano size and spherical shape. The dielectric parameters, such as the dielectric constant (ε′) and dielectric loss (tanδ), of all the samples were measured as a function of frequency at room temperature. The AC conductivity (σ{sub AC}) was determined from the dielectric parameters and showed an increasing trend with rising frequency.
Fe–Ni solid solutions in nano-size dimensions: Effect of hydrogen annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Asheesh, E-mail: asheeshk@barc.gov.in; Meena, S.S.; Banerjee, S.
Highlights: • Fe–Ni solid solutions with nano-size dimensions were prepared and characterized. • Both as-prepared and hydrogenated solid solutions have the FCC structure of Ni. • Paramagnetic and ferromagnetic domains coexist in these samples. - Abstract: Nanoparticles of Ni{sub 0.50}Fe{sub 0.50} and Ni{sub 0.75}Fe{sub 0.25} alloys were prepared by chemical reduction in ethylene glycol medium. XRD and {sup 57}Fe Mössbauer studies have confirmed the formation of Fe–Ni solid solutions in nano-size dimensions with FCC structure. These samples consist of both ferromagnetic and paramagnetic domains, which has been attributed to the coexistence of large and small particles, as confirmed by atomic force microscopy (AFM) and {sup 57}Fe Mössbauer spectroscopic studies. The improved extent of Fe–Fe exchange interaction in the Ni{sub 0.50}Fe{sub 0.50} alloy compared to the Ni{sub 0.75}Fe{sub 0.25} alloy explains the observed increase in the relative extent of ferromagnetic domains compared to paramagnetic domains in the former sample. The increase in the relative extent of ferromagnetic domains for hydrogenated alloys is due to an increase in particle size brought about by the high-temperature activation prior to hydrogenation.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to resource savings of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
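A simulation in the spirit of the scenario above: a two-look design with an interim efficacy analysis halfway through, compared against the planned fixed sample size. The boundary values are approximate two-look O'Brien-Fleming constants, and the known-variance z-test is a simplifying assumption; d = 1 and n = 18 per group mirror the scenario described.

```python
# Two-look group sequential design: interim efficacy look at half the planned
# sample, using approximate O'Brien-Fleming boundaries (|z| > 2.797 interim,
# |z| > 1.977 final, overall two-sided alpha ~ 0.05). Boundary constants and
# the known-variance z-statistic are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(11)
d, n_total, n_interim = 1.0, 18, 9
n_sim = 20_000
consumed = np.empty(n_sim)

for i in range(n_sim):
    a = rng.normal(d, 1.0, n_total)     # treatment group (sigma = 1 assumed known)
    b = rng.normal(0.0, 1.0, n_total)   # control group
    z_int = (a[:n_interim].mean() - b[:n_interim].mean()) / np.sqrt(2 / n_interim)
    consumed[i] = 2 * n_interim if abs(z_int) > 2.797 else 2 * n_total

print(f"Average units consumed: {consumed.mean():.1f} of {2 * n_total} planned "
      f"({100 * consumed.mean() / (2 * n_total):.0f}%)")
```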
NASA Astrophysics Data System (ADS)
Saddeek, Yasser B.; Mohamed, Hamdy F. M.; Azooz, Moenis A.
2004-07-01
Positron annihilation lifetime (PAL), ultrasonic techniques, and differential thermal analysis (DTA) were performed to study the structure of some aluminoborate glasses. The basic compositions of these glasses are 50 B2O3 + 10 Al2O3 + 40 RO (wt%), where RO is a divalent oxide (MgO, CaO, SrO, or CdO). The ultrasonic data show that the rigidity increases from MgO to CaO, decreases at SrO, and increases again at CdO. The glass transition temperature (determined from DTA) decreases from MgO to SrO and then increases at CdO. The trend in the thermal properties was attributed to thermal stability. The experimental data are correlated with the internal glass structure and its connectivity. The PAL data show an inverse correlation between the relative fraction of open-volume holes and the density of the samples. There is also a good correlation between the ortho-positronium (o-Ps) lifetime (open-volume hole size) and the bulk modulus of the samples (determined from the ultrasonic technique). The open-volume hole size distributions show that the holes expand in size for CaO, SrO, MgO, and CdO, respectively, with the distribution function moving to larger volume sizes.
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Philip A. Candela,; Wenlu Zhu,; Alan J. Kaufman,
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO2) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO2 can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the small sizes of the extracted shale particles. Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r2 = 0.78); and 2) both yield and surface area increase with increasing TOC content (r2 = 0.97 and 0.86, respectively). Given that supercritical CO2 is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy
2011-02-02
In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
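The arithmetic behind such comparisons can be sketched with Schoenfeld's approximation for the number of events required by a two-arm log-rank test; the hazard ratios below are illustrative (a 30% hazard reduction for the combination, a weaker assumed effect for a single drug) and reproduce the order-of-magnitude gap discussed above.

```python
# Schoenfeld's approximation for a two-arm log-rank comparison:
#   required events D = 4 * (z_{1-alpha/2} + z_{1-beta})**2 / (log HR)**2.
# A per-drug effect closer to HR = 1 inflates the factorial trial's event
# requirement relative to the combination trial. Values are illustrative.
import numpy as np
from scipy.stats import norm

def events_required(hr, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 4 * z ** 2 / np.log(hr) ** 2

d_combo = events_required(0.70)    # combination effect: 30% hazard reduction
d_single = events_required(0.88)   # assumed effect of the weaker single drug
print(f"Events, combination vs standard: {d_combo:.0f}")
print(f"Events, weaker drug main effect: {d_single:.0f} "
      f"({d_single / d_combo:.1f}x the combination trial)")
```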
Is the permeability of naturally fractured rocks scale dependent?
NASA Astrophysics Data System (ADS)
Azizmohammadi, Siroos; Matthäi, Stephan K.
2017-09-01
The equivalent permeability, keq of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine keq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant keq REV with increasing sample size, keq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for keq anisotropy is larger than that for keq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of keq with sample size as postulated in some earlier studies, our results highlight a strong keq anisotropy that influences scale dependence.
NASA Astrophysics Data System (ADS)
Wongpratat, Unchista; Maensiri, Santi; Swatsitang, Ekaphan
2016-09-01
The effect of cation distribution, determined by EXAFS analysis, on the magnetic properties of Co1-xNixFe2O4 (x = 0, 0.25, 0.50, 0.75 and 1.0) nanoparticles prepared by the hydrothermal method in aloe vera extract solution was studied. XRD analysis confirmed a pure cubic spinel ferrite phase in all samples. Changes in lattice parameter and particle size depended on the Ni content, with partial substitution and site distribution of Co2+ and Ni2+ ions of different ionic radii at both tetrahedral and octahedral sites in the crystal structure. Particle sizes estimated from TEM images were found to be in the range of 10.87-62.50 nm. The VSM results at room temperature indicated ferrimagnetic behavior in all samples; superparamagnetic behavior was observed in the NiFe2O4 sample. The coercivity (Hc) and remanence (Mr) values were related to the particle sizes of the samples. The saturation magnetization (Ms) increased by a factor of 1.4 to a value of 57.57 emu/g, whereas the coercivity (Hc) decreased by a factor of 20 to a value of 63.15 Oe for the sample with x = 0.75. In addition to the cation distribution, the increase in aspect ratio (surface-to-volume ratio) due to the decrease in particle size could significantly affect the magnetic properties of the materials.
Study on extrusion process of SiC ceramic matrix
NASA Astrophysics Data System (ADS)
Dai, Xiao-Yuan; Shen, Fan; Ji, Jia-You; Wang, Shu-Ling; Xu, Man
2017-11-01
In this thesis, the extrusion process of a SiC ceramic matrix has been systematically studied. The effect of different cellulose contents on the flexural strength and pore size distribution of the SiC matrix was discussed. Results show that with increasing cellulose content, the flexural strength decreased. The pore size distribution in the sample was 1 μm-4 μm, concentrated mainly in the 1 μm-2 μm range. It is found that the cellulose content has little effect on the pore size distribution. When the cellulose content is 7%, the flexural strength of the sample is 40.9 MPa, and the mechanical properties of the sample are the strongest.
NASA Astrophysics Data System (ADS)
Liu, Yichi; Liu, Debao; You, Chen; Chen, Minfang
2015-09-01
The aim of this study was to investigate the effect of grain size on the corrosion resistance of pure magnesium developed for biomedical applications. High-purity magnesium samples with different grain sizes were prepared by cooling rate-controlled solidification. Electrochemical and immersion tests were employed to measure the corrosion resistance of pure magnesium with different grain sizes. The electrochemical polarization curves indicated that the corrosion susceptibility increased as the grain size decreased. However, the electrochemical impedance spectroscopy (EIS) and immersion tests indicated that the corrosion resistance of pure magnesium improves as the grain size decreases. The improvement in corrosion resistance is attributed to the refined grains producing a more uniform and dense film on the sample surface.
Diet of land birds along an elevational gradient in Papua New Guinea
Sam, Katerina; Koane, Bonny; Jeppy, Samuel; Sykorova, Jana; Novotny, Vojtech
2017-01-01
Food preferences and exploitation are crucial to many aspects of avian ecology and are of increasing importance as we progress in our understanding of community ecology. We studied birds and their feeding specialization in the Central Range of Papua New Guinea, at eight study sites along a complete (200 to 3700 m a.s.l.) rainforest elevational gradient. The relative species richness and abundance increased with increasing elevation for insect- and nectar-eating birds, and decreased with elevation for fruit-feeding birds. Using tartar emetic, we induced 999 individuals from 99 bird species to regurgitate their stomach contents and studied these food samples. The proportion of arthropods in food samples increased with increasing elevation at the expense of plant material. The body size of arthropods eaten by birds decreased with increasing elevation, reflecting the parallel elevational trend in the body size of arthropods available in the forest understory. Body size of insectivorous birds was significantly positively correlated with the body size of arthropods they ate. Coleoptera were the most exploited arthropods, followed by Araneae, Hymenoptera, and Lepidoptera. Selectivity indexes showed that most of the arthropod taxa were taken opportunistically, reflecting the spatial patterns in arthropod abundances to which the birds were exposed. PMID:28276508
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
NASA Astrophysics Data System (ADS)
Jalaiah, K.; Vijaya Babu, K.; Rajashekhar Babu, K.; Chandra Mouli, K.
2018-06-01
Zr and Cu substituted Ni0.5Zn0.5ZrxCuxFe2-2xO4 ferrites, with x varying from 0.0 to 0.4 in steps of 0.08, were synthesized by the sol-gel auto-combustion method. The XRD patterns give evidence of the formation of a single-phase cubic spinel. The lattice constant initially decreased from 8.3995 Å to 8.3941 Å with dopant concentration for x = 0.00-0.08; thereafter, the lattice parameter steeply increased up to 8.4129 Å for x = 0.4 with increasing dopant concentration. The estimated crystallite sizes and measured particle sizes are comparable and in the nanometre range. The grain size initially increased from 2.3137 to 3.0430 μm, then decreased to 2.2952 μm with increasing dopant concentration. The porosity of the prepared samples shows the opposite trend to the grain size. The FT-IR spectra of the prepared samples confirm the Fd3m (Oh7) space group. The wavenumber of the tetrahedral site increased from 579 cm-1 to 593 cm-1 with increasing dopant concentration, while the wavenumber of the octahedral site initially decreased from 414 cm-1 to 400 cm-1 for x = 0.00 to x = 0.08 and later increased to 422 cm-1 with increasing dopant concentration. The dielectric constant increased from 8.85 to 34.5127 with increasing dopant concentration, and the corresponding loss factor follows a similar trend. The AC conductivity increased with increasing dopant concentration from 3.0261 × 10-7 S/m to 4.4169 × 10-6 S/m.
The Contribution of Expanding Portion Sizes to the US Obesity Epidemic
Young, Lisa R.; Nestle, Marion
2002-01-01
Objectives. Because larger food portions could be contributing to the increasing prevalence of overweight and obesity, this study was designed to weigh samples of marketplace foods, identify historical changes in the sizes of those foods, and compare current portions with federal standards. Methods. We obtained information about current portions from manufacturers or from direct weighing; we obtained information about past portions from manufacturers or contemporary publications. Results. Marketplace food portions have increased in size and now exceed federal standards. Portion sizes began to grow in the 1970s, rose sharply in the 1980s, and have continued in parallel with increasing body weights. Conclusions. Because energy content increases with portion size, educational and other public health efforts to address obesity should focus on the need for people to consume smaller portions. PMID:11818300
On the role of the grain size in the magnetic behavior of sintered permanent magnets
NASA Astrophysics Data System (ADS)
Efthimiadis, K. G.; Ntallis, N.
2018-02-01
In this work the finite element method is used to simulate, by micromagnetic modeling, the magnetic behavior of sintered anisotropic magnets. Hysteresis loops were simulated for different grain sizes in an oriented multigrain sample. By holding fixed the other parameters that contribute to the magnetic microstructure, such as the sample size, the grain morphology and the grain-boundary mismatch, it was found that the grain size affects the magnetic properties only if the grains are exchange-decoupled. In this case, as the grain size decreases, a decrease in the nucleation field of a reverse magnetic domain is observed, along with an increase in the coercive field due to the pinning of the magnetic domain walls at the grain boundaries.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women's Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
Size effects on electrical properties of chemically grown zinc oxide nanoparticles
NASA Astrophysics Data System (ADS)
Rathod, K. N.; Joshi, Zalak; Dhruv, Davit; Gadani, Keval; Boricha, Hetal; Joshi, A. D.; Solanki, P. S.; Shah, N. A.
2018-03-01
In the present article, we study the electrical properties of ZnO nanoparticles grown by a cost-effective sol-gel technique. Structural studies performed by x-ray diffraction (XRD) revealed a hexagonal unit cell phase with no observed impurities. Transmission electron microscopy (TEM) and particle size analysis showed an increased average particle size, due to agglomeration, with higher sintering temperature. The dielectric constant (ε‧) decreases with increasing frequency because of the inability of dipoles to follow the higher-frequency electric field. With higher sintering, the dielectric constant is reduced, owing to the increased formation of oxygen vacancy defects. Universal dielectric response (UDR) was verified by straight-line fitting of log (fε‧) versus log (f) plots. All samples exhibit UDR behavior, with a larger contribution from crystal cores at higher sintering temperatures. Impedance studies suggest an important role of boundary density, while Cole–Cole (Z″ versus Z‧) plots have been studied for the relaxation behavior of the samples. The average normalized change (ANC) in impedance has been studied for all samples, wherein boundaries play an important role. Frequency-dependent electrical conductivity has been understood on the basis of Jonscher's universal power law. The Jonscher fits suggest that charge-carrier conduction proceeds by the correlated barrier hopping (CBH) mechanism in the sample sintered at lower temperature, while for ZnO samples sintered at higher temperatures a Maxwell–Wagner (M–W) relaxation process has been identified.
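A minimal sketch of the Jonscher analysis named above: fit σ_AC(ω) = σ_dc + A·ω^s to conductivity data with scipy's curve_fit. The data are synthetic and all parameter values are assumptions.

```python
# Fit Jonscher's universal power law, sigma_AC(omega) = sigma_dc + A * omega**s,
# to (synthetic) conductivity data. In CBH-type conduction the exponent s
# typically falls below 1 and decreases with temperature.
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, a, s):
    return sigma_dc + a * omega ** s

omega = np.logspace(2, 6, 40)                       # angular frequency, rad/s
rng = np.random.default_rng(9)
sigma = jonscher(omega, 1e-7, 2e-11, 0.75) * rng.normal(1.0, 0.02, omega.size)

popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-7, 1e-11, 0.8))
print(f"sigma_dc = {popt[0]:.2e} S/m, s = {popt[2]:.3f}")
```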
Code of Federal Regulations, 2012 CFR
2012-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...
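The calculation these excerpts describe, shrink a base sample size with a finite population correction (FPC) and then inflate by 30% for attrition without exceeding the number of youth, can be written out as below. The base formula for estimating a proportion is the standard one and is an assumption here, not quoted from the regulation:

```python
import math

def survey_sample_size(N, p=0.5, e=0.05, z=1.96, attrition=0.30):
    """Sample size for estimating a proportion in a population of N youth.

    n0 is the infinite-population size; the FPC shrinks it toward N; the
    30% attrition allowance inflates the result, capped at N itself.
    """
    n0 = z**2 * p * (1 - p) / e**2          # infinite-population base (assumed formula)
    n_fpc = n0 / (1 + (n0 - 1) / N)         # finite population correction
    n = math.ceil(n_fpc * (1 + attrition))  # +30% to allow for attrition
    return min(n, N)                        # must not exceed the population

print(survey_sample_size(N=800))  # e.g. a state program serving 800 youth -> 338
```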
1H NMR Cryoporometry Study of the Melting Behavior of Water in White Cement
NASA Astrophysics Data System (ADS)
Boguszyńska, Joanna; Tritt-Goc, Jadwiga
2004-09-01
The pore size of white cement samples is studied through the melting behavior of water confined within them, using 1H NMR cryoporometry. The influence of the preparation method and of an antifreeze admixture on the pore size and distribution in cement samples is investigated at 283 K. The addition of an antifreeze admixture (containing 1% Sika Rapid 2 by weight of the dry cement) influences the porosity. In wet-prepared samples we observed a significant increase in the quantity of mesopores between 0.8 and 5 nm and a smaller increase of mesopores between 5 and 10 nm, compared to cement without the admixture. The compressive strength is related to the porosity of the cement; therefore, the cement with Sika Rapid 2, wet-prepared at 278 K, shows higher strength than all other measured samples.
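NMR cryoporometry converts a measured melting-point depression ΔT into a pore size through the Gibbs-Thomson relation, d ≈ k_GT/ΔT. A minimal sketch, assuming a constant of about 50 K·nm for water, which is in the commonly quoted range but should be calibrated for a real cement system:

```python
K_GT_WATER = 50.0   # Gibbs-Thomson constant for water, K*nm (assumed; calibrate)
T_BULK = 273.15     # bulk melting point of water, K

def pore_diameter_nm(melting_temp_K, k_gt=K_GT_WATER):
    """Pore diameter from the depressed melting temperature of confined water."""
    dT = T_BULK - melting_temp_K
    if dT <= 0:
        raise ValueError("confined water must melt below the bulk melting point")
    return k_gt / dT

# Water melting at 263.15 K (a 10 K depression) implies roughly 5 nm pores.
print(f"{pore_diameter_nm(263.15):.1f} nm")
```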
Kay, Matthew C; Lenihan, Hunter S; Guenther, Carla M; Wilson, Jono R; Miller, Christopher J; Shrout, Samuel W
2012-01-01
Assessments of the conservation and fisheries effects of marine reserves typically focus on single reserves where sampling occurs over narrow spatiotemporal scales. A strategy for broadening the collection and interpretation of data is collaborative fisheries research (CFR). Here we report results of a CFR program formed in part to test whether reserves at the Santa Barbara Channel Islands, USA, influenced lobster size and trap yield, and whether abundance changes in reserves led to spillover that influenced trap yield and effort distribution near reserve borders. Industry training of scientists allowed us to sample reserves with fishery relevant metrics that we compared with pre-reserve fishing records, a concurrent port sampling program, fishery effort patterns, the local ecological knowledge (LEK) of fishermen, and fishery-independent visual surveys of lobster abundance. After six years of reserve protection, there was a four- to eightfold increase in trap yield, a 5-10% increase in the mean size (carapace length) of legal sized lobsters, and larger size structure of lobsters trapped inside vs. outside of three replicate reserves. Patterns in trap data were corroborated by visual scuba surveys that indicated a four- to sixfold increase in lobster density inside reserves. Population increases within reserves did not lead to increased trap yields or effort concentrations (fishing the line) immediately outside reserve borders. The absence of these catch and effort trends, which are indicative of spillover, may be due to moderate total mortality (Z = 0.59 for legal sized lobsters outside reserves), which was estimated from analysis of growth and length frequency data collected as part of our CFR program. Spillover at the Channel Islands reserves may be occurring but at levels that are insufficient to influence the fishery dynamics that we measured. Future increases in fishing effort (outside reserves) and lobster biomass (inside reserves) are likely and may lead to increased spillover, and CFR provides an ideal platform for continued assessment of fishery-reserve interactions.
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
2016-03-09
Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power, in relation to MI size alone and levels of biochemical markers, in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI, compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak), in order to reach a power of 90%. If the average CMR scan day between treatment and control arms differed by 1 day, sample size needed to be increased by 54% (77 vs 50) to avoid scan day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
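The sample-size gain from a lower-variance endpoint can be reproduced with the standard normal-approximation formula for comparing two arm means, n per arm = 2(z_{1-α/2} + z_power)²(σ/Δ)². This is not the authors' simulation; the coefficients of variation below are placeholders chosen only to show how a tighter endpoint such as MSI shrinks n:

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Two-sample normal-approximation sample size per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# A 25% relative treatment effect on endpoints with hypothetical SDs:
mean = 1.0
for name, cv in [("MSI-like (low variance)", 0.40), ("MI-size-like", 0.55)]:
    n = n_per_arm(delta=0.25 * mean, sigma=cv * mean)
    print(f"{name}: ~{n:.0f} patients per arm")
```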
Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise
NASA Astrophysics Data System (ADS)
Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.
2018-06-01
We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time-evolution path in the Larson velocity dispersion–size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while tending toward equipartition between gravitational and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities do not follow a Larson-like linewidth–size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion σv,3D–size R diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial values. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by that of an isolated homogeneous sphere. However, the excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.
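The energy-budget argument hinges on comparing the kinetic energy with the gravitational energy of a uniform sphere, |E_g| = 3GM²/(5R), via the virial parameter α = 2E_k/|E_g|. A minimal sketch with illustrative core parameters; the paper's point is that real, centrally concentrated cores have |E_g| larger than this uniform-sphere estimate, lowering α:

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
PC = 3.086e16            # m

def virial_parameter(mass_msun, radius_pc, sigma_v_kms):
    """alpha = 2*Ekin/|Egrav| under the usual uniform-sphere assumption."""
    M = mass_msun * M_SUN
    R = radius_pc * PC
    sigma = sigma_v_kms * 1e3
    e_kin = 0.5 * M * sigma**2         # using the 3D velocity dispersion
    e_grav = 3 * G * M**2 / (5 * R)    # uniform-sphere gravitational energy
    return 2 * e_kin / e_grav

# Hypothetical core: 100 Msun, 0.1 pc, sigma_v = 1 km/s.
print(f"alpha = {virial_parameter(100, 0.1, 1.0):.2f}")
```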
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huie, Matthew M.; Marschilok, Amy C.; Takeuchi, Esther S.
2017-04-12
Here, this report describes a synthetic approach to control the crystallite size of silver vanadium phosphorous oxide, Ag0.50VOPO4·1.9H2O, and the impact on electrochemistry in lithium based batteries. Ag0.50VOPO4·1.9H2O was synthesized using a stirred hydrothermal method over a range of temperatures. X-ray diffraction (XRD) was used to confirm the crystalline phase and crystallite sizes of 11, 22, 38, 40, 49, and 120 nm. Particle shape was plate-like with edges <1 micron to >10 microns. Under galvanostatic reduction the samples with 22 nm crystallites and 880 nm particles produced the highest capacity, ~25% more capacity than the 120 nm sample. Notably, the 11 nm sample resulted in reduced delivered capacity and higher resistance, consistent with increased grain boundaries contributing to resistance. Under intermittent pulsing, ohmic resistance decreased with increasing crystallite size from 11 nm to 120 nm, implying that electrical conduction within a crystal is more facile than between crystallites and across grain boundaries. Finally, this systematic study of material dimension shows that crystallite size impacts deliverable capacity as well as cell resistance, where both interparticle and intraparticle transport are important.
Liu, Nehemiah T; Salinas, José; Fenrich, Craig A; Serio-Melvin, Maria L; Kramer, George C; Driscoll, Ian R; Schreiber, Martin A; Cancio, Leopoldo C; Chung, Kevin K
2016-11-01
The depth of burn has been an important factor often overlooked when estimating the total resuscitation fluid needed for early burn care. The goal of this study was to determine the degree to which full-thickness (FT) involvement affected overall 24-hour burn resuscitation volumes. We performed a retrospective review of patients admitted to our burn intensive care unit from December 2007 to April 2013, with significant burns that required resuscitation using our computerized decision support system for burn fluid resuscitation. We defined the degree of FT involvement as FT Index (FTI; percentage of FT injury/percentage of total body surface area (TBSA) burned [%FT / %TBSA]) and compared variables on actual 24-hour fluid resuscitation volumes overall as well as for any given burn size. A total of 203 patients admitted to our burn center during the study period were included in the analysis. Mean age and weight were 47 ± 19 years and 87 ± 18 kg, respectively. Mean %TBSA was 41 ± 20 with a mean %FT of 18 ± 24. As %TBSA, %FT, and FTI increased, so did actual 24-hour fluid resuscitation volumes (mL/kg). However, increase in FTI did not result in increased volume indexed to burn size (mL/kg per %TBSA). This was true even when patients with inhalation injury were excluded. Further investigation revealed that as %TBSA increased, %FT increased nonlinearly (quadratic polynomial) (R = 0.994). Total burn size and FT burn size were both highly correlated with increased 24-hour fluid resuscitation volumes. However, FTI did not correlate with a corresponding increase in resuscitation volumes for any given burn size, even when patients with inhalation injury were excluded. Thus, there are insufficient data to presume that those who receive more volume at any given burn size are likely to be mostly full thickness or vice versa. This was influenced by a relatively low sample size at each 10%TBSA increment and larger burn sizes disproportionately having more FT burns. A more robust sample size may elucidate this relationship better. Therapeutic/care management study, level IV.
Kim, Gibaek; Kwak, Jihyun; Kim, Ki-Rak; Lee, Heesung; Kim, Kyoung-Woong; Yang, Hyeon; Park, Kihong
2013-12-15
A laser induced breakdown spectroscopy (LIBS) system coupled with a chemometric method was applied to rapidly discriminate between soils contaminated with heavy metals or oils and clean soils. The effects of the water content and grain size of soil samples on LIBS emissions were also investigated. The LIBS emission lines decreased by 59-75% when the water content increased from 1.2% to 7.8%, and soil samples with a grain size of 75 μm displayed higher LIBS emission lines with lower relative standard deviations than those with a 2 mm grain size. The water content was found to have a more pronounced effect on the LIBS emission lines than the grain size. Pelletizing and sieving were conducted for all samples collected from abandoned mining areas and a military camp, so that they had similar water contents and grain sizes before being analyzed by LIBS with chemometric analysis. The data show that the three types of soil samples were clearly discerned by using the first three principal components from the spectral data. A blind test was conducted with a 100% correct classification rate for soil samples contaminated with heavy metals and oil residues. Copyright © 2013 Elsevier B.V. All rights reserved.
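The chemometric step described, projecting standardized LIBS spectra onto the first three principal components and separating soil classes in that space, looks roughly like the following scikit-learn sketch; the spectra here are random stand-ins, not real LIBS data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Fake "spectra": 3 soil classes x 20 samples x 500 emission channels,
# each class shifted along a different hidden spectral direction.
n_per, n_chan = 20, 500
base = rng.normal(size=(3, n_chan))
X = np.vstack([rng.normal(size=(n_per, n_chan)) + 5 * base[k] for k in range(3)])
y = np.repeat(["clean", "heavy-metal", "oil"], n_per)

# Standardize channels, then keep the first three principal components.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
for label in np.unique(y):
    centroid = scores[y == label].mean(axis=0)
    print(label, np.round(centroid, 2))   # well-separated class centroids
```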
Long-term effective population size dynamics of an intensively monitored vertebrate population
Mueller, A-K; Chakarov, N; Krüger, O; Hoffman, J I
2016-01-01
Long-term genetic data from intensively monitored natural populations are important for understanding how effective population sizes (Ne) can vary over time. We therefore genotyped 1622 common buzzard (Buteo buteo) chicks sampled over 12 consecutive years (2002–2013 inclusive) at 15 microsatellite loci. This data set allowed us both to compare single-sample with temporal approaches and to explore temporal patterns in the effective number of parents that produced each cohort in relation to the observed population dynamics. We found reasonable consistency between linkage disequilibrium-based single-sample and temporal estimators, particularly during the latter half of the study, but no clear relationship between annual Ne estimates and census sizes. We also documented a 14-fold increase in the annual Ne estimate between 2008 and 2011, a period during which the census size doubled, probably reflecting a combination of higher adult survival and immigration from further afield. Our study thus reveals appreciable temporal heterogeneity in the effective population size of a natural vertebrate population, confirms the need for long-term studies and cautions against drawing conclusions from a single sample. PMID:27553455
Transport properties of bismuth telluride compound prepared by mechanical alloying
NASA Astrophysics Data System (ADS)
Khade, Poonam; Bagwaiya, Toshi; Bhattacharya, Shovit; Rayaprol, Sudhindra; Sahu, Ashok K.; Shelke, Vilas
2017-05-01
We have synthesized a bismuth telluride compound using mechanical alloying and hot-press sintering. Phase formation and crystal structure were evaluated by X-ray diffraction and Raman spectroscopy. Scanning electron microscopy images indicated sub-micron sized grains. We observed a low thermal conductivity of 0.39 W/mK at room temperature as a result of grain size reduction through increasing deformation. The performance of the samples can be further improved by reducing the grain size, which increases grain boundary scattering.
ERIC Educational Resources Information Center
Anstey, Kaarin J.; Mack, Holly A.; Christensen, Helen; Li, Shu-Chen; Reglade-Meslin, Chantal; Maller, Jerome; Kumar, Rajeev; Dear, Keith; Easteal, Simon; Sachdev, Perminder
2007-01-01
Intra-individual variability in reaction time increases with age and with neurological disorders, but the neural correlates of this increased variability remain uncertain. We hypothesized that both faster mean reaction time (RT) and less intra-individual RT variability would be associated with larger corpus callosum (CC) size in older adults, and…
Drop size distributions and related properties of fog for five locations measured from aircraft
NASA Technical Reports Server (NTRS)
Zak, J. Allen
1994-01-01
Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration is shown in three-dimensional plots for each 10-meter altitude interval from 1-minute samples, along with median volume radius and liquid water content. Advection fogs contained the largest drops, with median volume radii of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples, with a peak concentration of small drops (2-5 micrometers) at low altitudes, a midaltitude peak of drops of 5-11 micrometers, and a high-altitude peak of the larger drops (11-15 micrometers and above). These observations corroborate previous results on gross fog properties, although there is considerable variation with time and altitude even within the same type of fog.
Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine
2017-01-13
Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring, 2007–2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power for various sample size combinations, (2) comparison of power between fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
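The allocation question in such a nested design comes down to how the variance of a survey mean decomposes across levels; the payoff from extra replication at each level depends on the size of its variance component. A sketch under assumed components (the numbers are hypothetical, not the DHP estimates):

```python
def se_of_mean(var_event, var_plot, n_events, n_plots):
    """Standard error of a survey mean for a two-level nested design:
    counting events, and plots counted within each event."""
    var_mean = var_event / n_events + var_plot / (n_events * n_plots)
    return var_mean ** 0.5

# Hypothetical components: most variation among plots within an event.
VAR_EVENT, VAR_PLOT = 4.0, 25.0
for n_events, n_plots in [(3, 9), (6, 9), (3, 18)]:
    se = se_of_mean(VAR_EVENT, VAR_PLOT, n_events, n_plots)
    print(f"{n_events} events x {n_plots} plots -> SE = {se:.2f}")
```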
Duy, Pham K; Chun, Seulah; Chung, Hoeil
2017-11-21
We have systematically characterized Raman scattering in solid samples with different particle sizes and investigated the resulting trends of particle size-induced intensity variation. For this purpose, both lactose powders and pellets composed of five different particle sizes were prepared. Uniquely in this study, three spectral acquisition schemes with different sizes of laser illumination and detection windows were employed, since the experimental configuration was expected to be another factor influencing the intensity of the lactose peak, along with the particle size itself. In both sample types, the distribution of Raman photons became broader with increasing particle size, as the mean free path of laser photons (the average photon travel distance between consecutive scattering locations) became longer. At the same particle size, the Raman photon distribution was narrower in the pellets, since the individual particles were more densely packed in a given volume (a shorter mean free path). When the detection window was small, the number of photons reaching the detector decreased as the photon distribution became broader; a large-window detector, meanwhile, collected the widely distributed Raman photons more effectively. The trends of intensity change with particle size therefore differed depending on the spectral acquisition scheme employed. Overall, Monte Carlo simulation was effective at probing the photon distribution inside the samples and helped to support the experimental observations.
NASA Astrophysics Data System (ADS)
Lastra, M.; de La Huz, R.; Sánchez-Mata, A. G.; Rodil, I. F.; Aerts, K.; Beloso, S.; López, J.
2006-02-01
Thirty-four exposed sandy beaches on the northern coast of Spain (from 42°11' to 43°44'N, and from 2°04' to 8°52'W; ca. 1000 km) were sampled over a range of beach sizes, beach morphodynamics and exposure rates. Ten equally spaced intertidal shore levels along six replicated transects were sampled at each beach. Sediment and macrofauna samples were collected using corers to a depth of 15 cm. Morphodynamic characteristics such as the beach face slope, wave environment, exposure rates, Dean's parameter and Beach State Index were estimated. Biotic results indicated that in all the beaches the community was dominated by isopods, amphipods and polychaetes, mostly belonging to the detritivorous-opportunistic trophic group. The number of intertidal species ranged from 9 to 31, their density being between 31 and 618 individuals m^-2, while individuals per linear metre (m^-1) ranged from 4962 to 172 215. The biomass, calculated as total ash-free dry weight (AFDW), varied from 0.027 to 2.412 g m^-2, and from 3.6 to 266.6 g m^-1. Multiple regression analysis indicated that the number of species significantly increased with proximity to the wind-driven upwelling zone located to the west, i.e., west-coast beaches hosted more species than east-coast beaches. The number of species increased with decreasing mean grain size and increasing beach length. The density of individuals m^-2 increased with decreasing mean grain size, while biomass m^-2 increased with increasing food availability estimated as chlorophyll-a concentration in the water column of the swash zone. Multiple-regression analysis indicated that chlorophyll-a in the water column increased with increasing western longitude. Additional insights provided by single-regression analysis showed a positive relationship between the number of species and chlorophyll-a, while increasing biomass occurred with increasing mean grain size of the beach. The results indicate that community characteristics in the exposed sandy beaches studied are affected by physical characteristics such as sediment size and beach length, but also by other factors dependent on coastal processes, such as food availability in the water column.
Ferguson, Philip E; Sales, Catherine M; Hodges, Dalton C; Sales, Elizabeth W
2015-01-01
Recent publications have emphasized the importance of a multidisciplinary strategy for maximum conservation and utilization of lung biopsy material for advanced testing, which may determine therapy. This paper quantifies the effect of a multidisciplinary strategy implemented to optimize and increase tissue volume in CT-guided transthoracic needle core lung biopsies. The strategy was three-pronged: (1) once there was confidence diagnostic tissue had been obtained and if safe for the patient, additional biopsy passes were performed to further increase volume of biopsy material, (2) biopsy material was placed in multiple cassettes for processing, and (3) all tissue ribbons were conserved when cutting blocks in the histology laboratory. This study quantifies the effects of strategies #1 and #2. This retrospective analysis comparing CT-guided lung biopsies from 2007 and 2012 (before and after multidisciplinary approach implementation) was performed at a single institution. Patient medical records were reviewed and main variables analyzed include biopsy sample size, radiologist, number of blocks submitted, diagnosis, and complications. The biopsy sample size measured was considered to be directly proportional to tissue volume in the block. Biopsy sample size increased 2.5 fold with the average total biopsy sample size increasing from 1.0 cm (0.9-1.1 cm) in 2007 to 2.5 cm (2.3-2.8 cm) in 2012 (P<0.0001). The improvement was statistically significant for each individual radiologist. During the same time, the rate of pneumothorax requiring chest tube placement decreased from 15% to 7% (P = 0.065). No other major complications were identified. The proportion of tumor within the biopsy material was similar at 28% (23%-33%) and 35% (30%-40%) for 2007 and 2012, respectively. The number of cases with at least two blocks available for testing increased from 10.7% to 96.4% (P<0.0001). The effect of this multidisciplinary strategy to CT-guided lung biopsies was effective in significantly increasing tissue volume and number of blocks available for advanced diagnostic testing.
The evolution of body size and shape in the human career
Jungers, William L.; Grabowski, Mark; Hatala, Kevin G.; Richmond, Brian G.
2016-01-01
Body size is a fundamental biological property of organisms, and documenting body size variation in hominin evolution is an important goal of palaeoanthropology. Estimating body mass appears deceptively simple but is laden with theoretical and pragmatic assumptions about best predictors and the most appropriate reference samples. Modern human training samples with known masses are arguably the ‘best’ for estimating size in early bipedal hominins such as the australopiths and all members of the genus Homo, but it is not clear if they are the most appropriate priors for reconstructing the size of the earliest putative hominins such as Orrorin and Ardipithecus. The trajectory of body size evolution in the early part of the human career is reviewed here and found to be complex and nonlinear. Australopith body size varies enormously across both space and time. The pre-erectus early Homo fossil record from Africa is poor and dominated by relatively small-bodied individuals, implying that the emergence of the genus Homo is probably not linked to an increase in body size or unprecedented increases in size variation. Body size differences alone cannot explain the observed variation in hominin body shape, especially when examined in the context of small fossil hominins and pygmy modern humans. This article is part of the themed issue ‘Major transitions in human evolution’. PMID:27298459
Recent advances of mesoporous materials in sample preparation.
Zhao, Liang; Qin, Hongqiang; Wu, Ren'an; Zou, Hanfa
2012-03-09
Sample preparation has been playing an important role in the analysis of complex samples. Mesoporous materials as the promising adsorbents have gained increasing research interest in sample preparation due to their desirable characteristics of high surface area, large pore volume, tunable mesoporous channels with well defined pore-size distribution, controllable wall composition, as well as modifiable surface properties. The aim of this paper is to review the recent advances of mesoporous materials in sample preparation with emphases on extraction of metal ions, adsorption of organic compounds, size selective enrichment of peptides/proteins, specific capture of post-translational peptides/proteins and enzymatic reactor for protein digestion. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jinlong, Lv, E-mail: ljlbuaa@126.com; State Key Lab of New Ceramic and Fine Processing, Tsinghua University, Beijing 100084; Tongxiang, Liang, E-mail: ljltsinghua@126.com
Nanocrystalline pure nickel samples with different grain orientations were fabricated by a direct current electrodeposition process. The grain size decreased slightly with increasing electrodeposition solution temperature. However, grain orientation was affected significantly. Compared with the samples obtained at 50 °C and 80 °C, the sample obtained at 20 °C had the strongest (111) orientation, which increased its electrochemical corrosion resistance. At the same time, the weakest (111) orientation deteriorated the electrochemical corrosion resistance of the sample obtained at 50 °C. - Graphical abstract: The increased electrodeposition temperature slightly promoted grain refinement. The grain orientation was affected significantly by the electrodeposition solution temperature. The (111) orientation of a sample significantly increased its corrosion resistance.
Dou, Haiyang; Lee, Yong-Ju; Jung, Euo Chang; Lee, Byung-Chul; Lee, Seungho
2013-08-23
In field-flow fractionation (FFF), there is a 'steric transition' phenomenon in which the sample elution mode changes from the normal to the steric/hyperlayer mode. Accurate analysis by FFF requires understanding of the steric transition, particularly when the sample has a broad size distribution, for which the combined effect of different modes can be complicated to interpret. In this study, the steric transition phenomenon in asymmetrical flow FFF (AF4) was studied using polystyrene (PS) latex beads. The retention ratio (R) gradually decreases as the particle size increases (normal mode) and reaches a minimum (Ri) at a diameter around 0.5 μm, after which R increases with increasing diameter (steric/hyperlayer mode). It was found that the size-based selectivity (Sd) tends to increase as the channel thickness (w) increases. The retention behavior of cyclo-1,3,5-trimethylene-2,4,6-trinitramine (commonly called 'research department explosive' (RDX)) particles in AF4 was investigated by varying experimental parameters including w and flow rates. AF4 showed good reproducibility in size determination of RDX particles, with a relative standard deviation of 4.1%. The reliability of the separation obtained by AF4 was evaluated by transmission electron microscopy (TEM). Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Rao, P. Nageswara; Jayaganthan, R.
2013-08-01
The influence of rolling at liquid nitrogen temperature and subsequent annealing on the microstructure and mechanical properties of Al 5083 alloy was studied in this paper. Cryorolled samples of Al 5083 show significant improvements in strength and hardness. The ultimate tensile strength increases up to 340 MPa and 390 MPa for the 30% and 50% cryorolled samples, respectively. The cryorolled samples, with 30% and 50% reduction, were subjected to Charpy impact testing at temperatures from -190°C to 100°C. It is observed that increasing the percentage reduction during cryorolling significantly decreases impact toughness at all temperatures, by increasing yield strength and decreasing ductility. Annealing after cryorolling shows a remarkable increase in impact toughness through recovery and recrystallization. The average grain size of the 50% cryorolled sample after annealing at 350°C for 1 h (14 μm) is finer than that of the 30% cryorolled sample (25 μm). Scanning electron microscopy (SEM) analysis of the fractured surfaces shows a large dimpled morphology in the starting material, resembling a ductile fracture mechanism, and a fibrous structure with very fine dimples in the cryorolled samples, corresponding to a brittle fracture mechanism.
Masson, M; Angot, H; Le Bescond, C; Launay, M; Dabrin, A; Miège, C; Le Coz, J; Coquery, M
2018-05-10
Monitoring hydrophobic contaminants in surface freshwaters requires measuring contaminant concentrations in the particulate fraction (sediment or suspended particulate matter, SPM) of the water column. Particle traps (PTs) have recently been developed to sample SPM as cost-efficient, easy-to-operate and time-integrative tools. But the representativeness of SPM collected with PTs is not fully understood, notably in terms of grain size distribution and particulate organic carbon (POC) content, which could both skew particulate contaminant concentrations. The aim of this study was to evaluate the representativeness of SPM characteristics (i.e. grain size distribution and POC content) and associated contaminants (i.e. polychlorinated biphenyls, PCBs; mercury, Hg) in samples collected in a large river using PTs under differing hydrological conditions. Samples collected using PTs (n = 74) were compared with samples collected during the same time period by continuous flow centrifugation (CFC). The grain size distribution of PT samples shifted with increasing water discharge: the proportion of very fine silts (2-6 μm) decreased while that of coarse silts (27-74 μm) increased. Regardless of water discharge, POC contents differed, likely because the PTs integrate high-POC phytoplankton blooms or low-POC flood events. Differences in PCB and Hg concentrations were usually within the range of analytical uncertainties and could not be related to grain size or POC content shifts. Occasional Hg-enriched inputs may have led to higher Hg concentrations in a few PT samples (n = 4), which highlights the time-integrative capacity of the PTs. The differences in annual Hg and PCB fluxes calculated from PT samples versus CFC samples were generally below 20%. Despite some inherent limitations (e.g. grain size distribution bias), our findings suggest that PT sampling is a valuable technique to assess reliable spatial and temporal trends of particulate contaminants such as PCBs and Hg within a river monitoring network. Copyright © 2018 Elsevier B.V. All rights reserved.
Soglia, Francesca; Gao, Jingxian; Mazzoni, Maurizio; Puolanne, Eero; Cavani, Claudio; Petracci, Massimiliano; Ertbjerg, Per
2017-09-01
Recently the poultry industry has faced an emerging muscle abnormality termed wooden breast (WB), the prevalence of which has dramatically increased in the past few years. Considering the incomplete knowledge concerning this condition and the lack of information on possible variations due to intra-fillet sampling location (superficial vs. deep position) and aging of the samples, this study aimed at investigating the effect of 7-d storage of broiler breast muscles on histology, texture, and particle size distribution, evaluating whether the sampling position plays a relevant role in determining the main features of WB. With regard to the histological observations, severe myodegeneration accompanied by accumulation of connective tissue was observed within the WB cases, irrespective of the intra-fillet sampling position. No changes in the histological traits took place during aging in either the normal or the WB samples. As to textural traits, although progressive tenderization took place during storage (P ≤ 0.001), differences among the groups were mainly detected when raw rather than cooked meat was analyzed, with the WB samples exhibiting the highest (P ≤ 0.001) 80% compression values. In spite of the increased amount of connective tissue in the WB cases, its thermally labile cross-links likely account for compression and shear-force values similar to those of normal breasts when measured on cooked samples. Similarly, the enlargement of the extracellular matrix and fibrosis might help explain the different fragmentation patterns observed between the superficial and the deep layer in the WB samples, with the superficial part exhibiting a higher amount of larger particles and an increase in particles of larger size during storage, compared to normal breasts. © 2017 Poultry Science Association Inc.
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited-capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
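The limited-capacity sampling account makes a directly testable prediction: if observers average only k sampled items, the error of the mean-size estimate grows in proportion to item heterogeneity and, more gently, with set size. A minimal simulation; the capacity k and the size distributions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4            # items actually sampled per trial (assumed capacity)
TRIALS = 20_000

def mean_estimate_error(set_size, sd_sizes):
    """SD of the error when the set mean is estimated from only K items."""
    sizes = rng.normal(50, sd_sizes, size=(TRIALS, set_size))
    sampled = sizes[:, :K] if set_size >= K else sizes
    err = sampled.mean(axis=1) - sizes.mean(axis=1)
    return err.std()

for n in (4, 8, 16):
    for sd in (2.0, 8.0):
        print(f"set size {n:2d}, heterogeneity sd {sd}: "
              f"error sd = {mean_estimate_error(n, sd):.2f}")
```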
Lee, Jae Chun; Kim, Yun-Il; Lee, Dong-Hun; Kim, Won-Jun; Park, Sung; Lee, Dong Bok
2011-08-01
Several kinds of nano-sized silica-based thermal insulation were prepared by dry processing of mixtures consisting of fumed silica, ceramic fiber, and a SiC opacifier. Infiltration of phenolic resin solution into the insulation, followed by hot-pressing, was attempted to improve the mechanical strength of the insulation. More than 22% resin content was necessary to increase the strength of the insulation by a factor of two or more. The structural integrity of the resin-infiltrated samples could be maintained, even after resin burn-out, presumably due to reinforcement from ceramic fibers. For all temperature ranges and similar sample bulk density values, the thermal conductivities of the samples after resin burn-out were consistently higher than those of the samples obtained from the dry process. Mercury intrusion curves indicated that the median size of the nanopores formed by primary silica aggregates in the samples after resin burn-out is consistently larger than that of the sample without resin infiltration.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased with point-count duration for both cumulative number of individuals and cumulative number of species at both locations. Although point counts appear to yield data with standard errors proportional to means, a square-root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
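The square-root transformation suggested above is the classical variance stabilizer for count data: if counts are roughly Poisson, the variance of √X is approximately 1/4 regardless of the mean, so the standard error no longer tracks the mean. A quick simulation check:

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_count in (4, 16, 64):
    x = rng.poisson(mean_count, size=100_000)
    print(f"mean {mean_count:2d}: var(X) = {x.var():5.1f}, "
          f"var(sqrt X) = {np.sqrt(x).var():.3f}")   # close to 0.25 each time
```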
Identification of missing variants by combining multiple analytic pipelines.
Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W
2018-04-16
After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based discovery of rare variants. This requires large sample sizes for statistical power and has raised questions about whether current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline due to computational cost, assuming that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is a prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate for increasingly large projects. The number and percentage of quality variants that pass quality filters but are missed by the one-pipeline approach rapidly increases with sample size.
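Mechanically, the multi-pipeline rescue amounts to a union of per-pipeline call sets keyed on (chrom, pos, ref, alt), with the single-pipeline miss rate read off the set differences. A toy sketch in which variant tuples stand in for parsed VCF records:

```python
# Toy call sets from two hypothetical pipelines, keyed by (chrom, pos, ref, alt).
pipeline_a = {("1", 101, "A", "G"), ("1", 250, "C", "T"), ("2", 77, "G", "A")}
pipeline_b = {("1", 101, "A", "G"), ("2", 77, "G", "A"), ("3", 12, "T", "C")}

union = pipeline_a | pipeline_b          # the combined call set
only_a = pipeline_a - pipeline_b         # variants pipeline B alone would miss
only_b = pipeline_b - pipeline_a         # variants pipeline A alone would miss

print(f"union: {len(union)} variants")
print(f"missed by B alone: {len(only_a)}; missed by A alone: {len(only_b)}")
print(f"single-pipeline (A) miss rate: {len(only_b) / len(union):.0%}")
```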
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
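Although the paper derives its sample sizes with a constrained Lagrange-multiplier scheme, the qualitative behavior it reports already follows from the elementary normal-approximation formula n ≈ (z_{1-α/2}·σ/(r·ED))², where r is the desired relative precision: n rises as precision and confidence tighten and as the anticipated ED falls. A sketch with an assumed between-scan standard deviation:

```python
from math import ceil
from scipy.stats import norm

def n_scans(ed_msv, sd_msv, rel_precision=0.05, confidence=0.95):
    """Scans needed so the CI half-width is rel_precision * ED
    (simple normal approximation, not the paper's constrained scheme)."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil((z * sd_msv / (rel_precision * ed_msv)) ** 2)

SD = 0.55  # assumed between-scan SD of the ED estimate, mSv (illustrative)
for ed in (4, 10):
    print(f"anticipated ED {ed} mSv -> n = {n_scans(ed, SD)}")
```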
NASA Astrophysics Data System (ADS)
Patel, N.; Mariazzi, S.; Toniutti, L.; Checchetto, R.; Miotello, A.; Dirè, S.; Brusa, R. S.
2007-09-01
Three series of silica thin films with thicknesses in the 300 nm range were deposited by spin coating on Si substrates using different compositions of the sol precursors. Film samples were thermally treated in static air at temperatures ranging from 300 to 900 °C. The effect of sol precursors and thermal treatment temperature on the film porosity was analysed by Fourier transform infrared (FTIR) spectroscopy, depth profiling with positron annihilation spectroscopy (DP-PAS) and the analysis of the capacitance-voltage (C-V) characteristic. The maximum of the total porosity was found to occur at a temperature of 600 °C when removal of porogen and OH groups was completed. Film densification due to the collapsing of the pores was observed after drying at 900 °C. DP-PAS provides evidence that the increase in the total porosity is related to a progressive increase in the pore size. The increase in the pore size never gives rise to the onset of connected porosity. In the silica film samples prepared using a low acidity sol precursor, the pore size is always lower than 1 nm. By increasing the acid catalyst ratio in the sol, larger pores are formed. Pores with size larger than 2.3 nm can be obtained by adding porogen to the sol. In each series of silica film samples the shift of the antisymmetric Si-O-Si transversal optical (TO3) mode upon thermal treatment correlates with a change of the pore size as evidenced by DP-PAS analysis. The pore microstructure of the three series of silica films is different at all the examined treatment temperatures and depends on the composition of the precursor sol.
Jian, Yu-Tao; Yang, Yue; Tian, Tian; Stanford, Clark; Zhang, Xin-Ping; Zhao, Ke
2015-01-01
Five types of porous nickel-titanium (NiTi) alloy samples of different porosities and pore sizes were fabricated. Based on compressive and fracture strengths, three groups of porous NiTi alloy samples underwent further cytocompatibility experiments. The porous NiTi alloys exhibited a low Young's modulus (2.0 GPa ~ 0.8 GPa). Both compressive strength (108.8 MPa ~ 56.2 MPa) and fracture strength (64.6 MPa ~ 41.6 MPa) decreased gradually with increasing mean pore size (MPS). Cells grew and spread well on all porous NiTi alloy samples. Cells attached more strongly on the control and blank groups than on all porous NiTi alloy samples (p < 0.05). Cell adhesion on the porous NiTi alloys was negatively correlated with MPS (277.2 μm ~ 566.5 μm; p < 0.05). More cells proliferated on the control and blank groups than on all porous NiTi alloy samples (p < 0.05). Cellular ALP activity on all porous NiTi alloy samples was higher than on the control and blank groups (p < 0.05). Porous NiTi alloys with optimized pore size could be potential orthopedic materials. PMID:26047515
Effect of the three-dimensional microstructure on the sound absorption of foams: A parametric study.
Chevillotte, Fabien; Perrot, Camille
2017-08-01
The purpose of this work is to systematically study the effect of the throat and the pore sizes on the sound absorbing properties of open-cell foams. The three-dimensional idealized unit cell used in this work enables to mimic the acoustical macro-behavior of a large class of cellular solid foams. This study is carried out for a normal incidence and also for a diffuse field excitation, with a relatively large range of sample thicknesses. The transport and sound absorbing properties are numerically studied as a function of the throat size, the pore size, and the sample thickness. The resulting diagrams show the ranges of the specific throat sizes and pore sizes where the sound absorption grading is maximized due to the pore morphology as a function of the sample thickness, and how it correlates with the corresponding transport parameters. These charts demonstrate, together with typical examples, how the morphological characteristics of foam could be modified in order to increase the visco-thermal dissipation effects.
NASA Astrophysics Data System (ADS)
Ghosh, P.; Bhowmik, R. N.; Das, M. R.; Mitra, P.
2017-04-01
We have studied the grain size dependent electrical conductivity, dielectric relaxation, and magnetic field dependent current-voltage (I-V) characteristics of nickel ferrite (NiFe2O4). The material was synthesized by a sol-gel self-combustion technique, followed by ball milling at room temperature in air to control the grain size, and characterized using X-ray diffraction (refined with MAUD software) and transmission electron microscopy. Impedance spectroscopy and I-V characteristics in the presence of variable magnetic fields confirmed the increase of resistivity for the fine powdered samples (grain size 5.17±0.6 nm) obtained by ball milling of the chemically routed sample. The activation energy for electrical charge hopping increased as the grain size was reduced by mechanical milling. The I-V curves showed many highly non-linear and irreversible electrical features, e.g., I-V loops and bi-stable electronic states (a low resistance state, LRS, and a high resistance state, HRS) on cycling the bias voltage direction during I-V measurement. The dc resistance of the chemically routed (unmilled) sample in the HRS (∼3.4876×10^4 Ω at 20 V in a magnetic field of 10 kOe) increased to ∼3.4152×10^5 Ω for the 10 h milled sample. The samples exhibited an unusual negative differential resistance (NDR) effect that gradually weakened with decreasing grain size. The room-temperature magnetoresistance of the samples was found to be substantial (∼25-65%). The control of electrical charge transport by a magnetic field, as observed in the present ferrimagnetic material, indicates magneto-electric coupling, and the results could be useful in spintronics applications.
Park size and disturbance: impact on soil heterogeneity - a case study of Tel-Aviv-Jaffa.
NASA Astrophysics Data System (ADS)
Zhevelev, Helena; Sarah, Pariente; Oz, Atar
2015-04-01
Parks and gardens are poly-functional elements of great importance in urban areas, and can be used to optimize physical and social components of these areas. This study aimed to investigate how soil properties vary with land use within urban parks and with park size. Ten parks of differing size (2-50 acres) were chosen at random in the city of Tel-Aviv-Jaffa. Soil was sampled in four microenvironments of each park (lawn, path, picnic area, and peripheral (unorganized) area), at three points and three depths (0-2, 5-10 and 10-20 cm). Penetration depth was measured at all sampling points, and electrical conductivity and organic matter content were determined for each soil sample. Average penetration depth increased drastically from the most disturbed microenvironments (path and picnic) to the less disturbed ones (lawn and peripheral). The maximal heterogeneity (by variances and percentiles) of penetration depth was found in the peripheral area. In this area, penetration depth increased with increasing park size, i.e., from 2.6 cm in the small parks to 3.7 cm in the large ones. Averages of organic matter content and electrical conductivity decreased with soil depth in all microenvironments and increased with decreasing disturbance of the microenvironments. Maximal heterogeneity for both of these properties was found in the picnic area. Increasing park size was accompanied by increasing organic matter content in the upper depth of the peripheral area, i.e., from 2.4% in the small parks to 4.5% in the large ones. In all microenvironments, increases in the averages of all studied soil properties were accompanied by increasing heterogeneity, i.e., variances and upper percentiles. The increase in the heterogeneity of the studied soil properties is attributed to improved ecological soil status in the peripheral area, on the one hand, and to the high anthropogenic pressure in the picnic area, on the other. This means that the urban park offers "islands" with better ecological conditions which improve the urban system.
NASA Astrophysics Data System (ADS)
Li, Liang-Liang; Qin, Xiao-Ying; Liu, Yong-Fei; Liu, Quan-Zhen
2015-06-01
(Sr0.95Gd0.05)TiO3 (SGTO) ceramics are successfully prepared via spark plasma sintering (SPS) at 1548, 1648, and 1748 K, using submicron-sized SGTO powders synthesized by a sol-gel method. The densities, microstructures, and thermoelectric properties of the SGTO ceramics are studied. Although the Seebeck coefficient shows no obvious difference for SPS temperatures between 1548 K and 1648 K, the electrical conductivity and the thermal conductivity increase remarkably due to the increase in grain size and density. The sample density exceeds 98% of the theoretical density when the sintering temperature reaches 1648 K, and the average grain size increases from ∼0.7 μm to 7 μm up to 1748 K. As a result, a maximum dimensionless figure of merit of ∼0.24 is achieved at ∼1000 K for the samples sintered at 1648 K and 1748 K, ∼71% larger than that (0.14 at ∼1000 K) of the sample sintered at 1548 K, owing to the enhancement of the power factor. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174292, 51101150, and 11374306).
Li, Xiao-li; An, Shu-qing; Xu, Tie-min; Liu, Yi-bo; Zhang, Li-juan; Zeng, Jiang-ping; Wang, Na
2015-06-01
The main analysis errors for pressed powder pellets of carbonate arise from the particle-size effect and the mineral effect. To eliminate the particle-size effect, ultrafine pressed-powder-pellet sample preparation was used for the determination of multiple elements and carbon dioxide in carbonate. The ultrafine powder was prepared with a FRITSCH planetary micro mill and tungsten carbide media; wet grinding was preferred to overcome agglomeration during grinding. The surface morphology of the pellet becomes smoother and neater, and the Compton scatter effect is reduced, as particle size decreases. The intensity of a spectral line varies with particle size; generally it increases as particle size decreases. However, when the particle size of more than one component of the material is decreased, the intensity may increase (for S, Si, Mg) or decrease (for Ca, Al, Ti, K), depending on the respective mass absorption coefficients. The change of phase composition with milling was also investigated, and the incident depth for each element was obtained from theoretical calculation. When the sample is ground to a particle size smaller than the penetration depth of all analytes, the effect of particle size on spectral-line intensity is greatly reduced. In this experiment, grinding the sample to less than 8 μm (d95) largely eliminated the particle-size effect; with correction by theoretical α coefficients and empirical coefficients, 14 major, minor, and trace elements in carbonate could be determined accurately, and the precision of the method was much improved, with RSD < 2% except for Na2O. Carbon is an ultra-light element with a low fluorescence yield and serious spectral interference. With a multilayer crystal (PX4), a coarse collimator, and empirical correction, X-ray spectrometry can be used to determine carbon dioxide in carbonate quantitatively. The carbon intensity increases with repeated measurements and with time delay, even when the pellet is stored in a desiccator, so the use of a freshly pressed pellet is suggested.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
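The reported variance components lend themselves to a quick arithmetic check. The sketch below (Python, using only the numbers quoted in the abstract) reproduces the percentage contributions of each step; the coefficient-of-variation calculation at the end is our own illustration, not a figure from the paper.

```python
# Variance components at 10 ng/g total aflatoxin (10 kg sample, 50 g subsample, HPLC),
# taken from the abstract.
sampling_var = 174.40
prep_var = 0.74
analytical_var = 0.27

total_var = sampling_var + prep_var + analytical_var
for name, v in [("sampling", sampling_var),
                ("preparation", prep_var),
                ("analytical", analytical_var)]:
    print(f"{name}: {100 * v / total_var:.1f}% of total variance")

# Illustrative: coefficient of variation of a single test result at 10 ng/g
cv = 100 * total_var ** 0.5 / 10.0
print(f"total SD = {total_var ** 0.5:.1f} ng/g, CV = {cv:.0f}%")
```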
Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.
2007-01-01
This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ∼96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
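Rubin's calibrated algorithm matches image autocorrelation values at selected lags against curves built from images of sieved sediment of known size. The snippet below is only a minimal sketch of that core idea — coarser grains yield a more slowly decaying autocorrelation curve — with a random array standing in for a real bed-sediment photograph and no calibration data.

```python
import numpy as np

def autocorr_curve(image, max_lag=30):
    """Mean horizontal autocorrelation of pixel intensity for lags 1..max_lag.
    Coarser grains -> slower decay of the curve (larger correlation length)."""
    img = image - image.mean()
    denom = (img ** 2).sum()  # crude normalization; adequate for a sketch
    return np.array([(img[:, :-lag] * img[:, lag:]).sum() / denom
                     for lag in range(1, max_lag + 1)])

# Calibration idea (hypothetical): build curves from images of sieved sediment
# of known size, then estimate an unknown image's grain size by least-squares
# matching of its curve against the calibration curves.
rng = np.random.default_rng(0)
unknown = rng.random((200, 200))   # stand-in for a bed-sediment image
print(autocorr_curve(unknown)[:5])
```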
Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem
ERIC Educational Resources Information Center
Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari
2007-01-01
Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
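A few lines of Python reproduce the essence of such a simulation: draw repeated samples from a skewed parent population and watch the distribution of sample means tighten and normalize as the sample size grows. This is a generic illustration, not the ERIC tool itself.

```python
import numpy as np

rng = np.random.default_rng(42)
parent = rng.exponential(scale=2.0, size=100_000)  # skewed parent population

for n in (2, 5, 30):
    # Draw 10,000 samples of size n and keep each sample mean
    means = rng.choice(parent, size=(10_000, n)).mean(axis=1)
    print(f"n={n:2d}: mean={means.mean():.3f}, "
          f"SE={means.std(ddof=1):.3f} (theory {parent.std()/np.sqrt(n):.3f})")
```

As n increases, the empirical standard error tracks the theoretical σ/√n and the histogram of means loses the parent's skew.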
NASA Astrophysics Data System (ADS)
Maulia, R.; Putra, R. A.; Suharyadi, E.
2017-05-01
Mg0.5Ni0.5Fe2O4 nanoparticles have been successfully synthesized using a co-precipitation method while varying the synthesis parameters, i.e., synthesis temperature and NaOH concentration. X-ray diffraction (XRD) patterns showed that the nanoparticles have cubic spinel structures with an additional γ-Fe2O3 phase, and that the particle size varies within the range 4.3-6.7 nm as a result of the varied synthesis parameters. Transmission electron microscopy (TEM) images showed that the nanoparticles were agglomerated. The diffraction rings observed in selected-area electron diffraction showed that the samples were polycrystalline and confirmed the peaks appearing in the XRD patterns. The coercivity increased with increasing particle size, from 44.7 Oe to 49.6 Oe, when NaOH concentration was varied, and decreased with increasing particle size, from 46.8 Oe to 45.1 Oe, when synthesis temperature was varied. The maximum magnetization increased with increasing ferrite-phase content, from 3.7 emu/g to 5.4 emu/g, for the samples with varied NaOH concentration; for the samples with varied synthesis temperature it ranged from 4.4 emu/g to 5.7 emu/g owing to differences in crystal structure.
Release of carbon nanotubes from an epoxy-based nanocomposite during an abrasion process.
Schlagenhauf, Lukas; Chu, Bryan T T; Buha, Jelena; Nüesch, Frank; Wang, Jing
2012-07-03
The abrasion behavior of an epoxy/carbon nanotube (CNT) nanocomposite was investigated. An experimental setup has been established to perform abrasion, particle measurement, and collection all in one. The abraded particles were characterized by particle size distribution and by electron microscopy. The abrasion process was carried out with a Taber Abraser, and the released particles were collected by a tube for further investigation. The particle size distributions were measured with a scanning mobility particle sizer (SMPS) and an aerodynamic particle sizer (APS) and revealed four size modes for all measured samples. The mode corresponding to the smallest particle sizes of 300-400 nm was measured with the SMPS and showed a trend of increasing size with increasing nanofiller content. The three measured modes with particle sizes from 0.6 to 2.5 μm, measured with the APS, were similar for all samples. The measured particle concentrations were between 8000 and 20,000 particles/cm(3) for measurements with the SMPS and between 1000 and 3000 particles/cm(3) for measurements with the APS. Imaging by transmission electron microscopy (TEM) revealed that free-standing individual CNTs and agglomerates were emitted during abrasion.
Physical properties of the WAIS Divide ice core
Fitzpatrick, Joan J.; Voigt, Donald E.; Fegyveresi, John M.; Stevens, Nathan T.; Spencer, Matthew K.; Cole-Dai, Jihong; Alley, Richard B.; Jardine, Gabriella E.; Cravens, Eric; Wilen, Lawrence A.; Fudge, T. J.; McConnell, Joseph R.
2014-01-01
The WAIS (West Antarctic Ice Sheet) Divide deep ice core was recently completed to a total depth of 3405 m, ending ∼50 m above the bed. Investigation of the visual stratigraphy and grain characteristics indicates that the ice column at the drilling location is undisturbed by any large-scale overturning or discontinuity. The climate record developed from this core is therefore likely to be continuous and robust. Measured grain-growth rates, recrystallization characteristics, and grain-size response at climate transitions fit within current understanding. Significant impurity control on grain size is indicated from correlation analysis between impurity loading and grain size. Bubble-number densities and bubble sizes and shapes are presented through the full extent of the bubbly ice. Where bubble elongation is observed, the direction of elongation is preferentially parallel to the trace of the basal (0001) plane. Preferred crystallographic orientation of grains is present in the shallowest samples measured, and increases with depth, progressing to a vertical-girdle pattern that tightens to a vertical single-maximum fabric. This single-maximum fabric switches into multiple maxima as the grain size increases rapidly in the deepest, warmest ice. A strong dependence of the fabric on the impurity-mediated grain size is apparent in the deepest samples.
Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.
1999-01-01
Fly ash samples were collected in November and December of 1994, from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 μm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the -500 mesh [-25 μm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the -325 mesh (-42 μm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
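For readers unfamiliar with the method-of-moments (Matheron) estimator that both of these studies caution about, the sketch below computes an empirical variogram from synthetic, non-Gaussian "throughfall" values at random sampling points. The 50 m plot, 150-point sample, and lognormal values are illustrative assumptions; the residual-maximum-likelihood alternative the studies recommend requires a model-based fit not shown here.

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Method-of-moments (Matheron) estimator: gamma(h) is the mean of
    0.5*(z_i - z_j)^2 over pairs whose separation falls in the bin around h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    dist, semivar = d[iu], sq[iu]
    gamma, counts = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (dist >= lo) & (dist < hi)
        gamma.append(semivar[sel].mean() if sel.any() else np.nan)
        counts.append(sel.sum())
    return np.array(gamma), np.array(counts)

# 150 random sampling points on a 50 m x 50 m plot; heavy-tailed synthetic values
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(150, 2))
z = rng.lognormal(mean=0.0, sigma=0.5, size=150)
g, n_pairs = empirical_variogram(xy, z, bins=np.linspace(0, 25, 11))
print(np.round(g, 3), n_pairs)
```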
Vinson, M.R.; Budy, P.
2011-01-01
We compared sources of variability and cost in paired stomach content and stable isotope samples from three salmonid species collected in September 2001–2005 and describe the relative information provided by each method in terms of measuring diet overlap and food web study design. Based on diet analyses, diet overlap among brown trout, rainbow trout, and mountain whitefish was high, and we observed little variation in diets among years. In contrast, for sample sizes n ≥ 25, the 95% confidence intervals (CI) around mean δ15N and δ13C for the three target species did not overlap, and species, year, and fish size effects were significantly different, implying that these species likely consumed similar prey but in different proportions. Stable isotope processing costs were US$12 per sample, while stomach content analysis costs averaged US$25.49 ± $2.91 (95% CI) and ranged from US$1.50 for an empty stomach to US$291.50 for a sample with 2330 items. Precision in both δ15N and δ13C and in mean diet overlap values based on stomach contents increased considerably up to a sample size of n = 10 and plateaued around n = 25, with little further increase in precision.
[Experimental study on particle size distributions of an engine fueled with blends of biodiesel].
Lu, Xiao-Ming; Ge, Yun-Shan; Han, Xiu-Kun; Wu, Si-Jin; Zhu, Rong-Fu; He, Chao
2007-04-01
The purpose of this study was to obtain the particle size distributions of an engine fueled with biodiesel and its blends. A turbocharged DI diesel engine was tested on a dynamometer. A pump (80 L/min) and glass fiber filters 90 mm in diameter were used to sample engine particles from the exhaust pipe; the sampling duration was 10 minutes. Particle size distributions were measured by a laser diffraction particle size analyzer. Results indicated that higher engine speed resulted in smaller particle sizes and narrower distributions. The modes on the distribution curves and the mode variation were larger for dry samples than for wet samples (dry: around 10-12 μm vs. wet: around 4-10 μm). At low speed, the Sauter mean diameter d32 of the dry samples was largest with B100, smallest with diesel fuel, and intermediate with B20; at high speed, d32 was largest with B20, smallest with B100, and intermediate with diesel. The median diameter d(0.5) reflected the same pattern. Except at 2000 r/min, d32 of the wet samples was largest with B20, smallest with diesel, and intermediate with B100. The large mode variation resulted in an increase of d32.
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
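The trade-off the authors describe — raising power through effect size rather than N — is easy to quantify for a simple two-group comparison. The sketch below uses statsmodels' TTestIndPower to show how the required group size falls if design improvements (better measurement, stronger implementation, less attrition) raise the standardized effect from 0.30 to 0.45; the specific effect sizes are illustrative choices, not figures from the chapter.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Baseline: subjects per group to detect d = 0.30 at 80% power, alpha = .05
n_base = analysis.solve_power(effect_size=0.30, alpha=0.05, power=0.80)
print(f"d=0.30 -> n per group = {n_base:.0f}")    # ~175

# A larger effective effect size buys the same power with far fewer subjects
n_better = analysis.solve_power(effect_size=0.45, alpha=0.05, power=0.80)
print(f"d=0.45 -> n per group = {n_better:.0f}")  # ~78
```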
NASA Astrophysics Data System (ADS)
Asefaw Berhe, Asmeret; Kaiser, Michael; Ghezzehei, Teamrat; Myrold, David; Kleber, Markus
2013-04-01
The effectiveness of charcoal and calcium carbonate applications to improve soil conditions has been well documented. However, their influence on the formation of silt-sized aggregates and on the amount and protection of associated organic matter (OM) against microbial decomposition is still largely unknown. For sustainable management of agricultural soils, silt-sized aggregates (2-53 µm) are of particularly large importance because they store up to 60% of soil organic carbon, with mean residence times between 70 and 400 years. The objectives were i) to analyze the ability of CaCO3 and/or charcoal application to increase the amount of silt-sized aggregates and associated OM, ii) to vary soil mineral conditions to establish relevant boundary conditions for amendment-induced aggregation processes, and iii) to determine how amendment-induced changes in the formation of silt-sized aggregates relate to microbial decomposition of OM. We set up artificial highly reactive (HR; clay: 40%, sand: 57%, OM: 3%) and low-reactivity (LR; clay: 10%, sand: 89%, OM: 1%) soils and mixed them with charcoal (CC, 1%) and/or calcium carbonate (Ca, 0.2%). The samples were adjusted to a water potential of 0.3 bar, and subsamples were incubated with microbial inoculum (MO). After a 16-week aggregation experiment, size fractions were separated by wet sieving and sedimentation. Since no mineral compounds in the 2-53 µm size range were used in the artificial mixtures, we consider material recovered in this fraction to be silt-sized aggregates, which was confirmed by SEM analyses. For the LR mixtures, we detected increasing N concentrations within the 2-53 µm fractions of the charcoal-amended samples (CC, CC+Ca, and CC+Ca+MO) compared with the Control sample, with the strongest effect for the CC+Ca+MO sample. This indicates an association of N-containing, microbially derived OM with silt-sized aggregates. For the charcoal-amended LR and HR mixtures, the C concentrations of the 2-53 µm fractions are larger than those of the respective fractions of the Control samples, but the effect is several times stronger for the LR mixtures. The C concentrations of the 2-53 µm fractions relative to the total C amount of the LR and HR mixtures are between 30 and 50%. The charcoal-amended samples generally show larger relative C amounts associated with the 2-53 µm fractions than the Control samples. Benefits for aggregate formation and OM storage were larger for the sandy (LR) than for the clayey (HR) soil. The data obtained are similar to respective data for natural soils. Consequently, the suggested microcosm experiments are suitable for analyzing mechanisms of soil aggregation processes.
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
2011-01-01
Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
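The 750-patient figure above depends on event rates and follow-up assumptions not reproduced in the abstract, but the standard Schoenfeld approximation shows how a targeted 30% hazard reduction translates into required events for a two-arm survival comparison. This is a generic sketch, not the trial's actual calculation.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Schoenfeld approximation: events (deaths) needed to detect hazard
    ratio `hr` with a two-sided log-rank test at the given power."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (allocation * (1 - allocation) * math.log(hr) ** 2)

# A 30% hazard reduction (HR = 0.70), 80% power, two-sided alpha = 0.05:
print(f"required events ≈ {schoenfeld_events(0.70):.0f}")   # ≈ 247 deaths
```

The required total enrollment then follows from the expected proportion of patients who die during follow-up, which is where disease-specific assumptions enter.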
Karumendu, L U; Ven, R van de; Kerr, M J; Lanza, M; Hopkins, D L
2009-08-01
The impact of homogenization speed on particle size (PS) results was examined using samples from the M. longissimus thoracis et lumborum (LL) of 40 lambs. One gram duplicate samples from meat aged for 1 and 5 days were homogenized at five different speeds: 11,000, 13,000, 16,000, 19,000, and 22,000 rpm. In addition, LL samples from 30 different lamb carcases, also aged for 1 and 5 days, were used to compare PS and myofibrillar fragmentation index (MFI) values. In this case, 1 g duplicate samples (n=30) were homogenized at 16,000 rpm and the other half (0.5 g samples) at 11,000 rpm (n=30). The homogenates were then subjected to respective combinations of treatments, which included either PS analysis or the determination of MFI, both with or without three cycles of centrifugation. All 140 samples of LL included 65 g blocks for subsequent shear force (SF) testing. Homogenization at 16,000 rpm provided the greatest ability to detect ageing differences in particle size between samples aged for 1 and 5 days. Particle size at the 25% quantile provided the best result for detecting differences due to ageing. As ageing increased, the mean PS decreased and was significantly (P<0.001) less for 5-day aged samples than for 1-day aged samples, while MFI values significantly increased (P<0.001) as the ageing period increased. When comparing the PS and MFI methods it became apparent that, as opposed to the MFI method, there was a greater coefficient of variation for the PS method, which warranted a quality assurance system. Given this requirement, and on examination of the mean, standard deviation, and 25% quantile of the PS data, it was concluded that three cycles of centrifugation were not necessary; this also applied to the MFI method. There were significant correlations (P<0.001), within the same lamb loin sample aged for a given period, between mean MFI and mean PS (-0.53), mean MFI and mean SF (-0.38), and mean PS and mean SF (0.23). It was concluded that PS analysis offers significant potential for streamlining the determination of myofibrillar degradation when samples are measured after homogenization at 16,000 rpm with no centrifugation.
NASA Astrophysics Data System (ADS)
Fukuda, Jun-ichi; Muto, Jun; Nagahama, Hiroyuki
2018-01-01
We performed two axial deformation experiments on synthetic polycrystalline anorthite samples with a grain size of 3 μm and 5 vol% Si-Al-rich glass at 900 °C, a confining pressure of 1.0 GPa, and a strain rate of 10^-4.8 s^-1. One sample was deformed as-is (dry); in the other sample, two half-cut samples (two cores) with 0.15 wt% water at the boundary were put together in the apparatus. The mechanical data for both samples were essentially identical, with a yield strength of 700 MPa and strain weakening of 500 MPa by 20% strain. The dry sample appears to have been deformed by distributed fracturing. Meanwhile, the water-added sample shows plastic strain localization in addition to fracturing, and reaction products composed of zoisite grains and SiO2 materials along the boundary between the two sample cores. Infrared spectra of the water-added sample showed dominant water bands of zoisite. The maximum water content was 1500 wt ppm H2O at the two-core boundary, which is the same as the added amount. The water contents gradually decreased from the boundary to the sample interior, and the gradient fitted well with the solution of the one-dimensional diffusion equation. The determined diffusion coefficient was 7.4 × 10^-13 m^2/s, which agrees with previous data for the grain boundary diffusion of water. The anorthite grains in the water-added sample showed no crystallographic preferred orientation. Textural observations and water diffusion indicate that water promotes the plastic deformation of polycrystalline anorthite by grain-size-sensitive creep as well as simultaneous reactions. We calculated the strain rate evolution controlled by water diffusion in feldspar aggregates surrounded by a water source, assuming water diffusion into a dry rock mass of variable size. Diffused water weakens the rock mass with time under compressive stress. The calculated strain rate decreased from 10^-10 to 10^-15 s^-1 as the size of the rock mass to which water is supplied increased from < 1 m to 1 km and the duration of water diffusion increased from < 1 to 10,000 years. This indicates a decrease in the strain rate in a rock mass with increasing deformation via water diffusion.
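The diffusion-profile fit can be sketched directly from the one-dimensional constant-source solution C(x,t) = C0·erfc(x/(2√(Dt))). In the snippet below, the boundary concentration of 1500 wt ppm and the target D ≈ 7.4 × 10^-13 m^2/s come from the abstract; the run time T and the measured profile are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

C0 = 1500.0   # boundary water content, wt ppm H2O (from the abstract)
T = 1.8e4     # assumed experiment duration in seconds (hypothetical)

def profile(x, D):
    """Constant-source 1-D diffusion: C(x,t) = C0 * erfc(x / (2*sqrt(D*t)))."""
    return C0 * erfc(x / (2.0 * np.sqrt(D * T)))

# Hypothetical water contents (wt ppm) at distances x (m) from the core boundary
x_obs = np.array([0.0, 1e-4, 2e-4, 4e-4, 6e-4])
c_obs = np.array([1500.0, 815.0, 330.0, 21.0, 0.5])

(D_fit,), _ = curve_fit(profile, x_obs, c_obs, p0=[1e-12])
print(f"D ≈ {D_fit:.1e} m^2/s")   # the study reports 7.4e-13 m^2/s
```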
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gihring, Thomas; Green, Stefan; Schadt, Christopher Warren
2011-01-01
Technologies for massively parallel sequencing are revolutionizing microbial ecology and are vastly increasing the scale of ribosomal RNA (rRNA) gene studies. Although pyrosequencing has increased the breadth and depth of possible rRNA gene sampling, one drawback is that the number of reads obtained per sample is difficult to control. Pyrosequencing libraries typically vary widely in the number of sequences per sample, even within individual studies, and there is a need to revisit the behaviour of richness estimators and diversity indices with variable gene sequence library sizes. Multiple reports and review papers have demonstrated the bias in non-parametric richness estimators (e.g., Chao1 and ACE) and diversity indices when using clone libraries. However, we found that biased community comparisons are accumulating in the literature. Here we demonstrate the effects of sample size on Chao1, ACE, CatchAll, Shannon, Chao-Shen and Simpson's estimations specifically using pyrosequencing libraries. The need to equalize the number of reads being compared across libraries is reiterated, and investigators are directed towards available tools for making unbiased diversity comparisons.
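The library-size sensitivity the authors demonstrate is easy to reproduce for Chao1, which estimates richness as S_obs + F1²/(2·F2) from singleton (F1) and doubleton (F2) counts. The sketch below builds a mock long-tailed community and compares Chao1 before and after rarefying to a common depth; the community parameters are arbitrary illustrations.

```python
import numpy as np

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2*F2); bias-corrected form if F2 = 0."""
    counts = np.asarray(counts)
    s_obs = (counts > 0).sum()
    f1 = (counts == 1).sum()
    f2 = (counts == 2).sum()
    return s_obs + f1 ** 2 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

def rarefy(counts, depth, rng):
    """Randomly subsample reads without replacement to a common depth."""
    reads = np.repeat(np.arange(len(counts)), counts)
    sub = rng.choice(reads, size=depth, replace=False)
    return np.bincount(sub, minlength=len(counts))

# A long-tailed mock community of 500 OTUs; compare Chao1 at two library sizes
rng = np.random.default_rng(7)
community = rng.geometric(p=0.01, size=500)
print("full library:        ", chao1(community))
print("rarefied to 2000 reads:", chao1(rarefy(community, 2000, rng)))
```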
NASA Astrophysics Data System (ADS)
Shivaramu, N. J.; Lakshminarasappa, B. N.; Nagabhushana, K. R.; Singh, Fouran
2016-02-01
Nanocrystalline Y2O3 was synthesized by a solution combustion technique using urea and glycine as fuels. The X-ray diffraction (XRD) pattern of the as-prepared sample shows an amorphous nature, while the annealed samples show a cubic structure. The average crystallite size, calculated using Scherrer's formula, is found to be in the range 14-30 nm for samples synthesized using urea and 15-20 nm for samples synthesized using glycine. Field emission scanning electron microscopy (FE-SEM) images of Y2O3 samples annealed at 1173 K show well-separated spherical particles with an average particle size in the range 28-35 nm. Fourier transform infrared (FTIR) and Raman spectroscopy reveal Y-O bond stretching. Electron spin resonance (ESR) shows V- centers, O2- and Y2+ defects. A broad photoluminescence (PL) emission peaking at 386 nm is observed when the sample is excited at 252 nm. The thermoluminescence (TL) properties of γ-irradiated Y2O3 nanopowder were studied at a heating rate of 5 K s^-1. The samples prepared using urea show a prominent, well-resolved glow peak at 383 K and a weak one at 570 K. The TL glow peak intensity (Im1) at 383 K increases with γ-dose up to 6.0 kGy and then decreases with further increase in dose. The Y2O3 prepared with glycine shows prominent TL glows with peaks at 396 K and 590 K. Of the two fuels, urea yields Y2O3 with simple, well-resolved TL glow curves; this might be a fuel effect and hence a particle-size effect. The kinetic parameters are calculated by Chen's glow-curve peak-shape method and the results are discussed in detail.
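As a worked example of the Scherrer estimate used here, D = Kλ/(β·cosθ) with shape factor K ≈ 0.9 and Cu Kα radiation (λ = 0.15406 nm); β is the peak FWHM in radians and θ half the diffraction angle. The 2θ position and FWHM below are hypothetical values chosen to land inside the reported 14-30 nm range, not measurements from the paper.

```python
import math

def scherrer(two_theta_deg, fwhm_deg, K=0.9, wavelength_nm=0.15406):
    """Crystallite size D = K*lambda / (beta * cos(theta)), Cu K-alpha assumed."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical Y2O3 (222) reflection near 2-theta = 29.1 deg with a 0.55 deg FWHM:
print(f"D ≈ {scherrer(29.1, 0.55):.1f} nm")   # ≈ 15 nm, inside the 14-30 nm range
```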
Size-selective separation of submicron particles in suspensions with ultrasonic atomization.
Nii, Susumu; Oka, Naoyoshi
2014-11-01
Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Separation performance was characterized by analyzing the size and concentration of the collected particles with a high-resolution method. Irradiation of the sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the material type. Addition of a small amount of the nonionic surfactant PONPE20 to the SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing the sample suspension eliminated the separation performance; dissolved air in the suspension plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Marin, N.; Farmer, J. D.; Zacny, K.; Sellar, R. G.; Nunez, J.
2011-12-01
This study seeks to understand variations in composition and texture of basaltic pyroclastic materials used in the 2010 International Lunar Surface Operation-In-Situ Resource Utilization Analogue Test (ILSO-ISRU) held on the slopes of Mauna Kea Volcano, Hawaii (1). The quantity and quality of resources delivered by ISRU depend upon the nature of the materials processed (2). We obtained a one-meter deep auger cuttings sample of a basaltic regolith at the primary site for feedstock materials being mined for the ISRU field test. The auger sample was subdivided into six ~16 cm depth increments, and each interval was sampled and characterized in the field using the Multispectral Microscopic Imager (MMI; 3) and a portable X-ray diffractometer (Terra, InXitu Instruments, Inc.). Splits from each sampled interval were returned to the lab and analyzed using more definitive methods, including high-resolution powder X-ray diffraction and thermal infrared (TIR) spectroscopy. The mineralogy and microtexture (grain size, sorting, roundness, and sphericity) of the auger samples were determined using petrographic point-count measurements obtained from grain-mount thin sections. NIH ImageJ (http://rsb.info.nih.gov/ij/) was applied to digital images of thin sections to document changes in particle size with depth. Results from TIR showed a general predominance of volcanic glass, along with plagioclase, olivine, and clinopyroxene. In addition, thin section and XRPD analyses showed a down-core increase in the abundance of hydrated iron oxides (as in situ weathering products). Quantitative point-count analyses confirmed the abundance of volcanic glass in samples, but also revealed olivine and pyroxene to be minor components that decreased in abundance with depth. Furthermore, point-count and XRD analyses showed a decrease in magnetite and ilmenite with depth, accompanied by an increase in Fe3+ phases, including hematite and ferrihydrite. ImageJ particle analysis showed that the average grain size decreased down the depth profile. This decrease in average grain size and increase in hydrated iron oxides downhole suggest that the most favorable ISRU feedstock materials were sampled in the lower half-meter of the mined section.
Scott, Frank I; McConnell, Ryan A; Lewis, Matthew E; Lewis, James D
2012-04-01
Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published gastroenterology research from 1980 to 2010. Twenty original clinical articles were randomly selected from each of three journals from 1980, 1990, 2000, and 2010. Each article was assessed for topic, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, reporting of various statistical methods, and external funding. From 1980 to 2010, there was a significant increase in analytic studies, clinical outcomes, number of authors per article, multicenter collaboration, sample size, and external funding. There was increased reporting of P values, confidence intervals, and power calculations, and increased use of large multicenter databases, multivariate analyses, and bioinformatics. The complexity of clinical gastroenterology and hepatology research has increased dramatically, highlighting the need for advanced training of clinical investigators.
Elastic moduli in nano-size samples of amorphous solids: System size dependence
NASA Astrophysics Data System (ADS)
Cohen, Yossi; Procaccia, Itamar
2012-08-01
This letter is motivated by some recent experiments on pancake-shaped nano-samples of metallic glass that indicate a decline in the measured shear modulus upon decreasing the sample radius. Similar measurements on crystalline samples of the same dimensions showed a much more modest change. Here we offer a theory of this phenomenon; we argue that such results are generically expected for any amorphous solid, with the main effect being the increased contribution of surfaces relative to the bulk as the samples get smaller. We employ exact relations between the shear modulus and the eigenvalues of the system's Hessian matrix to explore the role of surface modes in affecting the elastic moduli.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Rajesh, E-mail: rkkaushik06@gmail.com; Deptt. of Physics,Vaish College of Engineering, Rohtak-124001, Haryana; Praveen,
2016-05-06
In the present work, magnesium oxide (MgO) samples were doped with different concentrations of the transition metal oxide nickel oxide (NiO) using a chemical co-precipitation method. The doping levels were varied (NiO: 5%, 10%, 15%), and all samples were calcined at 600 °C for 4 h and 8 h, respectively. Structural analysis of the calcined materials was carried out by X-ray diffraction (XRD), which revealed average crystallite sizes in the nano region, i.e., 21.77-31.13 nm (table 1). The calcined powders were also characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), UV-visible spectroscopy, and transmission electron microscopy (TEM). The effects of dopant concentration, calcination temperature, and calcination duration were studied, along with the effect of varying dopant concentration on the morphology and optical properties of the calcined nanomaterials. The results show that the crystallite size of the nanocomposites increases with increasing dopant concentration or increasing calcination duration. The optical band gap decreases with increasing sintering time and increases with increasing dopant concentration. TEM results coincide with the XRD results and show that the particles are polycrystalline in nature. FTIR spectra show that the particles of all samples are pure in composition and that the transmission rate increases with calcination duration.
Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?
NASA Astrophysics Data System (ADS)
Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.
2017-12-01
Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has even been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible for both setting initial outer TC size and determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius at which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin during future climates, as revealed by 1) a statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between distributions of outer TC size from current and future climate simulations as shown using two-sample Kolmogorov-Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not uniformly increase within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help focus on identifying the source of the spatial variability in outer TC size increases within each basin during future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
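The two statistical checks named in the abstract — a two-sample Kolmogorov-Smirnov test on the size distributions and a 1,000-sample bootstrap with replacement for the median shift — can be sketched as follows; the lognormal "outer TC size" samples are synthetic stand-ins, not model output.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for outer TC size (km) in current vs. future climate simulations
current = rng.lognormal(mean=np.log(250), sigma=0.35, size=800)
future = rng.lognormal(mean=np.log(265), sigma=0.35, size=800)

# Two-sample Kolmogorov-Smirnov test on the full distributions
stat, p = ks_2samp(current, future)
print(f"KS stat = {stat:.3f}, p = {p:.4f}")

# 1,000-sample bootstrap (resampling with replacement) for the median shift
boot = [np.median(rng.choice(future, future.size)) -
        np.median(rng.choice(current, current.size)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"median increase: {np.median(future) - np.median(current):.1f} km "
      f"(95% CI {lo:.1f} to {hi:.1f})")
```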
Guo, Shuang; Zhu, Chenqi; Gao-Yang, Yaya; Qiu, Bailing; Wu, Di; Liang, Qihui; He, Jiayuan; Han, Nanyin
2016-02-01
Gravitational field-flow fractionation (GrFFF) is the simplest field-flow fractionation technique in terms of principle and operation; the Earth's gravity serves as its external field. Particles of different sizes are injected into a thin channel and carried by the carrier fluid, and the different velocities of the carrier liquid at different positions result in a size-based separation. A GrFFF instrument was designed and constructed. Two kinds of polystyrene (PS) particles with different sizes (20 µm and 6 µm) were chosen as model particles. In this work, separation of the sample was achieved by changing the concentration of NaN3, the percentage of mixed surfactant in the carrier liquid, and the flow rate of the carrier liquid; six levels were set for each factor. The effects of these three factors on the retention ratio (R) and plate height (H) of the PS particles were investigated. R increased and H decreased with increasing particle size, while both R and H increased with increasing flow rate. R and H also increased with increasing NaN3 concentration, because the electrostatic repulsive force between the particles and the glass channel wall increased, which allowed the particles to approach closer to the channel wall. The results showed that resolution and retention time can be improved by adjusting the experimental conditions and provide useful guidance for further applications of the GrFFF technique.
Composition of hydroponic lettuce: effect of time of day, plant size, and season.
Gent, Martin P N
2012-02-01
The diurnal variation of nitrate and sugars in leafy green vegetables may vary with plant size or the ability of plants to buffer the uptake, synthesis, and use of metabolites. Bibb lettuce was grown in hydroponics in a greenhouse and sampled at 3 h intervals throughout one day in August 2007 and another day in November 2008 to determine fresh weight, dry matter, and concentration of nitrate and sugars. Plantings differing in size and age were sampled on each date. The dry/fresh weight ratio increased during the daylight period. This increase was greater for small compared to large plants. On a fresh weight basis, tissue nitrate of small plants was only half that of larger plants. The variation in concentration with time was much less for nitrate than for soluble sugars. Soluble sugars were similar for all plant sizes early in the day, but they increased far more for small compared to large plants in the long days of summer. The greatest yield on a fresh weight basis was obtained by harvesting lettuce at dawn. Although dry matter or sugar content increased later in the day, there is no commercial benefit to delaying harvest as consumers do not buy lettuce for these attributes. Copyright © 2011 Society of Chemical Industry.
The relationship of motor unit size, firing rate and force.
Conwit, R A; Stashuk, D; Tracy, B; McHugh, M; Brown, W F; Metter, E J
1999-07-01
Using a clinical electromyographic (EMG) protocol, motor units were sampled from the quadriceps femoris during isometric contractions at fixed force levels to examine how average motor unit size and firing rate relate to force generation. Mean firing rates (mFRs) and sizes (mean surface-detected motor unit action potential (mS-MUAP) area) of samples of active motor units were assessed at various force levels in 79 subjects. MS-MUAP size increased linearly with increased force generation, while mFR remained relatively constant up to 30% of a maximal force and increased appreciably only at higher force levels. A relationship was found between muscle force and mS-MUAP area (r2 = 0.67), mFR (r2 = 0.38), and the product of mS-MUAP area and mFR (mS-MUAP x mFR) (r2 = 0.70). The results support the hypothesis that motor units are recruited in an orderly manner during forceful contractions, and that in large muscles only at higher levels of contraction ( > 30% MVC) do mFRs increase appreciably. MS-MUAP and mFR can be assessed using clinical EMG techniques and they may provide a physiological basis for analyzing the role of motor units during muscle force generation.
NASA Astrophysics Data System (ADS)
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-09-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
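A minimal sketch of the paired-samples comparison the authors describe: element-by-element concentrations of a fine-grained fragment are paired with the corresponding means of a candidate basalt suite and tested with scipy's ttest_rel. All concentration values below are hypothetical illustrations, not data from the study.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical element concentrations (wt%) in the same order for a fragment
# and the mean composition of a candidate reference basalt suite
elements = ["SiO2", "TiO2", "Al2O3", "FeO", "MgO", "CaO"]
fragment = np.array([45.1, 2.6, 9.8, 21.0, 9.5, 10.9])
suite_mean = np.array([45.0, 2.9, 10.1, 20.6, 9.9, 10.7])

# Paired t-test across elements: a high p-value indicates no detectable
# compositional difference, consistent with membership in the suite
t, p = ttest_rel(fragment, suite_mean)
print(f"t = {t:.2f}, p = {p:.3f}")
```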
NASA Astrophysics Data System (ADS)
van Sebille, M.; Fusi, A.; Xie, L.; Ali, H.; van Swaaij, R. A. C. M. M.; Leifer, K.; Zeman, M.
2016-09-01
We report the effect of hydrogen on the crystallization process of silicon nanocrystals embedded in a silicon oxide matrix. We show that hydrogen gas during annealing leads to a lower sub-band gap absorption, indicating passivation of defects created during annealing. Samples annealed in pure nitrogen show expected trends according to crystallization theory. Samples annealed in forming gas, however, deviate from this trend. Their crystallinity decreases for increased annealing time. Furthermore, we observe a decrease in the mean nanocrystal size and the size distribution broadens, indicating that hydrogen causes a size reduction of the silicon nanocrystals.
Geochemical and radiological characterization of soils from former radium processing sites
Landa, E.R.
1984-01-01
Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (< 45 µm) fraction, the pattern was not always one of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
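The optimism of resubstitution classification rates with small groups and high overlap is easy to demonstrate: the sketch below fits a linear discriminant to two overlapping simulated groups and contrasts the apparent (resubstitution) rate with a cross-validated one. The dimensionality, group size, and separation are illustrative choices, not the paper's factorial design.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
p, n_per_group = 5, 12                      # 5 variables, small groups
# Two overlapping groups: means differ by 0.5 SD on each variable
X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, p)),
               rng.normal(0.5, 1.0, (n_per_group, p))])
y = np.repeat([0, 1], n_per_group)

lda = LinearDiscriminantAnalysis()
apparent = lda.fit(X, y).score(X, y)            # resubstitution rate
cv = cross_val_score(lda, X, y, cv=4).mean()    # cross-validated rate
print(f"apparent correct-classification rate: {apparent:.2f}")
print(f"cross-validated rate:                 {cv:.2f}")  # noticeably lower
```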
ERIC Educational Resources Information Center
Dykiert, Dominika; Gale, Catharine R.; Deary, Ian J.
2009-01-01
This study investigated the possibility that apparent sex differences in IQ are at least partly created by the degree of sample restriction from the baseline population. We used a nationally representative sample, the 1970 British Cohort Study. Sample sizes varied from 6518 to 11,389 between data-collection sweeps. Principal components analysis of…
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
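A short simulation makes the direct-truncation mechanism concrete: selecting on the pretest shrinks its variance and attenuates the pretest-posttest correlation. The population correlation and selection rule below are assumptions for illustration, not the paper's parameters.

```python
# Direct range restriction: select the lowest-scoring quarter on the pretest
# and observe the attenuated pretest-posttest correlation in the selected group.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rho = 0.7                                    # assumed unrestricted correlation
pre = rng.normal(size=n)
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=n)

selected = pre < np.quantile(pre, 0.25)      # truncate: keep lowest 25% on pretest
r_full = np.corrcoef(pre, post)[0, 1]
r_sel = np.corrcoef(pre[selected], post[selected])[0, 1]
print(f"unrestricted r = {r_full:.2f}, restricted r = {r_sel:.2f}")
# The restricted r is markedly lower, which is why selected samples need larger
# sizes to reach the same power when the pretest serves as a covariate.
```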
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
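The multinomial-weighting idea is easy to sketch; the authors work in R, but the same vectorization carries over to NumPy. Below, bootstrap replications of the sample mean are obtained as weighted means, W @ x / n, without ever materializing resampled datasets; the data are simulated.

```python
# Vectorized multinomial bootstrap of a sample moment (the mean): each row of W
# holds multinomial counts that play the role of a resample with replacement.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=500)    # observed data (illustrative)
n, B = x.size, 10_000

# B x n matrix of bootstrap weights: each row sums to n.
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B)

boot_means = W @ x / n                           # all B replications at once
print(boot_means.std(ddof=1))                    # bootstrap SE of the mean
print(x.std(ddof=1) / np.sqrt(n))                # analytic SE, for comparison
```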
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
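The minimum sample sizes quoted above follow from combining Taylor's power law, s² = a·m^b, with a fixed-precision criterion D = SE/mean, which gives n = a·m^(b−2)/D². The sketch below uses placeholder Taylor coefficients (the study estimates a and b from its field counts) to show the characteristic decline of n with density when b < 2.

```python
# Minimum fixed-precision sample size implied by Taylor's power law
# (variance = a * mean^b), with precision D defined as SE/mean:
#   n = a * m^(b - 2) / D^2
# The coefficients a and b below are placeholders, not the study's estimates.
def min_sample_size(mean_density: float, a: float, b: float, D: float) -> float:
    return a * mean_density ** (b - 2.0) / D ** 2

for m in (1.0, 10.0):
    print(m, round(min_sample_size(m, a=2.0, b=1.4, D=0.25)))
# For aggregated populations with b < 2, required n drops as density rises,
# matching the pattern reported above (15 plants at 1 thrips/plant vs 9 at 10).
```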
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is accuracy assessment. Many statistical models have been developed to help design simple assessment schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
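The qualitative behaviour reported in the Results (larger n for tighter precision and for lower anticipated ED) can be reproduced with a much simpler normal-approximation heuristic than the authors' constrained Lagrange-multiplier scheme. The sketch below is that heuristic, with an assumed measurement standard deviation; it is not the paper's method.

```python
# Normal-approximation sample size for estimating a mean (here, ED) to a given
# relative precision and confidence: n = (z * sigma / (precision * ED))^2.
# sigma is an assumed per-scan measurement SD, not a value from the paper.
from math import ceil
from scipy.stats import norm

def n_for_precision(sigma_mSv: float, ed_mSv: float, rel_precision: float,
                    confidence: float = 0.95) -> int:
    z = norm.ppf(0.5 + confidence / 2.0)
    return ceil((z * sigma_mSv / (rel_precision * ed_mSv)) ** 2)

for ed in (4.0, 10.0):   # anticipated effective dose; sigma fixed at 1 mSv
    print(ed, n_for_precision(sigma_mSv=1.0, ed_mSv=ed, rel_precision=0.05))
# Required n falls as the anticipated ED rises, as in the paper's Results.
```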
Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.
Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather
2011-06-09
Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size to use in a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increase by, based on 90%, 70%, 50%, and 30% baseline rates (in a hypothetical study), before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rates presented in the questionnaire, participants wanted the recruitment rate to increase by between 6.9% and 28.9% before they would consider using the intervention. This paper has shown that in situations where effect size estimations cannot be obtained from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results of the survey were successfully used in sample size calculations for a PhD research study protocol.
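Once a target increase has been elicited this way, it feeds a standard sample size formula. The sketch below is an ordinary two-proportion calculation with illustrative recruitment rates, not figures taken from the survey.

```python
# Standard two-proportion sample size per arm for detecting a rise in a
# recruitment rate from p1 to p2 at the given alpha and power.
from math import ceil
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.50, 0.60))  # ~388 per arm to detect a 10-point rise
```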
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
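The MCP sample-size effect is easy to reproduce: the convex hull of n locations can only grow as points are added, so small location samples systematically underestimate the home range. The movement data below are simulated stand-ins for GPS fixes, not the bears' data.

```python
# Minimum-convex-polygon (convex hull) area as a function of the number of
# locations drawn from the same simulated utilization distribution.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
locations = rng.normal(scale=5.0, size=(400, 2))   # stand-in for GPS fixes (km)

for n in (15, 60, 200, 400):
    subset = locations[rng.choice(400, size=n, replace=False)]
    print(n, round(ConvexHull(subset).volume, 1))  # .volume is area in 2-D
# Area rises toward an asymptote with n, mirroring the MCP result above.
```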
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
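The computational core of the group Lasso step described above is groupwise soft-thresholding, applied block-by-block to the B-spline coefficient groups inside a proximal-gradient or blockwise-descent solver. A minimal NumPy version of that operator is sketched below; the surrounding solver and the adaptive weighting are omitted for brevity.

```python
# Groupwise soft-thresholding: the proximal operator of lam * ||b||_2 for one
# coefficient group, which either zeroes the whole group (component dropped)
# or shrinks it toward zero.
import numpy as np

def group_soft_threshold(z: np.ndarray, lam: float) -> np.ndarray:
    """Return argmin_b 0.5*||b - z||^2 + lam*||b||_2 for one group."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)      # the whole B-spline block is dropped
    return (1.0 - lam / norm) * z    # otherwise shrink the group

print(group_soft_threshold(np.array([3.0, 4.0]), lam=2.5))  # norm 5 -> 2.5
```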
Large exchange bias effect in NiFe2O4/CoO nanocomposites
NASA Astrophysics Data System (ADS)
Mohan, Rajendra; Prasad Ghosh, Mritunjoy; Mukherjee, Samrat
2018-03-01
In this work, we report the exchange bias effect of NiFe2O4/CoO nanocomposites synthesized via the chemical co-precipitation method. Four samples of different particle sizes, ranging from 4 nm to 31 nm, were prepared with annealing temperatures varying from 200 °C to 800 °C. X-ray diffraction analysis of all the samples confirmed the presence of the cubic spinel phase of nickel ferrite along with the CoO phase, with no trace of impurities. Particle sizes were studied from transmission electron micrographs and were found to be in agreement with those estimated from X-ray diffraction. Field cooled (FC) hysteresis loops at 5 K revealed an exchange bias (HE) of 2.2 kOe for the sample heated at 200 °C, which decreased with increasing particle size. Exchange bias expectedly vanished at 300 K due to high thermal energy (kBT) and low effective surface anisotropy. M-T curves revealed a blocking temperature of 135 K for the sample with the smallest particle size.
Microstructural and optical properties of Mn doped NiO nanostructures synthesized via sol-gel method
NASA Astrophysics Data System (ADS)
Shah, Shamim H.; Khan, Wasi; Naseem, Swaleha; Husain, Shahid; Nadeem, M.
2018-04-01
Undoped and Mn (0, 5%, 10%, and 15%) doped NiO nanostructures were synthesized by the sol-gel method. Structure, morphology, and optical properties were investigated through XRD, FTIR, SEM/EDS, and UV-visible absorption spectroscopy techniques. XRD data analysis reveals the single-phase nature and cubic crystal symmetry of the samples, and the average crystallite size decreases with Mn doping up to 10%. FTIR spectra further confirmed the purity and composition of the synthesized samples. The non-spherical shape of the nanostructures was observed in SEM micrographs, and the grain size of the nanostructures decreases with Mn doping in NiO, whereas agglomeration increases in the doped samples. The optical band gap was estimated using Tauc's relation and found to increase with incorporation of Mn up to 10% in the host lattice and then decrease with further doping.
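For reference, Tauc's relation in its direct-allowed-transition form (the exponent commonly applied to NiO, though the abstract does not state which form was used) is

\[ (\alpha h\nu)^{2} = A\,(h\nu - E_g), \]

where α is the absorption coefficient, hν the photon energy, and A a constant; E_g is obtained by extrapolating the linear region of (αhν)² versus hν to zero absorption.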
Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B
2008-06-01
To assess reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can provide reproducibility performance comparable to that commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.
Bedload Rating and Flow Competence Curves Vary With Watershed and Bed Material Parameters
NASA Astrophysics Data System (ADS)
Bunte, K.; Abt, S. R.
2003-12-01
Bedload transport rating curves and flow competence curves (largest bedload size for specified flow) are usually not known for streams unless a large number of bedload samples has been collected and analyzed. However, this information is necessary for assessing instream flow needs and stream responses to watershed effects. This study therefore analyzed whether bedload transport rating and flow competence curves were related to stream parameters. Bedload transport rating curves and flow competence curves were obtained from extensive bedload sampling in six gravel- and cobble-bed mountain streams. Samples were collected using bedload traps and a large net sampler, both of which provide steep and relatively well-defined bedload rating and flow competence curves due to a long sampling duration, a large sampler opening and a large sampler capacity. The sampled streams have snowmelt regimes, steep (1-9%) gradients, and watersheds that are mainly forested and relatively undisturbed with basin area sizes of 8 to 105 km2. The channels are slightly incised and can contain flows of more than 1.5 times bankfull with little overbank flow. Exponents of bedload rating and flow competence curves obtained from these measurements were found to systematically increase with basin area size and decrease with the degree of channel armoring. By contrast, coefficients of bedload rating and flow competence curves decreased with basin size and increased with armoring. All of these relationships were well-defined (0.86 < r2 < 0.99). Data sets from other studies in coarse-bedded streams fit the indicated trend if the sampling device used allows measuring bedload transport rates over a wide range and if bedload supply is somewhat low. The existence of a general positive trend between bedload rating curve exponents and basin area, and a negative trend between coefficients and basin area, is confirmed by a large data set of bedload rating curves obtained from Helley-Smith samples. However, in this case, the trends only become visible as basin area sizes span a wide range (1 - 10,000 km2). The well-defined relationships obtained from the bedload trap and the large net sampler suggest that exponents and coefficients of bedload transport rating curves (and flow competence curves) are predictable from an easily obtainable parameter such as basin size. However, the relationships of bedload rating curve exponents and coefficients with basin size and armoring appear to be influenced by the sampling device used and the watershed sediment production.
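Rating curves of the power-law form Q_b = a·Q^b, as discussed above, are conventionally fitted by linear regression in log-log space. The sketch below does this on synthetic data; the coefficients and discharges are illustrative, not the study's measurements.

```python
# Fit a bedload rating curve Qb = a * Q^b by ordinary least squares on
# log-transformed synthetic data.
import numpy as np

rng = np.random.default_rng(4)
Q = np.linspace(0.5, 5.0, 30)                       # discharge, m^3/s
Qb = 0.02 * Q ** 3.5 * rng.lognormal(0.0, 0.3, 30)  # noisy transport, kg/s

b, log_a = np.polyfit(np.log(Q), np.log(Qb), 1)     # slope = exponent b
print(f"exponent b = {b:.2f}, coefficient a = {np.exp(log_a):.3f}")
```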
Extreme grain size reduction in dolomite: microstructures and mechanisms.
NASA Astrophysics Data System (ADS)
Kennedy, L.; White, J. C.
2007-12-01
Pure dolomite samples were deformed at room temperature under a variety of confining pressures (0-100 MPa) to examine the processes of grain size reduction. The dolomite is composed of >97 vol.% dolomite with accessory quartz, calcite, tremolite, and muscovite, and has been metamorphosed to amphibolite facies and subsequently annealed. At the hand sample scale, the rock is isotropic, except for minor, randomly oriented tremolite porphyroblasts and weakly aligned muscovite. At the thin section scale, coarser grains have lobate grain boundaries and exhibit minor to no undulose extinction and few deformation twins, although well-developed subgrains are present. Growth twins are common, as is well-developed {1011} cleavage. Mean grain size is 476 microns, and porosity is essentially zero (Austin and Kennedy, 2006). Samples contain diagonal to subvertical faults. Fractures are lined with an exceptionally fine-grained, powdered dolomite. Even experiments performed at no confining pressure and stopped before sliding on the fracture surfaces occurred developed significant powdered gouge along those surfaces. In this regard, fracturing of low-porosity, pure dolomite with metamorphic textures (e.g. lobate, interlocking grain boundaries) results in the development of fine-grained gouge. As expected, the dolomite exhibited an increase in strength with increasing confining pressure, with a maximum differential stress of ~400 MPa at 100 MPa confining pressure. At each chosen confining pressure, two experiments were performed and stopped at different stages along the load-displacement curve: just before yield stress and at peak stress. Microstructures at each stage were observed in order to determine the possible mechanisms for extreme grain size reduction. SEM work shows that in samples with little to no apparent displacement along microfractures, extreme grain size reduction still occurs, suggesting that frictional sliding and subsequent cataclasis may not be the mechanism responsible. Within individual dolomite clasts, apparent Mode I cracks are also lined with powdered gouge. Alternative mechanisms for grain size reduction are explored. Austin et al. 2005, Geological Society, London, Special Publications, 243, 51-66.
Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.
Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides
2010-01-01
We investigated sampling methods being used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to the adequacy or not of the sample size to estimate HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples. The sample size was not adequate to estimate HIV prevalence rate in 35 studies. The use of convenience sample limits statistical inference for the whole group. It was observed that there was an increase in the number of published studies since 2005, as well as in the number of studies that used probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and HIV prevalence rate in this group.
Structural properties and gas sensing behavior of sol-gel grown nanostructured zinc oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajyaguru, Bhargav; Gadani, Keval; Kansara, S. B.
2016-05-06
In this communication, we report the results of studies on the structural properties and gas sensing behavior of nanostructured ZnO grown using an acetone-precursor-based modified sol-gel technique. The final ZnO product was sintered at different temperatures to vary the crystallite size, and the structural properties were studied using X-ray diffraction (XRD) measurements performed at room temperature. The XRD results suggest the single-phase nature of all the samples, with the crystallite size increasing from 11.53 to 20.96 nm with increasing sintering temperature. Gas sensing behavior was studied for acetone gas, which indicates that samples sintered at lower temperatures are more capable of sensing acetone; the underlying mechanism is discussed in the light of crystallite size, crystal boundary density, defect mechanisms, and the possible chemical reaction between gas traces and various oxygen species.
Cu-doped Cd1-xZnxS alloy: synthesis and structural investigations
NASA Astrophysics Data System (ADS)
Yadav, Indu; Ahlawat, Dharamvir Singh; Ahlawat, Rachna
2016-03-01
Copper-doped Cd1-xZnxS (x ≤ 1) quantum dots have been synthesized using the chemical co-precipitation method. Structural investigation of the synthesized nanomaterials has been carried out by the powder XRD method. The XRD results confirm that the as-prepared Cu-doped Cd1-xZnxS quantum dots have a hexagonal structure. The average nanocrystallite size was estimated in the range 2-12 nm using the Debye-Scherrer formula. The lattice constants, lattice planes, d-spacing, unit cell volume, Lorentz factor, and dislocation density were also calculated from the XRD data. A change in particle size was observed with the change in Zn concentration. Furthermore, FTIR spectra of the prepared samples were recorded to identify COO- and O-H functional groups. TEM study confirmed the same size range of nanoparticles. Increased agglomeration was observed with increasing Zn concentration in the prepared samples.
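The Debye-Scherrer estimate used above is D = Kλ/(β·cosθ). The sketch below assumes the common shape factor K = 0.9 and a Cu Kα source; the abstract does not state which values were used.

```python
# Scherrer crystallite size from an XRD peak's FWHM (in degrees 2-theta).
# K = 0.9 and the Cu K-alpha wavelength are typical assumptions.
from math import cos, radians

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    beta = radians(fwhm_deg)                 # peak FWHM in radians
    theta = radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * cos(theta))

print(round(scherrer_size_nm(fwhm_deg=1.6, two_theta_deg=26.5), 1))  # ~5 nm
```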
Porous silicon structures with high surface area/specific pore size
Northrup, M.A.; Yu, C.M.; Raley, N.F.
1999-03-16
Fabrication and use of porous silicon structures to increase surface area of heated reaction chambers, electrophoresis devices, and thermopneumatic sensor-actuators, chemical preconcentrates, and filtering or control flow devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
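For context, the analytic baseline that frequency-spectrum methods of this kind build on is the constant-size coalescent expectation (Fu 1995): for a sample of n sequences,

\[ \mathbb{E}[\xi_i] = \frac{\theta}{i}, \qquad i = 1, \dots, n-1, \]

where ξ_i counts the sites whose derived allele appears i times and θ is the population-scaled mutation rate. Piecewise-exponential histories distort this 1/i shape, and it is that distortion, differentiated with respect to the demographic parameters, that a gradient-based fit of the kind described here can exploit.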
Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.
Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi
2014-12-01
To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis is that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation using two wire sizes (Damon 0.014 and 0.018 inch) were measured with a sample size of 144. The distribution and variation in force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is less than would occur based on linear theory, as would apply for stainless steel wires. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.
Pituitary gland volumes in bipolar disorder.
Clark, Ian A; Mackay, Clare E; Goodwin, Guy M
2014-12-01
Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV) and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold. First, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls. Second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n; 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³, or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: .23, CI: -.14, .59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes greater even than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Siregar, N.; Indrayana, I. P. T.; Suharyadi, E.; Kato, T.; Iwata, S.
2017-05-01
Mn0.5Zn0.5Fe2O4 nanoparticles were successfully synthesized through the coprecipitation method, varying NaOH concentrations from 0.5 M to 6 M and synthesis temperatures from 30 to 120 °C. The X-ray diffraction (XRD) patterns indicate that the samples consist of multiple phases, including spinel Mn0.5Zn0.5Fe2O4, α-MnO2, ZnO, λ-MnO2, and γ-Fe2O3. The crystallite size of Mn0.5Zn0.5Fe2O4 is in the range of 14.1 to 26.7 nm. Transmission electron microscope (TEM) images show that the samples were agglomerated. The hysteresis loops confirm that the nanoparticles are soft magnetic materials with low coercivity (Hc) in the range of 45.9 to 68.5 Oe; these values increased with increasing particle size. Under the NaOH concentration variation, the maximum magnetization of the samples increased from 10.4 emu/g to 11.6 emu/g with increasing ferrite content, while under the temperature variation it increased from 7.9 to 15.7 emu/g. The highest coercivity of 68.5 Oe was attained for the sample prepared with 6 M NaOH at 90 °C. The highest magnetization of 15.7 emu/g was achieved for the sample prepared with 1.5 M NaOH at 120 °C, owing to its maximum crystallinity.
A fiber optic sensor for noncontact measurement of shaft speed, torque, and power
NASA Technical Reports Server (NTRS)
Madzsar, George C.
1990-01-01
A fiber optic sensor which enables noncontact measurement of the speed, torque and power of a rotating shaft was fabricated and tested. The sensor provides a direct measurement of shaft rotational speed and shaft angular twist, from which torque and power can be determined. Angles of twist between 0.005 and 10 degrees were measured. Sensor resolution is limited by the sampling rate of the analog to digital converter, while accuracy is dependent on the spot size of the focused beam on the shaft. Increasing the sampling rate improves measurement resolution, and decreasing the focused spot size increases accuracy. Digital processing allows for enhancement of an electronically or optically degraded signal.
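Converting the measured twist to torque uses the standard torsion relation T = GJφ/L, with power P = Tω; the material and geometry in the sketch below are illustrative assumptions, not the sensor hardware described above.

```python
# Torque from the measured angle of twist of a solid circular shaft,
# T = G * J * phi / L, and power P = T * omega at a given shaft speed.
from math import pi, radians

def shaft_torque_Nm(twist_deg: float, G_Pa: float, d_m: float, L_m: float) -> float:
    J = pi * d_m ** 4 / 32.0                 # polar moment of a solid shaft
    return G_Pa * J * radians(twist_deg) / L_m

# Assumed: steel shaft (G = 79 GPa), 50 mm diameter, twist measured over 0.5 m.
T = shaft_torque_Nm(twist_deg=0.1, G_Pa=79e9, d_m=0.05, L_m=0.5)
P = T * (2 * pi * 3000 / 60)                 # power at 3000 rpm, in watts
print(round(T, 1), "N*m,", round(P / 1e3, 1), "kW")
```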
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
Spatial studies of planetary nebulae with IRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkins, G.W.; Zuckerman, B.
1991-06-01
The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
Daszkiewicz, Karol; Maquer, Ghislain; Zysset, Philippe K
2017-06-01
Boundary conditions (BCs) and sample size affect the measured elastic properties of cancellous bone. Samples too small to be representative appear stiffer under kinematic uniform BCs (KUBCs) than under periodicity-compatible mixed uniform BCs (PMUBCs). To avoid those effects, we propose to determine the effective properties of trabecular bone using an embedded configuration. Cubic samples of various sizes (2.63, 5.29, 7.96, 10.58 and 15.87 mm) were cropped from µCT scans of femoral heads and vertebral bodies. They were converted into µFE models and their stiffness tensor was established via six uniaxial and shear load cases. PMUBCs- and KUBCs-based tensors were determined for each sample. "In situ" stiffness tensors were also evaluated for the embedded configuration, i.e. when the loads were transmitted to the samples via a layer of trabecular bone. The Zysset-Curnier model accounting for bone volume fraction and fabric anisotropy was fitted to those stiffness tensors, and the model parameters ν0 (Poisson's ratio) and ε0 and μ0 (elastic and shear moduli) were compared between sizes. BCs and sample size had little impact on ν0. However, the KUBCs- and PMUBCs-based ε0 and μ0, respectively, decreased and increased with growing size, though convergence was not reached even for our largest samples. Both BCs produced upper and lower bounds for the in situ values that were almost constant across sample dimensions, thus appearing as an approximation of the effective properties. PMUBCs also seem appropriate for mimicking the trabecular core, but they still underestimate its elastic properties (especially in shear), even for nearly orthotropic samples.
Microstructural changes in steel 10Kh9V2MFBR during creep for 40000 hours at 600°C
NASA Astrophysics Data System (ADS)
Fedoseeva, A. E.; Kozlov, P. A.; Dudko, V. A.; Skorobogatykh, V. N.; Shchenkova, I. A.; Kaibyshev, R. O.
2015-10-01
In this work, we have investigated microstructural changes in steel 10Kh9V2MFBR (an analog of P92 steel) after long-term creep tests at a temperature of 600°C under an initial stress of 137 MPa. The time to rupture was found to be more than 40000 h. It has been established that, in the zone of the grips and in the neck region of the sample, the size of the M23C6 carbide particles increases from 85 nm to 152 nm and 182 nm, respectively. In addition, large particles of the Laves phase, with an average size of 295 nm, precipitate. The particles of these phases are located along high-angle boundaries. During prolonged aging and creep, the transformation of the V-enriched M(C,N) particles into the Z phase occurs. The average size of Z-phase particles was 48 nm after prolonged aging and reached 97 nm after creep. The size of the Nb-enriched M(C,N) particles increases from 26 nm after tempering to 55 nm after prolonged aging and creep. It has been established that, in spite of an increase in the transverse size of the tempered martensite laths from 0.4 to 0.9 µm in the neck of the sample, the misorientation of the lath boundaries does not increase. No recrystallization processes were found to develop in the steel during creep.
Ratna Sunil, B; Sampath Kumar, T S; Chakkingal, Uday; Nandakumar, V; Doble, Mukesh; Devi Prasad, V; Raghunath, M
2016-02-01
The objective of the present work is to investigate the role of different grain sizes produced by equal channel angular pressing (ECAP) on the degradation behavior of a magnesium alloy using in vitro and in vivo studies. Commercially available AZ31 magnesium alloy was selected and processed by ECAP at 300°C for up to four passes using route Bc. Grain refinement from a starting size of 46 μm to a grain size distribution of 1-5 μm was successfully achieved after the 4th pass. Wettability of the ECAPed samples, assessed by contact angle measurements, was found to increase due to the fine grain structure. In vitro degradation and bioactivity of the samples, studied by immersion in supersaturated simulated body fluid (SBF 5×), showed rapid mineralization within 24 h due to the increased wettability of the fine-grained AZ31 Mg alloy. Corrosion behavior of the samples, assessed by weight loss and electrochemical tests conducted in SBF 5×, clearly showed the prominent role of enhanced mineral deposition on ECAPed AZ31 Mg in controlling abnormal degradation. Cytotoxicity studies by MTT colorimetric assay showed that cells remained viable on all samples. Additionally, cell adhesion was excellent on the ECAPed samples, particularly the 3rd and 4th pass samples. In vivo experiments conducted using New Zealand White rabbits clearly showed a lower degradation rate for the ECAPed sample compared with the annealed AZ31 Mg alloy, and all samples showed biocompatibility; no health abnormalities were noticed in the animals after 60 days of in vivo study. These results suggest that grain size plays an important role in the degradation management of magnesium alloys and that the ECAP technique can be adopted to achieve fine grain structures for developing degradable magnesium alloys for biomedical applications. Copyright © 2015 Elsevier B.V. All rights reserved.
DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher
2017-01-01
Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.
Sampling strategies for radio-tracking coyotes
Smith, G.J.; Cary, J.R.; Rongstad, O.J.
1981-01-01
Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed home-range size, reduced the time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.
Tan, Lingzhao; Fan, Chunyu; Zhang, Chunyu; von Gadow, Klaus; Fan, Xiuhua
2017-12-01
This study aims to establish a relationship between sampling scale and tree species beta diversity in temperate forests, and to identify the underlying causes of beta diversity at different sampling scales. The data were obtained from three large observational study areas in the Changbai mountain region in northeastern China. All trees with a dbh ≥1 cm were stem-mapped and measured. The beta diversity was calculated for four different grain sizes, and the associated variances were partitioned into components explained by environmental and spatial variables to determine the contributions of environmental filtering and dispersal limitation to beta diversity. The results showed that both beta diversity and its causes depended on the sampling scale. Beta diversity decreased with increasing scale. The explained variation in beta diversity reached about 60% in the secondary conifer and broad-leaved mixed forest (CBF) study area at the 40 × 40 m scale. The variation partitioning indicated that environmental filtering had greater effects at larger grain sizes, while dispersal limitation was more important at smaller grain sizes. Moreover, the explanatory ability of environmental effects increased with sampling grain, whereas spatial effects showed no clear trend. The study emphasizes that the underlying causes of beta diversity variation may be quite different within the same region depending on the sampling scale. Therefore, scale effects should be taken into account in future studies on beta diversity, which is critical for identifying the relative importance of spatial and environmental drivers of species composition variation.
Jennions, Michael D; Møller, Anders P
2002-01-01
Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published. PMID:11788035
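To make the reported test concrete, here is a hedged sketch, with synthetic numbers rather than the authors' 44 meta-analyses, of regressing effect size on publication year while weighting each study by its precision, so that a temporal decline can be assessed after controlling for sampling effort.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
year = rng.integers(1980, 2001, size=120)   # publication years
n = rng.integers(10, 300, size=120)         # per-study sample sizes
# Synthetic effect sizes with a built-in decline over time; sampling
# noise shrinks with n, roughly like a correlation coefficient's error
r = 0.4 - 0.005 * (year - 1980) + rng.normal(0.0, 1.0 / np.sqrt(n - 3))

X = sm.add_constant(year - 1980)
fit = sm.WLS(r, X, weights=n - 3).fit()     # precision weights ~ n - 3
print(fit.params, fit.pvalues)              # slope should be near -0.005
```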
Mächtle, W
1999-01-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of their utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
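The core of such sizing methods is Stokes' law applied in a centrifugal field. A minimal sketch follows; all parameter values are illustrative, and the turbidity-to-concentration correction via Mie theory described above is omitted.

```python
import math

def stokes_diameter(eta, rho_p, rho_f, omega, r0, r, t):
    """Diameter (m) of a sphere sedimenting from radius r0 to r in time t (s).

    eta: fluid viscosity (Pa*s); rho_p, rho_f: particle/fluid density (kg/m^3)
    omega: rotor angular velocity (rad/s). Derived from dr/dt = s*omega^2*r
    with the Stokes sedimentation coefficient s = d^2*(rho_p - rho_f)/(18*eta).
    """
    return math.sqrt(18.0 * eta * math.log(r / r0)
                     / ((rho_p - rho_f) * omega**2 * t))

omega = 40000 * 2 * math.pi / 60          # 40,000 rpm in rad/s
d = stokes_diameter(1.0e-3, 1100.0, 998.0, omega, 0.060, 0.065, 600.0)
print(f"~{d * 1e9:.0f} nm")               # ~37 nm for these example values
```

Note that the diameter scales as 1/sqrt(t), which is why an exponential speed ramp can sweep such a broad size range in a single run.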
Mills, Kathryn L; Goddings, Anne-Lise; Herting, Megan M; Meuwese, Rosa; Blakemore, Sarah-Jayne; Crone, Eveline A; Dahl, Ronald E; Güroğlu, Berna; Raznahan, Armin; Sowell, Elizabeth R; Tamnes, Christian K
2016-11-01
Longitudinal studies including brain measures acquired through magnetic resonance imaging (MRI) have enabled population models of human brain development, crucial for our understanding of typical development as well as neurodevelopmental disorders. Brain development in the first two decades generally involves early cortical grey matter volume (CGMV) increases followed by decreases, and monotonic increases in cerebral white matter volume (CWMV). However, inconsistencies regarding the precise developmental trajectories call into question the comparability of samples. This issue can be addressed by conducting a comprehensive study across multiple datasets from diverse populations. Here, we present replicable models for gross structural brain development between childhood and adulthood (ages 8-30 years) by repeating analyses in four separate longitudinal samples (391 participants; 852 scans). In addition, we address how accounting for global measures of cranial/brain size affects these developmental trajectories. First, we found evidence for continued development of both intracranial volume (ICV) and whole brain volume (WBV) through adolescence, albeit following distinct trajectories. Second, our results indicate that CGMV is at its highest in childhood, decreasing steadily through the second decade with deceleration in the third decade, while CWMV increases until mid-to-late adolescence before decelerating. Importantly, we show that accounting for cranial/brain size affects models of regional brain development, particularly with respect to sex differences. Our results increase confidence in our knowledge of the pattern of brain changes during adolescence, reduce concerns about discrepancies across samples, and suggest some best practices for statistical control of cranial volume and brain size in future studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Backhouse, Martin E
2002-01-01
A number of approaches to conducting economic evaluations could be adopted. However, some decision makers have a preference for wholly stochastic cost-effectiveness analyses, particularly if the sampled data are derived from randomised controlled trials (RCTs). Formal requirements for cost-effectiveness evidence have heightened concerns in the pharmaceutical industry that development costs and times might be increased if formal requirements increase the number, duration or costs of RCTs. Whether this proves to be the case or not will depend upon the timing, nature and extent of the cost-effectiveness evidence required. To illustrate how different requirements for wholly stochastic cost-effectiveness evidence could have a significant impact on two of the major determinants of new drug development costs and times, namely RCT sample size and study duration. Using data collected prospectively in a clinical evaluation, sample sizes were calculated for a number of hypothetical cost-effectiveness study design scenarios. The results were compared with a baseline clinical trial design. The sample sizes required for the cost-effectiveness study scenarios were mostly larger than those for the baseline clinical trial design. Circumstances can be such that a wholly stochastic cost-effectiveness analysis might not be a practical proposition even though its clinical counterpart is. In such situations, alternative research methodologies would be required. For wholly stochastic cost-effectiveness analyses, the importance of prior specification of the different components of study design is emphasised. However, it is doubtful whether all the information necessary for doing this will typically be available when product registration trials are being designed. Formal requirements for wholly stochastic cost-effectiveness evidence based on the standard frequentist paradigm have the potential to increase the size, duration and number of RCTs significantly and hence the costs and timelines associated with new product development. Moreover, it is possible to envisage situations where such an approach would be impossible to adopt. Clearly, further research is required into the issue of how to appraise the economic consequences of alternative economic evaluation research strategies.
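As a rough illustration of why economic endpoints can inflate trial size, the sketch below computes a per-arm sample size for comparing mean net monetary benefit (NMB) between two arms using the standard frequentist formula; the scenario numbers are invented and are not the paper's actual design scenarios.

```python
from scipy.stats import norm

def n_per_arm(delta_nmb, sd_nmb, alpha=0.05, power=0.90):
    """Per-arm n to detect a mean NMB difference delta_nmb (same units as sd)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_nmb / delta_nmb) ** 2

# Example: detect a 500 (currency-unit) NMB difference against an SD of 2500
print(round(n_per_arm(500, 2500)))   # ~525 per arm
```

Because cost data typically have large standard deviations relative to the difference of interest, the resulting n can sit well above that of the clinical baseline design.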
Fluorine-doped NiO nanostructures: Structural, morphological and spectroscopic studies
NASA Astrophysics Data System (ADS)
Singh, Kulwinder; Kumar, Manjeet; Singh, Dilpreet; Singh, Manjinder; Singh, Paviter; Singh, Bikramjeet; Kaur, Gurpreet; Bala, Rajni; Thakur, Anup; Kumar, Akshay
2018-05-01
Nanostructured NiO has been prepared by a co-precipitation method. In this study, the effect of fluorine doping (1, 3 and 5 wt. %) on the structural, morphological and optical properties of NiO nanostructures has been studied. X-ray diffraction (XRD) was employed to study the structural properties. The cubic crystal structure of NiO was confirmed by the XRD analysis. Crystallite size increased with increasing doping concentration. Nelson-Riley factor (NRF) analysis indicated the presence of defect states in the synthesized samples. Field emission scanning electron microscopy showed the spherical morphology of the synthesized samples and also revealed that the particle size varied with dopant content. The optical properties were studied using UV-Visible spectroscopy. The results indicated that the band gap energy of the synthesized nanostructures decreased with increasing doping concentration up to 3% but increased as the doping concentration was further raised to 5%. This can be ascribed to variations in the defect states of the synthesized samples. The results suggest that the synthesized nanostructures are promising candidates for optoelectronic as well as gas sensing applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anandakumar, U.; Webb, J.E.; Singh, R.N.
The matrix cracking behavior of a zircon matrix composite reinforced with uniaxial SCS-6 fibers was studied as a function of initial flaw size and temperature. The composites were fabricated by a tape casting and hot pressing technique. Surface flaws of controlled size were introduced using a Vickers indenter. The composite samples were tested in three-point flexure at three different temperatures to study the non-steady-state and steady-state matrix cracking behavior. The composite samples exhibited steady-state and non-steady-state matrix cracking behavior at all temperatures. The steady-state matrix cracking stress and steady-state crack size increased with increasing temperature. The results of the study correlated well with the results predicted by the matrix cracking models.
NASA Astrophysics Data System (ADS)
Yang, Kun Vanna; Lim, Chao Voon Samuel; Zhang, Kai; Sun, Jifeng; Yang, Xiaoguang; Huang, Aijun; Wu, Xinhua; Davies, Christopher H.
2015-12-01
Heat-treated Ti-6Al-4V forged bar with colony microstructure was machined into double-cone-shaped samples for a series of isothermal uniaxial compression tests at 1223 K (950 °C) with constant crosshead speeds of 12.5, 1.25, and 0.125 mm s-1 to a height reduction of 70 pct. Another set of samples deformed under the same conditions was heat treated at 1173 K (900 °C) for an hour followed by a water quench. Finite element modeling was used to provide the strains, strain rates, and temperature profiles of the hot compression samples, and the microstructure and texture evolution was examined at four positions on each sample, representative of different strain ranges. Lamellae fragmentation and kinking are the dominant microstructural features at the lower strain range, up to a maximum of 2.0, whereas globularization dominates at strains above 2.0 for the as-deformed samples. The globularization fraction generally increases with strain, or with post-deformation heat treatment, but fluctuates at lower strain. The grain size of the globular α is almost constant with strain and is largest for samples with the lowest crosshead speed due to the longer deformation time. The globular α grains also coarsen because of post-deformation heat treatment, with their size increasing with strain level. With respect to texture evolution, a basal transverse ring and another component 30 deg from ND are observed for samples deformed at 12.5 mm s-1, consistent with the simulated temperature increase to close to the β-transus. The texture type remains unchanged, with its intensity increasing and spreading with increasing strain.
Ferguson, Philip E.; Sales, Catherine M.; Hodges, Dalton C.; Sales, Elizabeth W.
2015-01-01
Background Recent publications have emphasized the importance of a multidisciplinary strategy for maximum conservation and utilization of lung biopsy material for advanced testing, which may determine therapy. This paper quantifies the effect of a multidisciplinary strategy implemented to optimize and increase tissue volume in CT-guided transthoracic needle core lung biopsies. The strategy was three-pronged: (1) once there was confidence diagnostic tissue had been obtained and if safe for the patient, additional biopsy passes were performed to further increase volume of biopsy material, (2) biopsy material was placed in multiple cassettes for processing, and (3) all tissue ribbons were conserved when cutting blocks in the histology laboratory. This study quantifies the effects of strategies #1 and #2. Design This retrospective analysis comparing CT-guided lung biopsies from 2007 and 2012 (before and after multidisciplinary approach implementation) was performed at a single institution. Patient medical records were reviewed and the main variables analyzed included biopsy sample size, radiologist, number of blocks submitted, diagnosis, and complications. The biopsy sample size measured was considered to be directly proportional to tissue volume in the block. Results Biopsy sample size increased 2.5 fold, with the average total biopsy sample size increasing from 1.0 cm (0.9–1.1 cm) in 2007 to 2.5 cm (2.3–2.8 cm) in 2012 (P<0.0001). The improvement was statistically significant for each individual radiologist. During the same time, the rate of pneumothorax requiring chest tube placement decreased from 15% to 7% (P = 0.065). No other major complications were identified. The proportion of tumor within the biopsy material was similar at 28% (23%–33%) and 35% (30%–40%) for 2007 and 2012, respectively. The number of cases with at least two blocks available for testing increased from 10.7% to 96.4% (P<0.0001). Conclusions This multidisciplinary strategy applied to CT-guided lung biopsies was effective in significantly increasing tissue volume and the number of blocks available for advanced diagnostic testing. PMID:26479367
Synthesis and magnetic properties of NiFe2-xSmxO4 nanopowder
NASA Astrophysics Data System (ADS)
Hassanzadeh-Tabrizi, S. A.; Behbahanian, Shahrzad; Amighian, Jamshid
2016-07-01
NiFe2-xSmxO4 (x=0.00, 0.05, 0.10 and 0.15) nanopowders were synthesized via a sol-gel combustion route. The structural studies were carried out by X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy. The XRD results confirmed the formation of a single-phase spinel cubic structure. The crystallite size decreased with an increase of samarium ion concentration, while the lattice parameter and lattice strain increased with samarium substitution. TEM micrographs showed that agglomerated nanoparticles with particle sizes ranging from 35 to 90 nm were obtained. The magnetic studies were carried out using a vibrating sample magnetometer. Magnetic measurements revealed that the saturation magnetization (Ms) of NiFe2-xSmxO4 nanoparticles decreases with increasing Sm3+ substitution. The reduction of saturation magnetization is attributed to the dilution of the magnetic interaction. The coercivity (Hc) of the samples increases with samarium addition.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10⁸ ≤ M* ≤ 3 × 10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
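To ground the terminology, here is a minimal sketch of the two schemes under simplified definitions (sampled positions only, with no index construction; the authors' implementations may differ): fixed sampling keeps every w-th k-mer start, while minimizer sampling keeps the lexicographically smallest k-mer in each window of w consecutive k-mers.

```python
def fixed_sampling(seq, k, w):
    """Positions of k-mers kept by fixed sampling: every w-th start."""
    return {i for i in range(0, len(seq) - k + 1, w)}

def minimizer_sampling(seq, k, w):
    """Positions kept by minimizer sampling: smallest k-mer per window."""
    kept = set()
    for start in range(len(seq) - k - w + 2):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kept.add(min(window)[1])          # position of the smallest k-mer
    return kept

seq = "ACGTACGTGGTACCACGT"
print(len(fixed_sampling(seq, 5, 4)), len(minimizer_sampling(seq, 5, 4)))
```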
Bi, Xiufang; Hemar, Yacine; Balaban, Murat O; Liao, Xiaojun
2015-11-01
The effect of ultrasound treatment on particle size, color, viscosity, polyphenol oxidase (PPO) activity and microstructure in diluted avocado puree was investigated. The treatments were carried out at 20 kHz (375 W/cm²) for 0-10 min. The surface mean diameter (D[3,2]) was reduced to 13.44 μm from an original value of 52.31 μm by ultrasound after 1 min. Higher L* and ΔE values and a lower a* value were observed in ultrasound-treated samples. The avocado puree dilution followed pseudoplastic flow behavior, and the viscosity of diluted avocado puree (at 100 s⁻¹) after ultrasound treatment for 1 min was 6.0 and 74.4 times higher than that of the control samples for dilution levels of 1:2 and 1:9, respectively. PPO activity greatly increased under all treatment conditions. A maximum increase of 25.1%, 36.9% and 187.8% in PPO activity was found in samples with dilution ratios of 1:2, 1:5 and 1:9, respectively. The increase in viscosity and measured PPO activity might be related to the decrease in particle size. The microscopy images further confirmed that ultrasound treatment induced disruption of the avocado puree structure. Copyright © 2015 Elsevier B.V. All rights reserved.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals of quantiles, which indicate the prediction accuracy, for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase with return period and decrease with sample size. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
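A hedged sketch of the kind of Monte Carlo experiment described is given below. It simulates Hosking's generalized logistic distribution via its quantile function x(F) = ξ + (α/κ)[1 − ((1−F)/F)^κ] and tracks the relative RMSE of an estimated T-year quantile as sample size grows; for simplicity it uses the empirical quantile rather than MOM, ML, or PWM fits, and the parameter values are illustrative.

```python
import numpy as np

def gl_quantile(F, xi=0.0, alpha=1.0, kappa=-0.1):
    """Hosking's generalized logistic quantile function."""
    return xi + alpha / kappa * (1.0 - ((1.0 - F) / F) ** kappa)

rng = np.random.default_rng(1)
T = 100                                   # return period (years)
F = 1.0 - 1.0 / T                         # non-exceedance probability
true_q = gl_quantile(F)

for n in (20, 50, 100, 500):
    # Inverse-transform sampling, then an empirical quantile estimate
    est = np.array([np.quantile(gl_quantile(rng.uniform(size=n)), F)
                    for _ in range(2000)])
    rrmse = np.sqrt(np.mean((est - true_q) ** 2)) / true_q
    print(f"n={n:4d}  RRMSE={rrmse:.3f}")  # shrinks as n grows
```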
Rosner, Bernard; Colditz, Graham A.
2011-01-01
Purpose Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some postmenopausal risk factors (height and alcohol use) were weaker in the expanded model, the estimate for use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making results applicable to women with hysterectomy, and reduces bias. PMID:21441037
Ferromagnetism appears in nitrogen implanted nanocrystalline diamond films
NASA Astrophysics Data System (ADS)
Remes, Zdenek; Sun, Shih-Jye; Varga, Marian; Chou, Hsiung; Hsu, Hua-Shu; Kromka, Alexander; Horak, Pavel
2015-11-01
Nanocrystalline diamond films become ferromagnetic after implantation with various doses of nitrogen. We confirm the room-temperature ferromagnetism of the implanted samples through measurements of magnetic circular dichroism (MCD) and with a superconducting quantum interference device (SQUID). Samples with larger crystalline grains as well as higher implanted doses present more robust ferromagnetic signals at room temperature. Raman spectra indicate that the small grain-sized samples are much more disordered than the large grain-sized ones. We propose that a slightly larger saturated ferromagnetism is observed at low temperature because the increased localization effects have a significant impact on the more disordered structure.
Standardized mean differences cause funnel plot distortion in publication bias assessments.
Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E
2017-09-08
Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
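A small simulation makes the mechanism concrete (illustrative, not the authors' datasets): because the conventional SE of an SMD is a function of the SMD itself, chance variation in d feeds into its SE, so a funnel of SMD against SE tilts even when no publication bias is present.

```python
import numpy as np

rng = np.random.default_rng(7)
true_d, n = 1.0, 10                       # small studies, genuine effect
d_vals, se_vals = [], []
for _ in range(5000):
    a = rng.normal(true_d, 1.0, n)        # treatment arm
    b = rng.normal(0.0, 1.0, n)           # control arm
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (a.mean() - b.mean()) / sp        # Cohen's d
    # Conventional SE of an SMD with equal arms; note the d**2 term,
    # which couples the SE to the estimated effect itself
    se = np.sqrt(2 / n + d**2 / (4 * n))
    d_vals.append(d); se_vals.append(se)

print(np.corrcoef(d_vals, se_vals)[0, 1])  # clearly positive => asymmetry
```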
Estimating population sizes for elusive animals: the forest elephants of Kakum National Park, Ghana.
Eggert, L S; Eggert, J A; Woodruff, D S
2003-06-01
African forest elephants are difficult to observe in the dense vegetation, and previous studies have relied upon indirect methods to estimate population sizes. Using multilocus genotyping of noninvasively collected samples, we performed a genetic survey of the forest elephant population at Kakum National Park, Ghana. We estimated population size, sex ratio and genetic variability from our data, then combined this information with field observations to divide the population into age groups. Our population size estimate was very close to that obtained using dung counts, the most commonly used indirect method of estimating the population sizes of forest elephant populations. As their habitat is fragmented by expanding human populations, management will be increasingly important to the persistence of forest elephant populations. The data that can be obtained from noninvasively collected samples will help managers plan for the conservation of this keystone species.
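For readers unfamiliar with genotype-based population estimation, the sketch below shows one common mark-recapture estimator applied to unique genotypes from two collection sessions; the counts are hypothetical and the authors' actual estimator may differ.

```python
def chapman_estimate(n1, n2, m):
    """Chapman-corrected Lincoln-Petersen population estimate.

    n1, n2: unique genotypes detected in sessions 1 and 2
    m: genotypes detected in both sessions ("recaptures")
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(round(chapman_estimate(n1=80, n2=75, m=30)))  # hypothetical -> ~198
```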
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, together with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide should establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions, such as extreme pressures associated with small sample sizes.
Role of CaCO3 and Charcoal Application on Organic Matter Retention in Silt-sized Aggregates
NASA Astrophysics Data System (ADS)
Berhe, A. A.; Kaiser, M.; Ghezzehei, T.; Myrold, D.; Kleber, M.
2011-12-01
The effectiveness of charcoal and calcium carbonate (CaCO3) applications to improve soil conditions has been well documented. However, their influence on the formation of silt-sized aggregates, and on the amount and protection of associated organic matter (OM) against microbial decomposition under differing soil mineralogical and microbiological conditions, is still unknown. For sustainable management of agricultural soils, silt-sized aggregates (2-50 μm) are of particularly large importance because they store up to 60% of soil organic carbon, with mean residence times between 70 and 400 years. The objectives of this study are i) to analyze the ability of soil amendments (CaCO3, charcoal and their combined application) to increase the amount of silt-sized aggregates and associated organic matter, ii) to vary soil mineral conditions to establish relevant boundary conditions for the amendment-induced aggregation process, and iii) to determine how amendment-induced changes in the formation of silt-sized aggregates relate to microbial decomposition of OM. We set up artificial highly reactive (clay: 40%, sand: 57%, SOM: 3%) and weakly reactive soils (clay: 10%, sand: 89%, SOM: 1%) and mixed them with charcoal (1%) and/or CaCO3 (0.2%). The samples were adjusted to a water potential of 0.3 bar using a nutrient solution, and subsamples were incubated with a microbial inoculum. After four months, silt-sized aggregates were separated by a combination of wet-sieving and sedimentation. We hypothesize that the relative increase in the amount of silt-sized aggregates and associated OM is larger for less reactive soils than for highly reactive soils because of a relatively larger increase in binding agents on addition of charcoal and/or CaCO3 in less reactive soils. The effect of charcoal and/or CaCO3 application on the amount of silt-sized aggregates and associated OM is expected to increase with an increase in microbial activity. Among the different treatments, we expect the incubated charcoal+CaCO3 combination to have the largest effect on silt-size scale aggregation processes because the amounts of microbially derived cementing agents, charcoal-derived OM containing functional groups, and Ca2+ ions are enhanced at the same time.
Samples in applied psychology: over a decade of research in review.
Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S
2011-09-01
This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved
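As a quick check on what a median N of roughly 173 buys, this sketch computes approximate power for a two-sided test of a correlation using the Fisher z approximation; the effect-size values are illustrative rather than taken from the review.

```python
from math import sqrt, atanh
from scipy.stats import norm

def power_r(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 (Fisher z)."""
    z = atanh(r) * sqrt(n - 3)            # expected test statistic
    crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(crit - z) + norm.cdf(-crit - z)

for r in (0.1, 0.2, 0.3):
    print(f"r={r}: power at N=173 = {power_r(r, 173):.2f}")
```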
Microstructural development of cobalt ferrite ceramics and its influence on magnetic properties
NASA Astrophysics Data System (ADS)
Kim, Gi-Yeop; Jeon, Jae-Ho; Kim, Myong-Ho; Suvorov, Danilo; Choi, Si-Young
2013-11-01
The microstructural evolution and its influence on magnetic properties in cobalt ferrite were investigated. The cobalt ferrite powders were prepared via a solid-state reaction route and then sintered at 1200 °C for 1, 2, and 16 h in air. The sintered samples exhibited a bimodal grain-size distribution, which is associated with abnormal grain growth. With increasing sintering time, the number and size of abnormal grains increased while the matrix grains were frozen with stagnant grain growth. In the sample sintered for 16 h, all of the matrix grains were consumed and the abnormal grains consequently impinged on each other. With the appearance of abnormal grains, the magnetic coercivity significantly decreased from 586.3 Oe (1 h sintered sample) to 168.3 Oe (16 h sintered sample). This is because the magnetization in abnormal grains is easily flipped. To achieve high magnetic coercivity in cobalt ferrite, it is thus imperative to fabricate a fine and homogeneous microstructure.
NASA Astrophysics Data System (ADS)
Manjili, Mohsen Hajipour; Halali, Mohammad
2018-02-01
Samples of INCONEL 718 were levitated and melted in a slag by the application of an electromagnetic field. The effects of temperature, time, and slag composition on the inclusion content of the samples were studied thoroughly. Samples were compared with the original alloy to study the effect of the process on inclusions. The size, shape, and chemical composition of the remaining non-metallic inclusions were investigated. The samples were prepared by the Standard Guide for Preparing and Evaluating Specimens for Automatic Inclusion Assessment of Steel (ASTM E 768-99) method and the results were reported by means of the Standard Test Methods for Determining the Inclusion Content of Steel (ASTM E 45-97). Results indicated that by increasing temperature and processing time, a greater level of cleanliness could be achieved, and the number and size of the remaining inclusions decreased significantly. It was also observed that increasing the calcium fluoride content of the slag helped reduce the inclusion content.
Selbig, William R; Bannerman, Roger T
2011-04-01
A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.
Mesh-size effects on drift sample composition as determined with a triple net sampler
Slack, K.V.; Tilley, L.J.; Kennelly, S.S.
1991-01-01
Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 μm, 209 μm and 106 μm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 μm and 209 μm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 μm and 106 μm samples and midday 425 μm samples. Large drifters (Ephemerellidae) occurred only in 425 μm or 209 μm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 μm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could defray some of the costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand may be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter-long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D, which had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentages of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa.
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average of 6.8 percentage points.
NASA Astrophysics Data System (ADS)
Syazwan, M. M.; Hapishah, A. N.; Azis, R. S.; Abbas, Z.; Hamidon, M. N.
2018-06-01
The effect of grain growth via sintering temperature on some magnetic properties is reported in this research. Ni0.6Zn0.4Fe2O4 nanoparticles were mechanically alloyed for 6 h and then sintered from 600 to 1200 °C in 25 °C increments, with a single sample subjected to the entire sintering scheme. The resulting change in the material was observed after each sintering step. A single phase formed at 600 °C and above, and the peak intensities, and hence the crystallinity, increased with sintering temperature. The morphological studies showed grain size increasing as the sintering temperature increased. Moreover, the density increased while the porosity decreased with increasing sintering temperature. The saturation induction, Bs, increased with increasing grain size. On the other hand, the coercivity-vs-grain size plot reveals the critical single-domain-to-multidomain grain size to be about 400 nm. The initial permeability, μi, also increased with grain size. The microstructural grain growth, as revealed for the first time by this research, is a process governed by multiple activation energy barriers.
Geochemical and radiological characterization of soils from former radium processing sites.
Landa, E R
1984-02-01
Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (less than 45 μm) fraction, the pattern was not always one of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.
On the Kaolinite Floc Size at the Steady State of Flocculation in a Turbulent Flow
Zhu, Zhongfan; Wang, Hongrui; Yu, Jingshan; Dou, Jie
2016-01-01
The flocculation of cohesive fine-grained sediment plays an important role in the transport characteristics of pollutants and nutrients absorbed on the surface of sediment in estuarine and coastal waters through the complex processes of sediment transport, deposition, resuspension and consolidation. Many laboratory experiments have been carried out to investigate the influence of different flow shear conditions on the floc size at the steady state of flocculation in the shear flow. Most of these experiments reported that the floc size decreases with increasing shear stresses and used a power law to express this dependence. In this study, we performed a Couette-flow experiment to measure the size of the kaolinite floc through sampling observation and an image analysis system at the steady state of flocculation under six flow shear conditions. The results show that the negative correlation of the floc size on the flow shear occurs only at high shear conditions, whereas at low shear conditions, the floc size increases with increasing turbulent shear stresses regardless of electrolyte conditions. Increasing electrolyte conditions and the initial particle concentration could lead to a larger steady-state floc size. PMID:26901652
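The power law mentioned above is typically fitted in log space. The sketch below, using invented data for the high-shear branch where floc size declines with shear rate G, recovers the exponent by linear regression of log D on log G.

```python
import numpy as np

# Hypothetical steady-state floc sizes on the high-shear branch
G = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # shear rate, 1/s
D = np.array([400., 330., 270., 215., 175., 140.])   # floc size, microns

# Fit D = a * G**b  <=>  log D = log a + b * log G
b, log_a = np.polyfit(np.log(G), np.log(D), 1)
print(f"D ~ {np.exp(log_a):.0f} * G^{b:.2f}")         # negative exponent b
```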
Girgis, E; Portugal, R D; Loosvelt, H; Van Bael, M J; Gordon, I; Malfait, M; Temst, K; Van Haesendonck, C; Leunissen, L H A; Jonckheere, R
2003-10-31
Magnetization reversal was studied in square arrays of square Co/CoO dots with lateral size varying between 200 and 900 nm. While reference nonpatterned Co/CoO films show the typical shift and increased width of the hysteresis loop due to exchange bias, the patterned samples reveal a pronounced size dependence. In particular, an anomaly appears in the upper branch of the magnetization cycle and becomes stronger as the dot size decreases. This anomaly, which is absent at room temperature in the patterned samples, can be understood in terms of a competition between magnetostatic interdot interaction and exchange anisotropy during the magnetic switching process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has shown that the yield strength of metals increases steeply with decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Study design requirements for RNA sequencing-based breast cancer diagnostics.
Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias
2016-02-01
Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
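The subsampling design described above can be sketched as a learning curve. This minimal example uses a synthetic dataset and logistic regression as stand-ins for RNA-seq profiles and the study's classifiers; the sizes and names are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch: accuracy as a function of training sample size via
# repeated random subsampling. Data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1200, n_features=200,
                           n_informative=30, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=400,
                                                  random_state=0)

for n in [50, 100, 200, 350, 750]:          # training sizes mirroring the text
    accs = []
    for seed in range(20):                   # repeated random subsamples
        idx = np.random.RandomState(seed).choice(len(X_pool), size=n,
                                                 replace=False)
        clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
        accs.append(clf.score(X_test, y_test))
    print(n, round(float(np.mean(accs)), 3))
```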
NASA Astrophysics Data System (ADS)
Bitar, Z.; El-Said Bakeer, D.; Awad, R.
2017-07-01
Zinc-cobalt nanoferrites doped with praseodymium, Zn0.5Co0.5Fe2-xPrxO4 (0 ≤ x ≤ 0.2), were prepared by the co-precipitation method from an aqueous solution containing metal chlorides and two concentrations of poly(vinylpyrrolidone) (PVP), 0 and 30 g/L, as capping agent. The samples were characterized using X-ray powder diffraction (XRD), transmission electron microscopy (TEM), UV-visible optical spectroscopy, Fourier transform infrared (FTIR) spectroscopy and electron paramagnetic resonance (EPR). XRD results show the formation of a cubic spinel structure with space group Fd3m, and the lattice parameter (a) decreases slightly for the PVP-capped samples. The particle size determined by TEM decreases for the PVP-capped samples. The optical band gap energy Eg increases for the PVP-capped samples, confirming the variation of the energy gap with particle size. The FTIR results indicate that the metal-oxide bands shift for the PVP-capped samples. EPR data show that PVP addition increases the magnetic resonance field and hence decreases the g-factor.
NASA Astrophysics Data System (ADS)
Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten
2017-04-01
Several Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by targeted variation of solid content and temperature during dispersion, resulting in different stress intensities and thus different final agglomerate sizes during the process. The suspension viscosity was used to estimate the stress energy in laminar shear flow. Agglomerate size measurements are performed via dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is performed with at least seven samples per agglomerate size and test method. Variation of the solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on the Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with decreasing agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscale agglomerates or particles, which lead to embrittlement of the composite material, nanoscale agglomerates or particles leave the composite elongation nearly at the level of the base material. The observed effect is valid for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.
Olives, Casey; Valadez, Joseph J; Pagano, Marcello
2014-03-01
To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation with curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
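A Monte Carlo sketch of the bias the authors address, under one plausible reading of a curtailed binomial design (sampling stops as soon as the classification can no longer change). The n=60 sample and decision rule of 9 follow the text; the stopping semantics and the true coverage value are assumptions for illustration.

```python
# Hedged sketch: under curtailed sampling, the naive proportion estimate
# (successes / number sampled) is biased. Design values follow the text;
# the stopping rule interpretation and true coverage are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d, p_true = 60, 9, 0.20
naive = []
for _ in range(20000):
    succ = fail = 0
    # stop once the classification is decided: succ > d ("pass") or
    # fail >= n - d (succ can no longer exceed d, so "fail")
    while succ <= d and fail < n - d:
        if rng.random() < p_true:
            succ += 1
        else:
            fail += 1
    naive.append(succ / (succ + fail))
print("true p:", p_true, " mean naive estimate:", round(float(np.mean(naive)), 3))
```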
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
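A hedged comparison in the spirit of the study: SVR versus an MLP regressor trained on a deliberately small sample. The synthetic "log" features, response, and hyperparameters are placeholders, not the reservoir data or the tuned models.

```python
# Hedged sketch: SVR vs MLP on a 30-sample training set. All data and
# hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                     # six synthetic "well log" inputs
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0.0, 0.2, size=400)

# small training set to mimic the few-cores setting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=30, random_state=0)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
for name, model in [("SVR", svr), ("MLP", mlp)]:
    print(name, round(mean_squared_error(y_te, model.predict(X_te)), 3))
```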
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, confounds of error variance with GRR, and parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., the first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies.
Effect of high pressure processing on dispersive and aggregative properties of almond milk.
Dhakal, Santosh; Giusti, M Monica; Balasubramaniam, V M
2016-08-01
A study was conducted to investigate the impact of high pressure (450 and 600 MPa at 30 °C) and thermal (72, 85 and 99 °C at 0.1 MPa) treatments on the dispersive and aggregative characteristics of almond milk. Experiments were conducted using a kinetic pressure testing unit and a water bath. Particle size distribution, microstructure, UV absorption spectra, pH and color changes of processed and unprocessed samples were analyzed. Raw almond milk exhibited a monomodal particle size distribution with average particle diameters of 2 to 3 µm. Thermal or pressure treatment of almond milk shifted the particle size distribution toward the right and increased particle size five- to six-fold. Micrographs confirmed that both treatments increased particle size through aggregation of macromolecules. Pressure treatment produced relatively more, and larger, aggregates than heat treatment. The apparent aggregation rate constants for the 450 MPa and 600 MPa processed samples were k450MPa,30°C = 0.0058 s(-1) and k600MPa,30°C = 0.0095 s(-1), respectively. This study showed that the dispersive and aggregative properties of high-pressure- and heat-treated almond milk differ owing to differences in protein denaturation, particle coagulation and aggregate morphology. Knowledge gained from the study will help food processors formulate novel plant-based beverages treated with high pressure. © 2015 Society of Chemical Industry.
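To make the quoted rate constants concrete, a sketch of fitting an apparent first-order aggregation model, d(t) = d_inf - (d_inf - d0)·exp(-kt), to mean diameter versus treatment time; the model form and every data point are assumptions, not the authors' kinetic analysis.

```python
# Hedged sketch: estimate an apparent first-order aggregation rate constant
# k from invented mean-diameter-vs-time data (placeholders, not the study's).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 60.0, 120.0, 240.0, 480.0])   # treatment time, s (hypothetical)
d = np.array([2.5, 6.0, 9.5, 12.5, 14.0])        # mean diameter, um (hypothetical)

def growth(t, d0, d_inf, k):
    return d_inf - (d_inf - d0) * np.exp(-k * t)

(d0, d_inf, k), _ = curve_fit(growth, t, d, p0=(2.5, 15.0, 0.005))
print(f"apparent rate constant k = {k:.4f} 1/s")
```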
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
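A sketch of the two strategies compared above, transplanted to a simple contingency-table chi-square for brevity (the study used measurement models): strategy (a) rescales the full-sample statistic by the ratio of the adjusted to the actual sample size, exploiting the rough proportionality of the chi-square statistic to N; strategy (b) recomputes the statistic on an actual random subsample. All data are synthetic.

```python
# Hedged sketch of sample-size adjustment vs an actual random subsample,
# using a contingency-table test on synthetic data as a stand-in.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
N = 21000
x = rng.integers(0, 2, size=N)
y = np.where(rng.random(N) < 0.52, x, 1 - x)       # weak association with x

chi2_full = chi2_contingency(np.histogram2d(x, y, bins=2)[0],
                             correction=False)[0]
n_adj = 5000
chi2_scaled = chi2_full * n_adj / N                # (a) adjusted-sample-size statistic
idx = rng.choice(N, size=n_adj, replace=False)     # (b) actual random subsample
chi2_sub = chi2_contingency(np.histogram2d(x[idx], y[idx], bins=2)[0],
                            correction=False)[0]
print("scaled:", round(chi2_scaled, 1), " subsample:", round(chi2_sub, 1))
```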
Estimating individual glomerular volume in the human kidney: clinical perspectives.
Puelles, Victor G; Zimanyi, Monika A; Samuel, Terence; Hughson, Michael D; Douglas-Denton, Rebecca N; Bertram, John F; Armitage, James A
2012-05-01
Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin's concordance coefficient (R(C)), coefficient of variation (CV) and coefficient of error (CE) measured reliability. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (R(C) > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution.
Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.
Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results showing that the performance degradation follows a logistic function.
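The closing observation, that degradation follows a logistic function of sample size, can be sketched by fitting a logistic curve to (sample fraction, relative performance) pairs; the data points below are invented for illustration.

```python
# Hedged sketch: fit a logistic curve to hypothetical degradation data.
import numpy as np
from scipy.optimize import curve_fit

frac = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 1.0])   # sample fraction
perf = np.array([0.08, 0.15, 0.55, 0.80, 0.93, 0.97, 0.99, 1.0])  # relative performance

def logistic(x, L, k, x0):
    return L / (1.0 + np.exp(-k * (x - x0)))

(L, k, x0), _ = curve_fit(logistic, frac, perf, p0=(1.0, 10.0, 0.2))
print(f"performance ~ {L:.2f} / (1 + exp(-{k:.1f} * (f - {x0:.2f})))")
```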
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
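The kind of sample size calculation the authors found lacking can be done in a few lines; the Cohen's d below is a placeholder, not the effect size derived in the review (which implied roughly 15 animals per group).

```python
# Hedged sketch of a two-group power calculation. The effect size is an
# assumed illustrative value, not the review's estimate.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=1.1,  # hypothetical d
                                          power=0.80, alpha=0.05,
                                          alternative="two-sided")
print(round(n_per_group), "animals per group")
```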
NASA Astrophysics Data System (ADS)
Tsubokawa, Yumiko; Ishikawa, Masahiro
2017-09-01
Graphite-bearing polycrystalline olivine and polycrystalline clinopyroxene with submicron to micron grain sizes were successfully sintered from a single crystal of naturally occurring olivine (Fo88-92Fa12-8: Mg1.76-1.84Fe0.16-0.24SiO4) and a single crystal of naturally occurring clinopyroxene (Di99Hed1: Ca0.92Na0.07Mn0.01Mg0.93Fe0.01Al0.06Si2O6). The milled powders of both crystals were sintered under flowing argon at temperatures ranging from 1130 to 1350 °C for 2 h. As the sintering temperature increased, the average grain size of olivine increased from 0.2 to 1.4 µm and that of clinopyroxene from 0.1 to 2.4 µm. The porosity of the sintered samples remained almost constant at 2-5 vol% for olivine and 3-4 vol% for clinopyroxene. Samples sintered from powders milled with ethanol exhibited trace amounts of graphite, identified by Raman spectroscopy. As the sintering temperature increased, the intensity of the graphite Raman peak decreased relative to both the olivine and clinopyroxene peaks. The carbon content of the sintered samples was estimated at a few hundred ppm. The in-plane size (La) of graphite in the sintered olivine was estimated to be <15 nm. Our experiments demonstrate new possibilities for preparing graphite-bearing silicate mantle-mineral rocks, and this method might be useful in understanding the influence of the physical properties of graphite on grain-size-sensitive rheology or the seismic velocity of the Earth's mantle.
Junno, Juho-Antti; Niskanen, Markku; Maijanen, Heli; Holt, Brigitte; Sladek, Vladimir; Niinimäki, Sirpa; Berner, Margit
2018-02-01
The stature/bi-iliac breadth method provides reasonably precise skeletal frame size (SFS)-based body mass (BM) estimations across adults as a whole. In this study, we examine the potential effects of age changes in anthropometric dimensions on the accuracy of SFS-based body mass estimation. We use anthropometric data from the literature and our own skeletal data from two osteological collections to study the effects of age on stature, bi-iliac breadth, body mass, and body composition, as these are the major components behind body size and body size estimation. We focus on males, as the relevant longitudinal data are based on male study samples. As a general rule, lean body mass (LBM) increases through adolescence and early adulthood until people are in their 30s or 40s, and starts to decline in the late 40s or early 50s. Fat mass (FM) tends to increase until the mid-50s and decline thereafter, but in more mobile traditional societies it may decline throughout adult life. Because BM is the sum of LBM and FM, it exhibits a curvilinear age-related pattern in all societies. Skeletal frame size is based on stature and bi-iliac breadth, and both of these dimensions are affected by age. SFS-based body mass estimates tend to increase throughout adult life in both skeletal and anthropometric samples because an age-related increase in bi-iliac breadth more than compensates for the age-related stature decline commencing in the 30s or 40s. Combined with the above-mentioned curvilinear BM change, this results in curvilinear estimation bias. However, for simulations involving low to moderate percent body fat, the stature/bi-iliac method works well in predicting body mass in younger and middle-aged adults. Such conditions are likely to have applied to most human paleontological and archaeological samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gritti, Fabrice; Guiochon, Georges
2015-03-06
Previous data have shown that columns packed with Titan-C18 particles could deliver a minimum reduced plate height as small as 1.7. Additionally, the reduction of the mesopore size after C18 derivatization and the subsequent restriction of sample diffusivity across the Titan-C18 particles were found responsible for the unusually small value of the experimental optimum reduced velocity (5 versus 10 for conventional particles) and for the large values of the average reduced solid-liquid mass transfer resistance coefficients (0.032 versus 0.016) measured for a series of seven n-alkanophenones. The improvements in column efficiency made by increasing the average mesopore size of the Titan silica from 80 to 120 Å are investigated quantitatively based on accurate measurements of the reduced coefficients (longitudinal diffusion, trans-particle mass transfer resistance, and eddy diffusion) and of the intra-particle diffusivity and pore and surface diffusion for the same series of n-alkanophenone compounds. The experimental results reveal an increase (from 0% to 30%) of the longitudinal diffusion coefficients for the same sample concentration distribution (from 0.25 to 4) between the particle volume and the external volume of the column, a 40% increase of the intra-particle diffusivity for the same sample distribution (from 1 to 7) between the particle skeleton volume and the bulk phase, and a 15-30% decrease of the solid-liquid mass transfer coefficient for the n-alkanophenone compounds. Pore and surface diffusion are increased by 60% and 20%, respectively. The eddy dispersion term and the maximum column efficiency (295,000 plates/m) remain virtually unchanged. The rate of increase of the total plate height with increasing chromatographic speed is reduced by 20%, and it is mostly controlled (75% and 70% for the 80 and 120 Å pore sizes) by the flow-rate dependence of the eddy dispersion term. Copyright © 2015 Elsevier B.V. All rights reserved.
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple-comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple-comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) for sex differences. Our findings have implications for the selection of multiple-comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
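The PPV behavior reported above follows the standard relation PPV = (power·R)/(power·R + α·(1 - R)), where R is the prior probability of a true effect; the prior used below is an assumed illustrative value, not one estimated in the paper.

```python
# Hedged sketch: PPV as a function of power, with an assumed prior.
def ppv(power, alpha=0.05, prior=0.2):
    # fraction of significant results that reflect true effects
    return power * prior / (power * prior + alpha * (1 - prior))

for power in (0.02, 0.2, 0.8):   # e.g. sensitivity < 2% at small N, as above
    print(power, round(ppv(power), 2))
```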
DEVELOPMENT OF AN RH-DENUDED MIE ACTIVE SAMPLING SYSTEM AND TARGETED AEROSOL CALIBRATION
The MIE pDR 1200 nephelometer provides time-resolved aerosol concentrations during personal and fixed-site sampling. Active (pumped) operation allows defining an upper PM2.5 particle size; however, this dramatically increases the aerosol mass passing through the phot...
Investigators must supply positive and negative controls. Current pricing for CIDR Program studies is for a minimum study size of 90 samples, increasing in multiples of 90; please inquire. FFPE samples are supported for MethylationEPIC.
Clinal variation of some mammals during the Holocene in Missouri
NASA Astrophysics Data System (ADS)
Purdue, James R.
1980-03-01
Eastern cottontail ( Sylvilagus floridanus), fox squirrel ( Sciurus niger), and gray squirrel ( Sciurus carolinensis) were examined for clinal variation during the Holocene. Modern samples of all three species displayed strong east-west patterns along the western edge of the eastern deciduous forest: S. floridanus and S. niger decrease and S. carolinensis increases in size. Archeological samples of S. carolinensis from Rodgers Shelter (23BE125), Benton County, Missouri, and Graham Cave (23MT2), Montgomery County, Missouri, indicated an increase in size from early to middle Holocene. Sylvilagus floridanus from Rodgers Shelter decreased in size from early to middle Holocene and then increased during the late Holocene to modern proportions. A literature survey reveals that clinal variation is a common phenomenon among modern homeotherms. In introduced species, clinal variation has developed after relatively few generations, indicating rapid adaptations to environmental conditions; often winter climatic variables are implicated. Morphological variation in the study species during the Holocene is interpreted as a response to changing climates. Studies of morphological clines may lead to another valuable data source for reconstructing past ecologies.
Performance of a Line Loss Correction Method for Gas Turbine Emission Measurements
NASA Astrophysics Data System (ADS)
Hagen, D. E.; Whitefield, P. D.; Lobo, P.
2015-12-01
International concern for the environmental impact of jet engine exhaust emissions in the atmosphere has led to increased attention on gas turbine engine emission testing. The Society of Automotive Engineers Aircraft Exhaust Emissions Measurement Committee (E-31) has published an Aerospace Information Report (AIR) 6241 detailing the sampling system for the measurement of non-volatile particulate matter from aircraft engines, and is developing an Aerospace Recommended Practice (ARP) for methodology and system specification. The Missouri University of Science and Technology (MST) Center for Excellence for Aerospace Particulate Emissions Reduction Research has led numerous jet engine exhaust sampling campaigns to characterize emissions at different locations in the expanding exhaust plume. Particle loss, due to various mechanisms, occurs in the sampling train that transports the exhaust sample from the engine exit plane to the measurement instruments. To account for the losses, both the size dependent penetration functions and the size distribution of the emitted particles need to be known. However in the proposed ARP, particle number and mass are measured, but size is not. Here we present a methodology to generate number and mass correction factors for line loss, without using direct size measurement. A lognormal size distribution is used to represent the exhaust aerosol at the engine exit plane and is defined by the measured number and mass at the downstream end of the sample train. The performance of this line loss correction is compared to corrections based on direct size measurements using data taken by MST during numerous engine test campaigns. The experimental uncertainty in these correction factors is estimated. Average differences between the line loss correction method and size based corrections are found to be on the order of 10% for number and 2.5% for mass.
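A hedged sketch of the correction idea: assume a lognormal exit-plane size distribution with a fixed geometric standard deviation, infer its median diameter from the measured downstream mass-to-number ratio, and integrate a size-dependent penetration function to obtain number and mass correction factors. The penetration model and all numeric values are placeholders, not the AIR 6241/ARP specifications or the MST data.

```python
# Hedged sketch: lognormal-based line-loss correction without direct size
# measurement. Penetration model, GSD, and measured ratio are assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

sigma = np.log(1.8)                 # assumed geometric standard deviation of 1.8
eta = lambda d: np.exp(-8.0 / d)    # toy penetration function vs diameter d [nm]

def lognorm_pdf(d, dg):
    return np.exp(-0.5 * (np.log(d / dg) / sigma) ** 2) / (d * sigma * np.sqrt(2 * np.pi))

def downstream(dg):
    # number and mass reaching the instruments, per unit exit-plane count
    num = quad(lambda d: eta(d) * lognorm_pdf(d, dg), 1.0, 1000.0)[0]
    mass = quad(lambda d: d ** 3 * eta(d) * lognorm_pdf(d, dg), 1.0, 1000.0)[0]
    return num, mass

ratio_meas = 35000.0                # measured downstream mass/number ratio (hypothetical, nm^3)
dg = brentq(lambda g: downstream(g)[1] / downstream(g)[0] - ratio_meas,
            5.0, 200.0)             # exit-plane median diameter [nm]

num_down, mass_down = downstream(dg)
mass_total = quad(lambda d: d ** 3 * lognorm_pdf(d, dg), 1.0, 1000.0)[0]
print("number correction factor:", round(1.0 / num_down, 2))
print("mass correction factor:  ", round(mass_total / mass_down, 2))
```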
Confidence crisis of results in biomechanics research.
Knudson, Duane
2017-11-01
Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, so it is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
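A minimal sketch of fitting a two-component mixture by maximum likelihood (EM, as implemented in scikit-learn); the synthetic series stands in for the rubber-price/exchange-rate data, and all parameters are illustrative.

```python
# Hedged sketch: two-component Gaussian mixture fitted by EM on a
# synthetic series (a stand-in for the paper's time series data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(-0.2, 0.5, 300),   # regime 1 (synthetic)
                         rng.normal(0.4, 1.2, 200)])   # regime 2 (synthetic)
gm = GaussianMixture(n_components=2, random_state=0).fit(series.reshape(-1, 1))
print("weights:", gm.weights_.round(2))
print("means:  ", gm.means_.ravel().round(2))
```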
ZnFe2O4 nanoparticles dispersed in a highly porous silica aerogel matrix: a magnetic study.
Bullita, S; Casu, A; Casula, M F; Concas, G; Congiu, F; Corrias, A; Falqui, A; Loche, D; Marras, C
2014-03-14
We report the detailed structural characterization and magnetic investigation of nanocrystalline zinc ferrite nanoparticles supported on a highly porous silica aerogel matrix, which differ in size (in the range 4-11 nm) and inversion degree (from 0.4 to 0.2) compared with bulk zinc ferrite, which has a normal spinel structure. The samples were investigated by zero-field-cooled/field-cooled and thermoremanent DC magnetization measurements, AC magnetization measurements and Mössbauer spectroscopy. The nanocomposites are superparamagnetic at room temperature; the temperature of the superparamagnetic transition decreases with particle size and is therefore mainly determined by the inversion degree rather than by the particle size, which would have the opposite effect on the blocking temperature. The contribution of particle interactions to the magnetic behavior of the nanocomposites decreases significantly in the sample with the largest particle size. The values of the anisotropy constant give evidence that the anisotropy constant decreases upon increasing particle size. All these results clearly indicate that, even when dispersed at low concentration in a non-magnetic, highly porous and insulating matrix, zinc ferrite nanoparticles show magnetic behavior similar to that displayed when they are unsupported or dispersed in a similar but denser matrix with higher loading. The effective anisotropy measured for our samples appears to be systematically higher than that measured for supported zinc ferrite nanoparticles of similar size, probably as a consequence of the high inversion degree.
Froud, Robert; Bjørkli, Tom; Bright, Philip; Rajendran, Dévan; Buchbinder, Rachelle; Underwood, Martin; Evans, David; Eldridge, Sandra
2015-11-30
Low back pain is a common and costly health complaint for which there are several moderately effective treatments. In some fields there is evidence that funder and financial conflicts are associated with trial outcomes. It is not clear whether effect sizes in back pain trials relate to journal impact factor, reporting of conflicts of interest, or reporting of funding. We performed a systematic review of English-language papers reporting randomised controlled trials of treatments for non-specific low back pain, published between 2006 and 2012. We modelled the relationship using 5-year journal impact factor, categories of reported conflicts of interest, and categories of reported funding (reported none and reported some, compared with not reporting these) using meta-regression, adjusting for sample size and publication year. We also considered whether impact factor could be predicted by the direction of outcome or trial sample size. We could abstract data to calculate effect size in 99 of 146 trials that met our inclusion criteria. Effect size is not associated with impact factor, reporting of funding source, or reporting of conflicts of interest. However, explicitly reporting 'no trial funding' is strongly associated with larger absolute values of effect size (adjusted β=1.02 (95 % CI 0.44 to 1.59), P=0.001). Impact factor increases by 0.008 (0.004 to 0.012) per unit increase in trial sample size (P<0.001), but does not differ by reported direction of the LBP trial outcome (P=0.270). The absence of associations between effect size and impact factor, reporting of funding sources, and conflicts of interest reflects positively on research and publisher conduct in the field. Strong evidence of a large association between the absolute magnitude of effect size and explicit reporting of 'no funding' suggests authors of unfunded trials are likely to report larger effect sizes, notwithstanding direction. This could relate in part to quality, resources, and/or how pragmatic a trial is.
NASA Astrophysics Data System (ADS)
Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.
2012-12-01
In early studies pioneering catchment-wide erosion rate measurements from 10Be in alluvial sediment, samples were taken at river mouths and the sand-size grain fraction from the riverbeds was used to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment, and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates in rivers eroding active tectonic regions. These active regions create higher-energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-size fraction fully capture the average erosion of the upstream drainage area, or does a different grain-size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples were sieved to either a 0.125-0.710 mm fraction or a 0.125-4 mm fraction (depending on how much of the former was available). After measuring 10Be in these 8 samples and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each pair of sample points. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e., negative rates). We therefore hypothesize that the coarser grains we included preferentially sample a smaller upstream area, not the entire upstream catchment, as is assumed when measurements are based solely on the sand-size fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain-size fractions, 0.125-0.710 mm, 0.710-4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in which grain-size fraction yields the highest erosion rate, the coarser fractions generally give higher erosion rates. More significant are the subcatchment erosion rate calculations, which suggest that even medium-sized grains (0.710-4 mm) sample an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not average the entire upstream area, even when using the sand-size fraction, making the resulting erosion rates more pertinent to that sample point than to the entire catchment.
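The subcatchment calculation referenced above (after Granger et al. [1996]) is an area-weighted unmixing of nested samples; the function and numbers below are a hypothetical sketch, not the study's data. Note that when the upstream term outweighs the downstream one, the formula returns a negative rate, which is the kind of nonsensical result the authors report.

```python
# Hedged sketch: area-weighted unmixing of nested catchment-average rates.
def subcatchment_rate(E_down, A_down, E_up, A_up):
    """Erosion rate of the area between an upstream and a downstream sample,
    assuming each sample truly averages its full upstream area. Can go
    negative when E_down * A_down < E_up * A_up (the pathology noted above)."""
    return (E_down * A_down - E_up * A_up) / (A_down - A_up)

# hypothetical values, e.g. mm/kyr and km^2
print(subcatchment_rate(E_down=120.0, A_down=350.0, E_up=80.0, A_up=200.0))
```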
NASA Astrophysics Data System (ADS)
Gnanasaravanan, S.; Rajkumar, P.
2013-05-01
The present study investigates the characterization of minerals in river sand (R-Sand) and manufactured sand (M-Sand) through FTIR spectroscopic studies. The R-Sand was collected from seven different locations along the Cauvery River, and the M-Sand from eight different manufacturers around the Cauvery River belt in the Salem, Erode, Tirupur and Namakkal districts of Tamilnadu, India. To extend the effectiveness of the analysis, the samples were subjected to grain-size separation to classify the bulk samples into different grain sizes. All the samples were analyzed using an FTIR spectrometer. The number of minerals identified with the help of FTIR spectra in the overall (bulk) samples is 14 for R-Sand and 13 for M-Sand; upon grain-size separation, this increases to 31 for R-Sand and 20 for M-Sand. Among all minerals, quartz plays a major role. The relative distribution and crystallinity of quartz are discussed based on the computed extinction coefficient and crystallinity index values. No major variation was found in M-Sand upon grain-size separation.
Spatial and temporal variation of body size among early Homo.
Will, Manuel; Stock, Jay T
2015-05-01
The estimation of body size among the earliest members of the genus Homo (2.4-1.5Myr [millions of years ago]) is central to interpretations of their biology. It is widely accepted that Homo ergaster possessed increased body size compared with Homo habilis and Homo rudolfensis, and that this may have been a factor involved with the dispersal of Homo out of Africa. The study of taxonomic differences in body size, however, is problematic. Postcranial remains are rarely associated with craniodental fossils, and taxonomic attributions frequently rest upon the size of skeletal elements. Previous body size estimates have been based upon well-preserved specimens with a more reliable species assessment. Since these samples are small (n < 5) and disparate in space and time, little is known about geographical and chronological variation in body size within early Homo. We investigate temporal and spatial variation in body size among fossils of early Homo using a 'taxon-free' approach, considering evidence for size variation from isolated and fragmentary postcranial remains (n = 39). To render the size of disparate fossil elements comparable, we derived new regression equations for common parameters of body size from a globally representative sample of hunter-gatherers and applied them to available postcranial measurements from the fossils. The results demonstrate chronological and spatial variation but no simple temporal or geographical trends for the evolution of body size among early Homo. Pronounced body size increases within Africa take place only after hominin populations were established at Dmanisi, suggesting that migrations into Eurasia were not contingent on larger body sizes. The primary evidence for these marked changes among early Homo is based upon material from Koobi Fora after 1.7Myr, indicating regional size variation. The significant body size differences between specimens from Koobi Fora and Olduvai support the cranial evidence for at least two co-existing morphotypes in the Early Pleistocene of eastern Africa. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sample size allocation in multiregional equivalence studies.
Liao, Jason J Z; Yu, Ziji; Li, Yulan
2018-06-17
With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. Data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. An MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many operational and scientific challenges in conducting drug development globally. One important question in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.
Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils
2016-05-01
Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), Cumulative Sum (CUSUM) and Exponentially Weighted Moving Average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real time. The results showed that EWMA and PSHEW had higher CumSe (compared with CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in seroprevalence were detected later with CUSUM than with EWMA and PSHEW for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe=1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster detection. However, the practical implications of increasing the sample size (such as the costs associated with the disease) should also be taken into account. Copyright © 2016 Elsevier B.V. All rights reserved.
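One of the three algorithms, as a sketch: an EWMA chart with time-varying control limits applied to simulated weekly seroprevalence from sentinel herds. The baseline, the shift, and the chart parameters (λ, L) are illustrative choices, not the study's settings.

```python
# Hedged sketch: EWMA monitoring of weekly seroprevalence estimates.
# Baseline p0, shifted p1, herd count, lambda and L are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_herds, p0, p1 = 100, 0.10, 0.20
weeks = np.concatenate([rng.binomial(n_herds, p0, 52),    # in-control year
                        rng.binomial(n_herds, p1, 26)]) / n_herds

lam, L = 0.2, 3.0
sigma = np.sqrt(p0 * (1 - p0) / n_herds)
z = p0
for t, p_hat in enumerate(weeks):
    z = lam * p_hat + (1 - lam) * z            # EWMA update
    # standard time-varying EWMA upper control limit
    limit = p0 + L * sigma * np.sqrt(lam / (2 - lam) *
                                     (1 - (1 - lam) ** (2 * (t + 1))))
    if z > limit:
        print("alarm at week", t)
        break
```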
NASA Astrophysics Data System (ADS)
Kaiser, Michael; Grunwald, Dennis; Marhan, Sven; Poll, Christian; Bamminger, Chris; Ludwig, Bernard
2016-04-01
Potential increases in soil temperature due to climate change might result in intensified soil organic matter (SOM) decomposition and thus higher CO2 emissions. Management options to increase and stabilize SOM include the application of biochar. However, the effects of biochar amendments under elevated soil temperatures on SOM dynamics are largely unknown. The objective of this study was to analyze the effect of biochar application and elevated soil temperature on the amount and composition of OM associated with fractions of different turnover kinetics. Samples were taken from four treatments of the Hohenheim Climate Change Experiment with the factors temperature (ambient, or elevated by 2.5 °C at 4 cm depth six years before sampling) and biochar (control, and 30 t/ha Miscanthus pyrolysis biochar applied one year before sampling) at two depths (0-5 and 5-15 cm). Basal respiration and microbial biomass C were analyzed within an incubation experiment. Aggregate size-fractions were separated by wet sieving, and the free light, occluded light (oLF), and heavy fractions were isolated by density fractionation. All fractions were analyzed for organic C and δ13C as well as by infrared spectroscopy. Preliminary data suggest that biochar significantly increased basal respiration and that microbial biomass C was significantly affected by elevated temperature. No biochar-C was found in the microbial biomass. Biochar and elevated temperature had only minor effects on the organic C associated with aggregate-size classes, although biochar was incorporated into all fractions already after one year of application. Biochar application significantly increased the organic C associated with the oLF. In most samples affected by biochar, the proportion of C=O groups was significantly increased. The results suggest that biochar-mineral interactions formed within one year, leading to aggregate occlusion of the applied biochar. At least in the short term, the effect of biochar on the amount and composition of OM associated with different aggregate-size and density fractions seems to be independent of soil temperature.
The effect of Nb additions on the thermal stability of melt-spun Nd2Fe14B
NASA Astrophysics Data System (ADS)
Lewis, L. H.; Gallagher, K.; Panchanathan, V.
1999-04-01
Elevated-temperature superconducting quantum interference device (SQUID) magnetometry was performed on two samples of melt-spun and optimally annealed Nd2Fe14B; one sample contained 2.3 wt % Nb and one was Nb-free. Continuous full hysteresis loops were measured with a SQUID magnetometer at T=630 K, above the Curie temperature of the 2-14-1 phase, as a function of field (−1 T ⩽ H ⩽ 1 T) and time on powdered samples sealed in quartz tubes at a vacuum of 10⁻⁶ Torr. The measured hysteresis signals were deconstructed into a high-field linear paramagnetic portion and a low-field ferromagnetic signal of unclear origin. While the saturation magnetization of the ferromagnetic signal from both samples grows with time, the signal from the Nb-containing sample is always smaller. The coercivity data are consistent with a constant impurity particle size in the Nb-containing sample and an increasing impurity particle size in the Nb-free sample. The paramagnetic susceptibility signal from the Nd2Fe14B-type phase in the Nb-free sample increases with time, while that from the Nb-containing sample remains constant. It is suggested that the presence of Nb actively suppresses the thermally induced formation of poorly crystallized Fe-rich regions that apparently exist in samples of both compositions.
Genetic Stock Identification Of Production Colonies Of Russian Honey Bees
USDA-ARS?s Scientific Manuscript database
The prevalence of Nosema ceranae in managed honey bee colonies has increased dramatically in the past 10 – 20 years worldwide. A variety of genetic testing methods for species identification and prevalence are now available. However, sample size and preservation method of samples prior to testing hav...
Molten salt synthesis of nanocrystalline phase of high dielectric constant material CaCu3Ti4O12.
Prakash, B Shri; Varma, K B R
2008-11-01
Nanocrystalline powders of the giant-dielectric-constant material CaCu3Ti4O12 (CCTO) have been prepared successfully by molten salt synthesis (MSS) using KCl at 750 degrees C/10 h, which is significantly lower than the calcination temperature (approximately 1000 degrees C) employed to obtain phase-pure CCTO in the conventional solid-state reaction route. The water-washed, molten-salt-synthesized powder, characterized by X-ray powder diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM), was confirmed to be phase-pure CCTO with approximately 150 nm, nearly spherical crystallites. The decrease in the formation temperature/duration of CCTO in the MSS method was attributed to an increase in the diffusion rate or a decrease in the diffusion length of the reacting ions in the molten salt medium. As a consequence of liquid-phase sintering, pellets of as-synthesized, KCl-containing CCTO powder exhibited higher sinterability and grain size than KCl-free CCTO samples prepared by both the MSS method and the conventional solid-state reaction route. The grain size and the dielectric constant of the KCl-containing CCTO ceramics increased with increasing sintering temperature (900 degrees C-1050 degrees C). Indeed, the dielectric constants of these ceramics were higher than those of KCl-free CCTO samples prepared by both the MSS method and the solid-state reaction route and sintered at the same temperature. The internal barrier layer capacitance (IBLC) model was invoked to correlate the observed dielectric constant with the grain size in these samples.
Shivaramu, N J; Lakshminarasappa, B N; Nagabhushana, K R; Singh, Fouran
2016-02-05
Nanocrystalline Y2O3 is synthesized by the solution combustion technique using urea and glycine as fuels. The X-ray diffraction (XRD) pattern of the as-prepared sample shows an amorphous nature, while annealed samples show the cubic phase. The average crystallite size, calculated using Scherrer's formula, is found to be in the range 14-30 nm for samples synthesized using urea and 15-20 nm for samples synthesized using glycine. Field emission scanning electron microscopy (FE-SEM) images of Y2O3 samples annealed at 1173 K show well-separated, spherical particles with an average particle size in the range 28-35 nm. Fourier transform infrared (FTIR) and Raman spectroscopy reveal stretching of the Y-O bond. Electron spin resonance (ESR) shows V(-) center, O2(-) and Y(2+) defects. A broad photoluminescence (PL) emission with a peak at ~386 nm is observed when the sample is excited at 252 nm. Thermoluminescence (TL) properties of γ-irradiated Y2O3 nanopowder are studied at a heating rate of 5 K s(-1). The samples prepared using urea show a prominent, well-resolved peak at ~383 K and a weak one at ~570 K. It is also found that the TL glow peak intensity (I(m1)) at ~383 K increases with γ-dose up to ~6.0 kGy and then decreases with further increase in dose. Y2O3 prepared using glycine shows prominent TL glow peaks at 396 K and 590 K. Of the two fuels, urea yields simple, well-resolved TL glow curves; this might be a fuel and hence particle-size effect. The kinetic parameters are calculated by Chen's glow-curve peak shape method and the results are discussed in detail. Copyright © 2015. Published by Elsevier B.V.
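The Scherrer estimate mentioned above, D = Kλ/(β·cos θ), in one short calculation; the peak position and FWHM below are hypothetical values chosen to land in the reported 14-30 nm range.

```python
# Hedged sketch of a Scherrer crystallite-size estimate. Peak position and
# FWHM are hypothetical; Cu K-alpha wavelength and K = 0.9 are standard.
import numpy as np

K, wavelength = 0.9, 0.15406          # shape factor; Cu K-alpha, nm
two_theta = np.radians(29.1)          # peak position 2-theta (hypothetical)
beta = np.radians(0.55)               # FWHM in radians (hypothetical)
D = K * wavelength / (beta * np.cos(two_theta / 2))
print(f"crystallite size ~ {D:.1f} nm")
```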
Study of structural and magnetic properties of melt spun Nd2Fe13.6Zr0.4B ingot and ribbon
NASA Astrophysics Data System (ADS)
Amin, Muhammad; Siddiqi, Saadat A.; Ashfaq, Ahmad; Saleem, Murtaza; Ramay, Shahid M.; Mahmood, Asif; Al-Zaghayer, Yousef S.
2015-12-01
Nd2Fe13.6Zr0.4B hard magnetic material was prepared using the arc-melting technique on a water-cooled copper hearth kept under an argon atmosphere. The prepared samples, Nd2Fe13.6Zr0.4B ingot and ribbon, were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM) for crystal structure determination and morphological studies, respectively. The magnetic properties of the samples were explored using a vibrating sample magnetometer (VSM). The lattice constants increased slightly due to the difference between the ionic radii of Fe and Zr. The bulk density decreased due to the smaller molar weight and lower density of Zr compared to Fe. The ingot sample shows an almost single crystalline phase with larger crystallite sizes, whereas the ribbon sample shows a mixture of amorphous and crystalline phases with smaller crystallite sizes. The crystallinity of the material was strongly affected by high-temperature thermal treatment. Magnetic measurements show a noticeable variation in magnetic behavior with the change in crystallite size: the ingot sample shows soft magnetic behavior, while the ribbon shows hard magnetic behavior.
Ye, Peng; Vander Wal, Randy; Boehman, Andre L.; ...
2014-12-26
The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saxena, Shailendra K., E-mail: phd1211512@iiti.ac.in; Sahu, Gayatri; Sagdeo, Pankaj R.
Quantum confinement has been studied in cheese-like silicon nanostructures (Ch-SiNS) fabricated by metal induced chemical etching using different etching times. Scanning electron microscopy is used for morphological study of these Ch-SiNS. Visible photoluminescence (PL) emission is observed from the samples under UV excitation at room temperature due to the quantum confinement effect. The average size of the silicon nanostructures (SiNS) present in the samples has been estimated with the bond polarizability model using Raman spectroscopy, from the red-shift observed for the SiNS as compared to bulk silicon. The size of the SiNS present in the samples decreases as the etching time increases from 45 to 75 minutes.
A strategy for characterizing aerosol-sampling transport efficiency.
NASA Astrophysics Data System (ADS)
Schwarz, J. P.
2017-12-01
A fundamental concern when sampling aerosol in the laboratory or in situ, on the ground or (especially) from aircraft, is characterizing transport losses due to particles contacting the walls of the tubing used for transport. Depending on the size range of the aerosol, different mechanisms dominate these losses: diffusion for the ultra-fine particles, and inertial and gravitational settling losses for the coarse mode. In the coarse mode, losses become intractable very quickly with increasing particle size above 5 µm diameter. Here we discuss these issues and present a concept for reducing aerosol losses via strategic dilution with porous tubing, including results of laboratory testing of a prototype. We infer the potential value of this approach to atmospheric aerosol sampling.
Tenailleau, Quentin M; Bernard, Nadine; Pujol, Sophie; Houot, Hélène; Joly, Daniel; Mauny, Frédéric
2015-01-01
Environmental epidemiological studies rely on quantification of the exposure level in a surface defined as the subject's exposure area. For residential exposure, this area is often the subject's neighborhood. However, variability in the size and nature of neighborhoods makes comparison of findings across studies difficult. This article examines the impact of the neighborhood definition on environmental noise exposure levels obtained from four commonly used sampling techniques: address point, façade, buffers, and official zoning. A high-definition noise model, built for a middle-sized French city, was used to estimate LAeq,24 h exposure in the vicinity of 10,825 residential buildings. Twelve noise exposure indicators were used to assess inhabitants' exposure, and the influence of urban environmental factors was analyzed using multilevel modeling. When the sampled area increases, the average exposure increases (+3.9 dB), whereas the SD decreases (-1.6 dB) (P<0.01). Most of the indicators differ statistically. When comparing indicators from the 50-m and 400-m radius buffers, the assigned LAeq,24 h level varies across buildings from -9.4 to +22.3 dB. This variation is influenced by urban environmental characteristics (P<0.01). On the basis of these findings, sampling technique, neighborhood size, and environmental composition should be carefully considered in further exposure studies.
NASA Astrophysics Data System (ADS)
Austin, N. J.; Evans, B.; Dresen, G. H.; Rybacki, E.
2009-12-01
Deformed rocks commonly consist of several mineral phases, each with dramatically different mechanical properties. In both naturally and experimentally deformed rocks, deformation mechanisms and, in turn, strength are commonly investigated by analyzing microstructural elements such as crystallographic preferred orientation (CPO) and recrystallized grain size. Here, we investigated the effect of variations in the volume fraction and geometry of rigid second phases on the strength and on the evolution of CPO and grain size of synthetic calcite rocks. Experiments using triaxial compression and torsional loading were conducted at 1023 K and equivalent strain rates between approximately 2 × 10⁻⁶ and 10⁻³ s⁻¹. The second phases in these synthetic assemblages are rigid carbon spheres or splinters with known particle size distributions and geometries, which are chemically inert at our experimental conditions. Under hydrostatic conditions, the addition of as little as 1 vol.% carbon spheres poisons normal grain growth. Shape is also important: for an equivalent volume fraction and grain dimension, carbon splinters result in a finer calcite grain size than carbon spheres. In samples deformed at “high” strain rates, or which have “large” mean free spacing of the pinning phase, the final recrystallized grain size is well explained by competing grain growth and grain size reduction processes, where the grain-size reduction rate is determined by the rate at which mechanical work is done during deformation. In these samples, the final grain size is finer than in samples heat-treated hydrostatically for equivalent durations. The addition of 1 vol.% spheres to calcite has little effect on either the strength or CPO development. Adding 10 vol.% splinters increases the strength at low strains and low strain rates, but has little effect on the strength at high strains and/or high strain rates, compared to pure samples. A CPO similar to that in pure samples is observed, although the intensity is reduced in samples containing 10 vol.% splinters. When 10 vol.% spheres are added to calcite, the strength of the aggregate is reduced, and a distinct and strong CPO develops. Viscoplastic self-consistent calculations were used to model the evolution of CPO in these materials, and these suggest a variation in the activity of the various slip systems between pure samples and those containing 10 vol.% spheres. The applicability of these laboratory observations has been tested with field-based observations from the Morcles Nappe (Swiss Helvetic Alps). In the Morcles Nappe, calcite grain size becomes progressively finer as the thrust contact is approached, with a concomitant increase in CPO intensity; the strongest CPOs occur in the finest-grained, quartz-rich limestones nearest the thrust contact, which are interpreted to have been deformed to the highest strains. Thus, our laboratory results may be used to provide insight into the distribution of strain observed in natural shear zones.
A contemporary decennial global Landsat sample of changing agricultural field sizes
NASA Astrophysics Data System (ADS)
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite-based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provides the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by historic patterns of LCLU (Albania, France and India). Landsat images sensed in two time periods, up to 25 years apart, are used to extract field object classifications at each hotspot using a multispectral image segmentation approach. The field size distributions for the two periods are compared statistically, quantifying examples of significant field size increases associated primarily with agricultural technological innovation (Argentina and U.S.) and decreases associated with rapid societal change (Albania and Zimbabwe). The implications of this research, and the potential of higher spatial resolution data from planned global coverage satellites to provide improved agricultural monitoring, are discussed.
NASA Astrophysics Data System (ADS)
Jiang, Chengpeng; Fan, Xi'an; Hu, Jie; Feng, Bo; Xiang, Qiusheng; Li, Guangqiang; Li, Yawei; He, Zhu
2018-04-01
During the past few decades, Bi2Te3-based alloys have been investigated extensively because of their promising application in the area of low temperature waste heat thermoelectric power generation. However, their thermal stability must be evaluated to explore the appropriate service temperature. In this work, the thermal stability of zone melting p-type (Bi, Sb)2Te3-based ingots was investigated under different annealing treatment conditions. The effect of service temperature on the thermoelectric properties and hardness of the samples was also discussed in detail. The results showed that the grain size, density, dimension size and mass remained nearly unchanged when the service temperature was below 523 K, which suggested that the geometry size of zone melting p-type (Bi, Sb)2Te3-based materials was stable below 523 K. The power factor and Vickers hardness of the ingots also changed little and maintained good thermal stability. Unfortunately, the thermal conductivity increased with increasing annealing temperature, which resulted in an obvious decrease of the zT value. In addition, the thermal stabilities of the zone melting p-type (Bi, Sb)2Te3-based materials and the corresponding powder metallurgy samples were also compared. All evidence implied that the thermal stabilities of the zone-melted (ZMed) p-type (Bi, Sb)2Te3 ingots in terms of crystal structure, geometry size, power factor (PF) and hardness were better than those of the corresponding powder metallurgy samples. However, their thermal stabilities in terms of zT values were similar under different annealing temperatures.
NASA Astrophysics Data System (ADS)
Shin, Jung-Wook; Park, Jinku; Choi, Jang-Geun; Jo, Young-Heon; Kang, Jae Joong; Joo, HuiTae; Lee, Sang Heon
2017-12-01
The aim of this study was to examine the size structure of phytoplankton under varying coastal upwelling intensities and to determine the resulting primary productivity in the southwestern East Sea. Samples of phytoplankton assemblages were collected on five occasions from the Hupo Bank, off the east coast of Korea, during 2012-2013. Because two major surface currents have a large effect on water mass transport in this region, we first performed a Backward Particle Tracking Experiment (BPTE) to determine the coastal sea from which the collected samples originated according to advection time of BPTE particles, following which we used upwelling age (UA) to determine the intensity of coastal upwelling in the region of origin for each sample. Only samples that were affected by coastal upwelling in the region of origin were included in subsequent analyses. We found that as UA increased, there was a decreasing trend in the concentration of picophytoplankton, and increasing trends in the concentration of nanophytoplankton and microphytoplankton. We also examined the relationship between the size structure of phytoplankton and primary productivity in the Ulleung Basin (UB), which has experienced significant variation over the past decade. We found that primary productivity in UB was closely related to the strength of the southerly wind, which is the most important mechanism for coastal upwelling in the southwestern East Sea. Thus, the size structure of phytoplankton is determined by the intensity of coastal upwelling, which is regulated by the southerly wind, and makes an important contribution to primary productivity.
Methodological Issues in Curriculum-Based Reading Assessment.
ERIC Educational Resources Information Center
Fuchs, Lynn S.; And Others
1984-01-01
Three studies involving elementary students examined methodological issues in curriculum-based reading assessment. Results indicated that (1) whereas sample duration did not affect concurrent validity, increasing duration reduced performance instability and increased performance slopes and (2) domain size was related inversely to performance slope…
Hampton, Paul M
2018-02-01
As body size increases, some predators eliminate small prey from their diet, exhibiting an ontogenetic shift toward larger prey. In contrast, some predators show a telescoping pattern of prey size in which both large and small prey are consumed as predator size increases. To explore a functional explanation for the two feeding patterns, I examined feeding effort, as both handling time and number of upper jaw movements, during ingestion of fish of consistent size. I used a range of body sizes from two snake species that exhibit ontogenetic shifts in prey size (Nerodia fasciata and N. rhombifer) and a species that exhibits telescoping prey size with increased body size (Thamnophis proximus). For the two Nerodia species, individuals with small or large heads exhibited greater feeding effort compared to snakes of intermediate size. However, for T. proximus, measures of feeding effort were negatively correlated with head length and snout-vent length (SVL). These data indicate that ontogenetic shifters of prey size develop trophic morphology large enough that feeding effort increases for disproportionately small prey. I also compared changes in body size between the two diet strategies for actively foraging snake species, using data gleaned from the literature, to determine whether increased change in body size, and thereby feeding morphology, is observable in snakes regardless of prey type or foraging habitat. Of the 30 species sampled from the literature, snakes that exhibit ontogenetic shifts in prey size show a greater magnitude of change in SVL than species with telescoping prey size patterns. Based upon the results of the two data sets above, I conclude that ontogenetic shifts away from small prey occur in snakes due, in part, to growth of body size and feeding structures beyond what is efficient for handling small prey. Copyright © 2017. Published by Elsevier GmbH.
Vertical distribution of the prokaryotic cell size in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
La Ferla, R.; Maimone, G.; Azzaro, M.; Conversano, F.; Brunet, C.; Cabral, A. S.; Paranhos, R.
2012-12-01
Distributions of prokaryotic cell size and morphology were studied in different areas of the Mediterranean Sea using image analysis of samples collected from the surface down to bathypelagic layers (maximum depth 4,900 m) in the Southern Tyrrhenian, Southern Adriatic and Eastern Mediterranean Seas. The distribution of prokaryotic cell size in marine ecosystems is very often not considered, which makes our study among the first in prokaryotic ecology. In the deep Mediterranean layers, a usually overlooked form of carbon sequestration through prokaryotic cells was highlighted, consistent with an increase in cell size with the depth of the water column. A wide range of prokaryotic cell volumes was observed (between 0.045 and 0.566 μm3). The increase in cell size with depth ran counter to the distribution of cell abundance. Our microscopic observations were confirmed by the increasing HNA/LNA ratio (HNA, cells with high nucleic acid content; LNA, cells with low nucleic acid content) along the water column. The implication of the increasing cell size with depth is that the quantitative estimate of prokaryotic biomass changes along the water column and the estimated amount of carbon sequestered in the deep biota is enhanced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellari, Michele
2013-11-20
The distribution of galaxies on the mass-size plane as a function of redshift or environment is a powerful test for galaxy formation models. Here we use integral-field stellar kinematics to interpret the variation of the mass-size distribution in two galaxy samples spanning extreme environmental densities. The samples are both identically and nearly mass-selected (stellar mass M {sub *} ≳ 6 × 10{sup 9} M {sub ☉}) and volume-limited. The first consists of nearby field galaxies from the ATLAS{sup 3D} parent sample. The second consists of galaxies in the Coma Cluster (Abell 1656), one of the densest environments for which good, resolved spectroscopy can be obtained. The mass-size distribution in the dense environment differs from the field one in two ways: (1) spiral galaxies are replaced by bulge-dominated disk-like fast-rotator early-type galaxies (ETGs), which follow the same mass-size relation and have the same mass distribution as in the field sample; (2) the slow-rotator ETGs are segregated in mass from the fast rotators, with their size increasing proportionally to their mass. A transition between the two processes appears around the stellar mass M {sub crit} ≈ 2 × 10{sup 11} M {sub ☉}. We interpret this as evidence for bulge growth (outside-in evolution) and bulge-related environmental quenching dominating at low masses, with little influence from merging. In contrast, significant dry mergers (inside-out evolution) and halo-related quenching drive the mass and size growth at the high-mass end. The existence of these two processes naturally explains the diverse size evolution of galaxies of different masses and the separability of mass and environmental quenching.
The effect of plasma pre-treatment on NaHCO3 desizing of blended sizes on cotton fabrics
NASA Astrophysics Data System (ADS)
Li, Xuming; Qiu, Yiping
2012-03-01
The influence of a He/O2 atmospheric pressure plasma jet pre-treatment on subsequent NaHCO3 desizing of blends of starch phosphate and poly(vinyl alcohol) on cotton fabrics is investigated. Atomic force microscopy and scanning electron microscopy analysis indicate that the surface topography of the samples changes significantly and the surface roughness increases with increasing plasma exposure time. X-ray photoelectron spectroscopy analysis shows that a larger number of oxygen-containing polar groups are formed on the sized fabric surface after the plasma treatment. The percent desizing ratio (PDR) results indicate that the plasma pre-treatment facilitates removal of the blended sizes from the cotton fabrics in the subsequent NaHCO3 treatment, and the PDR increases with increasing plasma treatment time. Plasma technology is a promising pre-treatment for desizing of blended sizes due to the dramatically reduced desizing time.
Li, Yiwen; Shen, Yang; Pi, Lu; Hu, Wenli; Chen, Mengqin; Luo, Yan; Li, Zhi; Su, Shijun; Ding, Sanglan; Gan, Zhiwei
2016-01-01
A total of 27 settled dust samples were collected from urban roads, parks, and roofs in Chengdu, China to investigate particle size distribution and perchlorate levels in different size fractions. Fine particle size fractions (<250 μm) were the dominant composition of the settled dust samples, with mean percentages of 80.2%, 69.5%, and 77.2% for the urban roads, roofs, and parks, respectively. Perchlorate was detected in all of the size-fractionated dust samples, with concentrations ranging from 73.0 to 6160 ng g(-1), and the median perchlorate level increased with decreasing particle size. The perchlorate level in the finest fraction (<63 μm) was significantly higher than those in the coarser fractions. To our knowledge, this is the first report on perchlorate concentrations in different particle size fractions. The calculated perchlorate loadings revealed that perchlorate was mainly associated with finer particles (<125 μm). An exposure assessment indicated that exposure to perchlorate via settled road dust intake is safe for both children and adults in Chengdu, China. However, because perchlorate mainly exists in fine particles, there is potential for perchlorate to transfer into surface water and the atmosphere by runoff, wind erosion, or traffic emission; this could act as an important perchlorate pollution source for the indoor environment and merits further study.
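A minimal sketch of the loading calculation mentioned above: each fraction's share of the total dust-borne perchlorate is its concentration weighted by its mass share, normalized over fractions. All numbers below are hypothetical, not values from the study.

```python
# Toy fraction concentrations (ng/g) and mass shares of the bulk dust.
conc = {"<63": 3000, "63-125": 1500, "125-250": 800, ">250": 300}
mass_share = {"<63": 0.30, "63-125": 0.25, "125-250": 0.25, ">250": 0.20}

# Loading per fraction = concentration x mass share; normalize to percent.
load = {k: conc[k] * mass_share[k] for k in conc}
total = sum(load.values())
for k, v in load.items():
    print(f"{k} um: {100 * v / total:.0f}% of dust-borne perchlorate")
```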
Physicochemical properties of respirable-size lunar dust
NASA Astrophysics Data System (ADS)
McKay, D. S.; Cooper, B. L.; Taylor, L. A.; James, J. T.; Thomas-Keprta, K.; Pieters, C. M.; Wentworth, S. J.; Wallace, W. T.; Lee, T. S.
2015-02-01
We separated the respirable dust and other size fractions from Apollo 14 bulk sample 14003,96 in a dry nitrogen environment. While our toxicology team performed in vivo and in vitro experiments with the respirable fraction, we studied the size distribution and shape, chemistry, mineralogy, spectroscopy, iron content and magnetic resonance of various size fractions. These represent the finest-grained lunar samples ever measured for either FMR np-Fe0 index or precise bulk chemistry, and are the first instance we know of in which SEM/TEM samples have been obtained without using liquids. The concentration of single-domain, nanophase metallic iron (np-Fe0) increases as particle size diminishes to 2 μm, confirming previous extrapolations. Size-distribution studies disclosed that the most frequent particle size was in the 0.1-0.2 μm range suggesting a relatively high surface area and therefore higher potential toxicity. Lunar dust particles are insoluble in isopropanol but slightly soluble in distilled water (~0.2 wt%/3 days). The interaction between water and lunar fines, which results in both agglomeration and partial dissolution, is observable on a macro scale over time periods of less than an hour. Most of the respirable grains were smooth amorphous glass. This suggests less toxicity than if the grains were irregular, porous, or jagged, and may account for the fact that lunar dust is less toxic than ground quartz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, K.S.; Cauvet, D.; Lybeer, M.
1999-04-01
Anthropogenic activities related to 100 years of industrialization in the metropolitan Detroit area have significantly enriched the bed sediment of the lower reaches of the Rouge River in Cr, Cu, Fe, Ni, Pb, and Zn. These enriched elements, which may represent a threat to biota, are predominantly present in sequentially extracted reducible and oxidizable chemical phases with small contributions from residual phases. In size-fractionated samples trace metal concentrations generally increase with decreasing particle size, with the greatest contribution to this increase from the oxidizable phase. Experimental results obtained on replicate samples of river sediment demonstrate that the accuracy of the sequential extraction procedure, evaluated by comparing the sums of the three individual fractions, is generally better than 10%. Oxidizable and reducible phases therefore constitute important sources of potentially available heavy metals that need to be explicitly considered when evaluating sediment and water quality impacts on biota.
Application of a Probalistic Sizing Methodology for Ceramic Structures
NASA Astrophysics Data System (ADS)
Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit
2012-07-01
Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads designers to size ceramic parts by increasing the safety factors applied to the maximum stresses, which oversizes the structures. This is inconsistent with a major driver in space architecture: mass. This paper presents a methodology for sizing ceramic structures based on their failure probability. From failure tests on samples, the Weibull law that characterizes the strength distribution of the material is obtained. A-value (Q0.0195%) and B-value (Q0.195%) strengths are then assessed to take into account the limited number of samples, and a knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
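As context for the Weibull-law step, here is a minimal sketch of how a two-parameter Weibull strength law can be fitted from sample failure tests and used to predict a failure probability. It uses median-rank regression on illustrative data; it is not the paper's exact A-/B-value knock-down procedure.

```python
import numpy as np

def fit_weibull_median_rank(strengths):
    """Fit Weibull modulus m and scale sigma0 from failure stresses by
    median-rank linear regression:
    ln(-ln(1 - F_i)) = m*ln(sigma_i) - m*ln(sigma0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # Bernard's median-rank estimate
    y = np.log(-np.log(1.0 - F))
    x = np.log(s)
    m, c = np.polyfit(x, y, 1)                   # slope = m, intercept = -m*ln(sigma0)
    return m, np.exp(-c / m)

# Hypothetical bend-test strengths (MPa) for a small ceramic sample set:
m, s0 = fit_weibull_median_rank([312, 287, 345, 301, 330, 296, 318, 352])
pf = 1.0 - np.exp(-(250.0 / s0) ** m)            # failure probability at 250 MPa
print(f"m = {m:.1f}, sigma0 = {s0:.0f} MPa, Pf(250 MPa) = {pf:.2%}")
```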
An Investigation of Community Attitudes Toward Blast Noise: Complaint Survey Protocol
2010-10-11
increase complaints (Hume et al., 2003a). If an individual is already stressed by other non-noise factors, the source noise may be more annoying than...protocol (lab staffing, sampling and locating records, callback schedules) focused on completing the data collection for any given noise event within...relationship (e.g., increased feelings of importance of the installation tend to be associated with decreased annoyance). Due to the limited sample size only
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. The function takes an image sequence with unknown frame-to-frame jitter and computes the translation of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane, and are then used to correct the inter-frame jitter of the image sequence. In addition, the function dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement, which also increases its robustness to variable magnitudes of inter-frame jitter.
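As a rough illustration of the cross-correlation step (a Python sketch, not the NASA MATLAB code itself), the following measures integer-pixel inter-frame jitter by FFT phase correlation and undoes it; the sub-pixel refinement described above would fit the cross-power phase to a plane rather than taking the correlation peak.

```python
import numpy as np

def measure_jitter(ref, frame):
    """Estimate the (row, col) translation of `frame` relative to `ref`
    by FFT phase correlation (integer-pixel version)."""
    xpow = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    xpow /= np.abs(xpow) + 1e-12          # keep phase only
    corr = np.fft.ifft2(xpow).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    if dr > ref.shape[0] // 2:            # wrap to signed shifts
        dr -= ref.shape[0]
    if dc > ref.shape[1] // 2:
        dc -= ref.shape[1]
    return dr, dc

# Usage: shift a test image by (3, -5) and recover the jitter.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, (3, -5), axis=(0, 1))
dr, dc = measure_jitter(ref, frame)
corrected = np.roll(frame, (-dr, -dc), axis=(0, 1))  # undo the jitter
print(dr, dc)  # -> 3 -5
```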
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes of inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during testing, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper, along with a proposed improvement to the standard test protocol resulting from this evaluation. The improved protocol outlines an incremental-step method for determining optimal conditions using increased sample sizes while respecting test system safety limits. The proposed improved test method increases confidence in results obtained with the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Haynes, R J; Belyaeva, O N; Zhou, Y-F
2015-01-01
In order to better characterize mechanically shredded municipal green waste used for composting, five samples from different origins were separated into seven particle size fractions (>20 mm, 10-20 mm, 5-10 mm, 2-5 mm, 1-2 mm, 0.5-1.0 mm and <0.5 mm diameter) and analyzed for organic C and nutrient content. With decreasing particle size there was a decrease in organic C content and an increase in macronutrient, micronutrient and ash content. This reflected a concentration of lignified woody material in the larger particle fractions and of green stems and leaves and soil in the smaller particle sizes. The accumulation of nutrients in the smaller sized fractions means the practice of using large particle sizes for green fuel and/or mulch does not greatly affect nutrient cycling via green waste composting. During a 100-day incubation experiment using different particle size fractions of green waste, there was a marked increase in both cumulative CO2 evolution and mineral N accumulation with decreasing particle size. Results suggested that during composting of bulk green waste (with a high initial C/N ratio such as 50:1), mineral N accumulates because decomposition and net N immobilization in larger particles is slow while net N mineralization proceeds rapidly in the smaller (<1 mm dia.) fractions. Initially, mineral N accumulated in green waste as NH4(+)-N, but over time, nitrification proceeded, resulting in accumulation of NO3(-)-N. It was concluded that the nutrient content, N mineralization potential and decomposition rate of green waste differ greatly among particle size fractions and that chemical analysis of particle size fractions provides important additional information over that of a bulk sample. Copyright © 2014 Elsevier Ltd. All rights reserved.
van der Ham, Joris L.; de Mutsert, Kim
2014-01-01
The Deepwater Horizon oil spill impacted Louisiana's coastal estuaries physically, chemically, and biologically. To better understand the ecological consequences of this oil spill on Louisiana estuaries, we compared the abundance and size of two Gulf shrimp species (Farfantepenaeus aztecus and Litopenaeus setiferus) in heavily affected and relatively unaffected estuaries, before and after the oil spill. Two datasets were used: data on shrimp abundance and size before the spill were available from the Louisiana Department of Wildlife and Fisheries (LDWF), and data from after the spill were independently collected by the authors and by LDWF. Using a Before-After-Control-Impact with Paired sampling (BACIP) design with monthly samples from two selected basins, we found brown shrimp to become more abundant and the mean size of white shrimp to become smaller. Using a BACIP with data on successive shrimp year-classes from multiple basins, we found both species to become more abundant in basins affected by the spill, while mean shrimp size either did not change after the spill or increased in both affected and unaffected basins. We conclude that following the oil spill, abundances of both species increased within affected estuaries, whereas mean size may have been unaffected. We propose two factors that may have caused these results: (1) exposure to polycyclic aromatic hydrocarbons (PAHs) may have reduced the growth rate of shrimp, resulting in delayed movement of shrimp to offshore habitats and an increase in within-estuary shrimp abundance; and (2) fishing closures established immediately after the spill may have resulted in decreased fishing effort and an increase in shrimp abundance. This study accentuates the complexities in determining the ecological effects of oil spills, and the need for studies at the organismal level to reveal cause-and-effect relationships of such events. PMID:25272142
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
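A toy simulation (not the authors' data) can illustrate the mechanism they propose: if studies are published preferentially when they cross the significance threshold, a negative correlation between effect size and sample size emerges even when the two are generated independently. The threshold rule below is a rough approximation for a two-group design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 1000
N = rng.integers(20, 500, size=n_studies)          # total sample sizes
d = np.abs(rng.normal(0.2, 0.3, size=n_studies))   # observed effects, independent of N

# Rough two-group rule: significance at alpha = .05 needs |d| >~ 2*1.96/sqrt(N).
published = d > 3.92 / np.sqrt(N)

r_all = np.corrcoef(d, N)[0, 1]                    # ~0 by construction
r_pub = np.corrcoef(d[published], N[published])[0, 1]  # negative under selection
print(f"r(effect, N) all studies: {r_all:.2f}; published only: {r_pub:.2f}")
```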
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawellek, Nicole; Krivov, Alexander V.; Marshall, Jonathan P.
The radii of debris disks and the sizes of their dust grains are important tracers of the planetesimal formation mechanisms and physical processes operating in these systems. Here we use a representative sample of 34 debris disks resolved in various Herschel Space Observatory (Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA) programs to constrain the disk radii and the size distribution of their dust. While we modeled disks with both warm and cold components, and identified warm inner disks around about two-thirds of the stars, we focus our analysis only on the cold outer disks, i.e., Kuiper-belt analogs. We derive the disk radii from the resolved images and find a large dispersion for host stars of any spectral class, but no significant trend with the stellar luminosity. This argues against ice lines as a dominant player in setting the debris disk sizes, since the ice line location varies with the luminosity of the central star. Fixing the disk radii to those inferred from the resolved images, we model the spectral energy distribution to determine the dust temperature and the grain size distribution for each target. While the dust temperature systematically increases toward earlier spectral types, the ratio of the dust temperature to the blackbody temperature at the disk radius decreases with the stellar luminosity. This is explained by a clear trend of typical sizes increasing toward more luminous stars. The typical grain sizes are compared to the radiation pressure blowout limit s {sub blow} that is proportional to the stellar luminosity-to-mass ratio and thus also increases toward earlier spectral classes. The grain sizes in the disks of G- to A-stars are inferred to be several times s {sub blow} at all stellar luminosities, in agreement with collisional models of debris disks. The sizes, measured in the units of s {sub blow}, appear to decrease with the luminosity, which may be suggestive of the disk's stirring level increasing toward earlier-type stars. The dust opacity index β ranges between zero and two, and the size distribution index q varies between three and five for all the disks in the sample.
NASA Astrophysics Data System (ADS)
Yonatan Mulushoa, S.; Murali, N.; Tulu Wegayehu, M.; Margarette, S. J.; Samatha, K.
2018-03-01
Cu-Cr substituted magnesium ferrite materials (Mg1-xCuxCrxFe2-xO4 with x = 0.0-0.7) have been synthesized by the solid state reaction method. XRD analysis revealed that the prepared samples are single-phase cubic spinels (face centered cubic). A significant decrease of ∼41.15 nm in particle size is noted in response to the increase in Cu-Cr substitution level. The room temperature resistivity increases gradually from 0.553 × 10⁵ Ω cm (x = 0.0) to 0.105 × 10⁸ Ω cm (x = 0.7). The temperature-dependent DC electrical resistivity of all the samples exhibits semiconductor-like behavior. Cu-Cr doped materials can thus be suitable for limiting eddy current losses. VSM results show that both pure and doped magnesium ferrite particles exhibit soft ferrimagnetic behavior at room temperature. The saturation magnetization of the samples decreases from 34.5214 emu/g (x = 0.0) to 18.98 emu/g (x = 0.7). Saturation magnetization, remanence and coercivity all decrease with doping, which may be due to the increase in grain size.
Ciazela, Jakub; Siepak, Marcin
2016-06-01
We determined the Cd, Cr, Cu, Ni, Pb, and Zn concentrations in soil samples collected along the eight main outlet roads of Poznań. Samples were collected at distances of 1, 5, and 10 m from the roadway edges at depth intervals of 0-20 and 40-60 cm, and the metal content was determined in seven grain size fractions. The highest metal concentrations were observed in the smallest fraction (<0.063 mm) and were up to four times higher than those in the sand fractions. Soil Pb, Cu, and Zn (and to a lesser extent Ni, Cr, and Cd) were all elevated relative to the geochemical background. At most sampling sites, metal concentrations decreased with increasing distance from roadway edges and with increasing depth. In some locations, the accumulation of metals in soils appears to be strongly influenced by wind direction. Our findings should help in predicting the behavior of metals along outlet roads, which is important for assessing sources of further migration of heavy metals into groundwater, plants, and humans.
Scott, Frank I.; McConnell, Ryan A.; Lewis, Matthew E.; Lewis, James D.
2014-01-01
Background Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published research in gastroenterology from 1980 to 2010. Methods Three journals (Gastroenterology, Gut, and American Journal of Gastroenterology) were selected for evaluation given their continuous publication during the study period. Twenty original clinical articles were randomly selected from each journal from 1980, 1990, 2000, and 2010. Each article was assessed for topic studied, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, and reporting of statistical methods such as sample size calculations, p-values, confidence intervals, and advanced techniques such as bioinformatics or multivariate modeling. Research support with external funding was also recorded. Results A total of 240 articles were included in the study. From 1980 to 2010, there was a significant increase in analytic studies (p<0.001), clinical outcomes (p=0.003), median number of authors per article (p<0.001), multicenter collaboration (p<0.001), sample size (p<0.001), and external funding (p<0.001). There was significantly increased reporting of p-values (p=0.01), confidence intervals (p<0.001), and power calculations (p<0.001). There was also increased utilization of large multicenter databases (p=0.001), multivariate analyses (p<0.001), and bioinformatics techniques (p=0.001). Conclusions There has been a dramatic increase in complexity in clinical research related to gastroenterology and hepatology over the last three decades. This increase highlights the need for advanced training of clinical investigators to conduct future research. PMID:22475957
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Priya, E-mail: priyathakur1191@gmail.com; Thakur, Anjna; Yadav, Kamlesh, E-mail: kamlesh.yadav001@gmail.com
In this paper, (LaMnO{sub 3}){sub 1−x}/(TiO{sub 2}){sub x} (where x = 0.0, 0.1, 0.2, 0.3 and 0.4) nanocomposites are prepared by mixing LaMnO{sub 3} and TiO{sub 2} (Sigma Chemicals, particle size ∼21 nm) nanoparticles in the appropriate ratio. These samples were characterized using FESEM, EDS and FTIR to study the optical properties. The Field Emission Scanning Electron Microscopy (FESEM) image of the pure LaMnO{sub 3} sample shows a uniform particle size distribution; the average particle size of the LaMnO{sub 3} nanoparticles is 43 nm. The crystallite size increases from 16 to 24 nm with increasing weight percentage of TiO{sub 2} in the LaMnO{sub 3}/TiO{sub 2} nanocomposite up to x = 0.4. The Fourier transform infrared (FTIR) spectra show absorption peaks at 450 cm{sup −1} and 491 cm{sup −1}, which represent the Mn-O bending and Ti-O stretching modes, respectively. Broadening of these peaks with increasing TiO{sub 2} concentration is also observed, giving evidence for the formation of a metal-oxygen bond. The absorption band at 600 cm{sup −1} corresponds to the stretching mode, which indicates the perovskite phase present in the sample. The band gap values are found to be 2.1, 1.9, 1.5, 1.3 and 1.2 eV for x = 0.0, 0.1, 0.2, 0.3, and 0.4 respectively; thus, a decrease in band gap and an increase in refractive index with increasing TiO{sub 2} concentration is observed. These nanocomposites can be used in energy applications, in electrical devices, and as catalysts for photocatalytic processes, e.g. hydrogenation.
Invited paper: Dielectric properties of CaCu3Ti4O12 polycrystalline ceramics
NASA Astrophysics Data System (ADS)
Lee, Sung Yun; Hong, Youn Woo; Yoo, Sang Im
2011-12-01
We investigated the relationship between the microstructures and dielectric properties of various CaCu3Ti4O12 (CCTO) polycrystalline ceramics sintered in air. An abrupt increase in the dielectric constant (ɛr) at 1 kHz, from ˜3,000 to ˜170,000, occurred on increasing the sintering temperature from 980 to 1000°C (12 h hold), accompanied by a very large increase in the average grain size from 5 to 300 µm due to abnormal grain growth. With further increases in the sintering temperature, the ɛr value at 1 kHz decreased slightly to ˜150,000 at 1020°C with no variation in the average grain size, decreased significantly to ˜77,000 at 1040°C with a large decrease in the average grain size (˜150 µm), and then maintained values of ˜76,000 and ˜69,000 at 1060 and 1080°C, respectively, without noticeable variation in the average grain size. No abnormal grain growth occurred in the CCTO samples sintered at 980°C for holding times up to 24 h, and these samples showed relatively low ɛr values (< ˜4,000 at 1 kHz); in contrast, abnormal grain growth occurred after a certain holding time at sintering temperatures above 1000°C, and the ɛr values of those samples increased abruptly. Analyses by complex impedance (Z*) and modulus (M*) spectroscopy revealed that the ɛr values of the CCTO samples were dominated by the electrical properties of the grain boundaries, so that high ɛr values over 10,000 at 1 kHz were attributable to the high capacitance (C) of the grain boundary, in good agreement with the grain boundary internal barrier layer capacitor (IBLC) model.
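A minimal sketch of the IBLC reasoning invoked above: if semiconducting grains of size d are separated by thin insulating boundaries of thickness t, the apparent permittivity scales roughly as ɛ_gb·d/t, so larger grains give giant apparent dielectric constants. The boundary permittivity and thickness below are illustrative assumptions, not values from the paper.

```python
def iblc_effective_permittivity(eps_gb, grain_um, boundary_nm):
    """Series barrier-layer picture: apparent permittivity ~ eps_gb * d / t,
    with grain size d and insulating boundary thickness t."""
    return eps_gb * (grain_um * 1e3) / boundary_nm

# Hypothetical eps_gb = 50 and t = 100 nm reproduce the observed trend:
print(iblc_effective_permittivity(50, 300, 100))  # ~150,000 for 300-um grains
print(iblc_effective_permittivity(50, 5, 100))    # ~2,500 for 5-um grains
```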
A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water
Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo
2013-01-01
Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques, so understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed at the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones; in the other, less than 70% of the smallest pores were accessible to CD4, although they were still largely penetrable by water. This suggests that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on the pressure of CD4 allows an accurate estimation of the pressure at which the scattered intensity is at a minimum, providing information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned among the different constituents within the shale; there is some indication from the SANS results that the composition of the pore-containing material varies with pore size, with the pore size distribution associated with mineral matter differing from that associated with organic phases.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies, where a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, need to be addressed. Such inferential problems have recently emerged in genomic applications as well, exhibiting some peculiar features that make them more challenging: one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), of which only a small portion has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows one to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus is on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries, and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
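The paper's closed-form Bayesian nonparametric estimator is not reproduced here; as a frequentist point of reference for the m = 0 case, the classical Good-Turing estimator of the discovery probability is a two-liner. This is a deliberately simpler stand-in, not the estimator derived in the paper.

```python
from collections import Counter

def good_turing_new_species_probability(sample):
    """Good-Turing estimate of the probability that the next draw is a
    previously unseen species: P(new) ~ n1 / n, where n1 is the number
    of species observed exactly once among n observations."""
    counts = Counter(sample)
    n = sum(counts.values())
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# Nine observations with two singleton species ('c' and 'e') -> 2/9
print(good_turing_new_species_probability(list("aaabbcdde")))
```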
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters, as the sketch below shows. By illustrating how the parameters required for sample size calculations arise from the CRXO design, and by providing guidance on how to choose values for the parameters and perform the calculations, the implementation of the sample size formulae for CRXO trials may improve.
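The sketch below shows how the WPC and BPC enter a sample size calculation via a design effect, using one common formulation, DE = 1 + (m − 1)·WPC − m·BPC, applied to the standard two-sample formula. This is an illustrative simplification, not the full set of formulae discussed in the tutorial; note that BPC = 0 recovers the parallel-group cluster design effect, matching the remark above.

```python
import math
from scipy.stats import norm

def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.8):
    """Rough sample size sketch for a two-period, two-intervention,
    cross-sectional CRXO trial with a continuous outcome.
    delta: target difference; sd: outcome SD; m: subjects per cluster-period."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_total = 4 * (z * sd / delta) ** 2           # unadjusted total N (both arms)
    de = 1 + (m - 1) * wpc - m * bpc              # assumed CRXO design effect
    return math.ceil(n_total * de / (2 * m))      # each cluster contributes 2m obs

# Small changes in the correlations move the answer noticeably:
print(crxo_clusters(0.5, 1.0, m=50, wpc=0.05, bpc=0.04))  # -> 2 clusters
print(crxo_clusters(0.5, 1.0, m=50, wpc=0.05, bpc=0.01))  # -> 4 clusters
```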
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
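For readers who want to check such designs by simulation: SciPy's ttest_ind accepts a trim argument (SciPy >= 1.7) that performs Yuen's trimmed-mean test, so empirical power under unequal variances, non-normality, and an unequal allocation ratio can be estimated directly. The scenario parameters below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical scenario: skewed data with heterogeneous variances and an
# allocation ratio of 2:1 (as might arise from a cost/power trade-off).
n1, n2 = 60, 30
reps, alpha, hits = 2000, 0.05, 0

for _ in range(reps):
    a = rng.lognormal(mean=0.0, sigma=0.6, size=n1)   # group 1
    b = rng.lognormal(mean=0.3, sigma=1.0, size=n2)   # group 2, shifted
    # trim=0.2 gives Yuen's 20%-trimmed-mean test
    t, p = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
    hits += p < alpha

print(f"Empirical power at n = ({n1}, {n2}): {hits / reps:.2f}")
```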
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, from a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those required by alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
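None of VARS-TOOL's code is reproduced here, but the third metric family it reports (Morris-style elementary effects) is easy to sketch generically. The following one-at-a-time sampler and toy model are illustrative only, not the toolbox implementation:

```python
import numpy as np

def elementary_effects(model, bounds, r=50, delta=0.1, seed=0):
    """Mean absolute elementary effects (mu*) via one-at-a-time perturbations.

    bounds: list of (low, high) per parameter; r: number of base points;
    delta: perturbation as a fraction of each parameter's range.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)        # shape (k, 2)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        # sample base point, leaving room for the positive perturbation
        x = rng.uniform(bounds[:, 0], bounds[:, 1] - delta * np.ptp(bounds, axis=1))
        y0 = model(x)
        for j in range(k):
            xj = x.copy()
            xj[j] += delta * (bounds[j, 1] - bounds[j, 0])
            ee[i, j] = (model(xj) - y0) / delta     # unit-hypercube-scaled effect
    return np.abs(ee).mean(axis=0)                  # mu*: overall influence per parameter

# Toy model: strong x0 effect, weak x1, x0-x2 interaction
model = lambda x: 5 * x[0] + 0.5 * x[1] + x[0] * x[2]
print(elementary_effects(model, bounds=[(0, 1)] * 3))
```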
NASA Astrophysics Data System (ADS)
Milliere, L.; Maskasheva, K.; Laurent, C.; Despax, B.; Boudou, L.; Teyssedre, G.
2016-01-01
The aim of this work is to limit charge injection from a semi-conducting electrode into low-density polyethylene (LDPE) under dc field by tailoring the polymer surface with a silver-nanoparticle-containing layer. The layer is composed of a plane of silver nanoparticles embedded in a semi-insulating organosilicon matrix deposited on the polyethylene surface by a plasma process. The size, density and surface coverage of the nanoparticles are controlled through the plasma process. Space charge distribution in 300 μm thick LDPE samples is measured by the pulsed-electroacoustic technique following a short-term protocol (step-wise voltage increase up to 50 kV mm⁻¹, each step 20 min in duration, followed by a polarity inversion) and a longer-term protocol (up to 12 h under 40 kV mm⁻¹) for voltage application. A comparative study of space charge distribution between a reference polyethylene sample and the tailored samples is presented. It is shown that the barrier effect depends on the size distribution and the surface area covered by the nanoparticles: 15 nm (average size) silver nanoparticles with a high surface density, but still not percolating, form an efficient barrier layer that suppresses charge injection. It is worth noting that charge injection is detected for samples tailored with (i) percolating nanoparticles embedded in the organosilicon layer, (ii) the organosilicon layer only, without nanoparticles, and (iii) smaller silver particles (<10 nm) embedded in the organosilicon layer. The amount of injected charge in the tailored samples increases gradually across the sample ranking given above. The mechanism of charge injection mitigation is discussed on the basis of complementary experiments carried out on the nanocomposite layer, such as surface potential measurements. The ability of silver clusters to stabilize electrical charges close to the electrode, thereby counterbalancing the applied field, appears to be a key factor in explaining the charge injection mitigation effect.
Estimating individual glomerular volume in the human kidney: clinical perspectives
Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.
2012-01-01
Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554
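Lin's concordance coefficient, the reliability measure used above, is straightforward to compute. A minimal sketch with simulated glomerular volumes (the data and the 9-profile comparison below are toy stand-ins for the paper's jackknife analysis):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(0)
# Toy data: 30 subjects, 20 measured glomerular volumes each (log-normal)
volumes = rng.lognormal(mean=1.0, sigma=0.3, size=(30, 20))
full_mean = volumes.mean(axis=1)        # mean IGV from all profiles
nine_mean = volumes[:, :9].mean(axis=1) # mean IGV from only 9 profiles

print(f"RC with 9 sampled glomeruli: {lins_ccc(full_mean, nine_mean):.3f}")
```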
Graham, Simon; O'Connor, Catherine C; Morgan, Stephen; Chamberlain, Catherine; Hocking, Jane
2017-06-01
Background: Aboriginal and Torres Strait Islander people (Aboriginal) are Australia's first peoples. Between 2006 and 2015, HIV notifications increased among Aboriginal people, whereas among non-Aboriginal people notifications remained relatively stable. This systematic review and meta-analysis examines the prevalence of HIV among Aboriginal people overall and by subgroup. In November 2015, a search of PubMed and Web of Science, grey literature and conference abstracts was conducted. A study was included if it reported the number of Aboriginal people tested and those who tested positive for HIV. The following variables were extracted: gender; Aboriginal status; population group (men who have sex with men, people who inject drugs, adults, youth in detention and pregnant females) and geographical location. An assessment of between-study heterogeneity (I² test) and within-study bias (selection, measurement and sample size) was also conducted. Seven studies were included; all were cross-sectional designs. The overall sample size was 3772 and the prevalence of HIV was 0.1% (I² = 38.3%, P = 0.136). Five studies included convenience samples of people attending Australian Needle and Syringe Program Centres, clinics, hospitals and a youth detention centre, increasing the potential for selection bias. Four studies had a small sample size, decreasing the ability to report pooled estimates. The prevalence of HIV among Aboriginal people in Australia is low. Community-based programs that include both prevention messages for those at risk of infection and culturally appropriate clinical management and support for Aboriginal people living with HIV are needed to prevent HIV increasing among Aboriginal people.
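As a rough illustration of the quantities reported (a pooled prevalence and the I² heterogeneity statistic), here is a fixed-effect sketch with a continuity correction for zero counts; the per-study counts are hypothetical, chosen only so the totals echo the review's overall n of 3772, and the review's actual meta-analytic model may differ:

```python
import numpy as np
from scipy.stats import chi2

def pooled_prevalence(events, totals):
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)       # continuity correction for zero cells
    var = p * (1 - p) / totals
    w = 1.0 / var                             # inverse-variance weights
    p_pool = (w * p).sum() / w.sum()
    q = (w * (p - p_pool) ** 2).sum()         # Cochran's Q
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100         # I^2 heterogeneity (%)
    return p_pool, i2, chi2.sf(q, df)

# Hypothetical counts for 7 cross-sectional studies (totals sum to 3772)
prev, i2, p_het = pooled_prevalence([1, 0, 1, 0, 1, 0, 1],
                                    [900, 400, 700, 300, 600, 350, 522])
print(f"pooled prevalence = {prev:.4%}, I^2 = {i2:.1f}%, P_het = {p_het:.3f}")
```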
Effect of Microstructural Interfaces on the Mechanical Response of Crystalline Metallic Materials
NASA Astrophysics Data System (ADS)
Aitken, Zachary H.
Advances in nano-scale mechanical testing have brought about progress in the understanding of physical phenomena in materials and a measure of control in the fabrication of novel materials. In contrast to bulk materials, which display size-invariant mechanical properties, sub-micron metallic samples show a critical dependence on sample size. The strength of nano-scale single-crystalline metals is well described by a power-law function, σ ∝ D^(−n), where D is a critical sample size and n is an experimentally fitted positive exponent. This relationship is attributed to source-driven plasticity and reflects a strengthening as the decreasing sample size begins to limit the size and number of dislocation sources. A full understanding of this size dependence is complicated by the presence of microstructural features, such as interfaces, that can compete with the dominant dislocation-based deformation mechanisms. In this thesis, the effects of microstructural features such as grain boundaries and anisotropic crystallinity on nano-scale metals are investigated through uniaxial compression testing. We find that nano-sized Cu covered by a hard coating displays a Bauschinger effect, and the emergence of this behavior can be explained through a simple dislocation-based analytic model. Al nano-pillars containing a single vertically oriented coincident-site-lattice grain boundary are found to deform similarly to single-crystalline nano-pillars, with slip traces passing through the grain boundary. With increasing tilt angle of the grain boundary from the pillar axis, we observe a transition from dislocation-dominated deformation to grain boundary sliding. Crystallites are observed to shear along the grain boundary, and molecular dynamics simulations reveal a mechanism of atomic migration that accommodates boundary sliding. We conclude with an analysis of the effects of inherent crystal anisotropy and alloying on the mechanical behavior of the Mg alloy AZ31. Through comparison to pure Mg, we show that the size effect dominates the strength of samples below 10 μm and that differences in the size effect between hexagonal slip systems are due to the inherent crystal anisotropy, suggesting that the fundamental mechanism of the size effect in these slip systems is the same.
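The size-effect exponent n in σ ∝ D^(−n) is typically obtained from a log-log fit of strength against sample diameter. A sketch with made-up pillar-compression data:

```python
import numpy as np

# Toy data: strength sigma (MPa) vs pillar diameter D (nm)
D = np.array([200, 400, 800, 1600, 3200], dtype=float)
sigma = np.array([900, 610, 410, 280, 190], dtype=float)

# sigma = A * D**(-n)  =>  log(sigma) = log(A) - n*log(D): linear in log-log space
slope, intercept = np.polyfit(np.log(D), np.log(sigma), 1)
n_exp, A = -slope, np.exp(intercept)
print(f"size-effect exponent n ~ {n_exp:.2f}, prefactor A ~ {A:.0f}")
```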
Sampling scales define occupancy and underlying occupancy-abundance relationships in animals.
Steenweg, Robin; Hebblewhite, Mark; Whittington, Jesse; Lukacs, Paul; McKelvey, Kevin
2018-01-01
Occupancy-abundance (OA) relationships are a foundational ecological phenomenon and field of study, and occupancy models are increasingly used to track population trends and understand ecological interactions. However, these two fields of ecological inquiry remain largely isolated, despite growing appreciation of the importance of integration. For example, using occupancy models to infer trends in abundance is predicated on positive OA relationships. Many occupancy studies collect data that violate geographical closure assumptions due to the choice of sampling scales and application to mobile organisms, which may change how occupancy and abundance are related. Little research, however, has explored how different occupancy sampling designs affect OA relationships. We develop a conceptual framework for understanding how sampling scales affect the definition of occupancy for mobile organisms, which drives OA relationships. We explore how spatial and temporal sampling scales, and the choice of sampling unit (areal vs. point sampling), affect OA relationships. We develop predictions using simulations, and test them using empirical occupancy data from remote cameras on 11 medium-large mammals. Surprisingly, our simulations demonstrate that when using point sampling, OA relationships are unaffected by spatial sampling grain (i.e., cell size). In contrast, when using areal sampling (e.g., species atlas data), OA relationships are affected by spatial grain. Furthermore, OA relationships are also affected by temporal sampling scales, where the curvature of the OA relationship increases with temporal sampling duration. Our empirical results support these predictions, showing that at any given abundance, the spatial grain of point sampling does not affect occupancy estimates, but longer surveys do increase occupancy estimates. For rare species (low occupancy), estimates of occupancy will quickly increase with longer surveys, even while abundance remains constant. Our results also clearly demonstrate that occupancy for mobile species without geographical closure is not true occupancy. The independence of occupancy estimates from spatial sampling grain depends on the sampling unit. Point-sampling surveys can, however, provide unbiased estimates of occupancy for multiple species simultaneously, irrespective of home-range size. The use of occupancy for trend monitoring needs to explicitly articulate how the chosen sampling scales define occupancy and affect the occupancy-abundance relationship. © 2017 by the Ecological Society of America.
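The point-versus-areal contrast can be reproduced in a toy simulation: place animals with circular home ranges on a unit landscape, then score occupancy either at grid-cell centres (camera-like point sampling) or by animal centres per cell (atlas-like areal sampling). This is a sketch of the conceptual argument, not the paper's simulation code:

```python
import numpy as np

rng = np.random.default_rng(42)

def occupancy(n_animals, cell_size, hr_radius=0.05, mode="point"):
    """Occupancy on a unit landscape.

    point: a cell is 'occupied' if any home range covers the cell centre
           (independent of cell size).
    areal: a cell is 'occupied' if any animal centre falls inside it
           (depends on cell size).
    """
    centres = rng.uniform(0, 1, size=(n_animals, 2))
    edges = np.arange(0, 1 + cell_size, cell_size)
    mids = (edges[:-1] + edges[1:]) / 2
    gx, gy = np.meshgrid(mids, mids)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    if mode == "point":
        d = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        occ = (d < hr_radius).any(axis=1)
    else:
        ix = np.minimum((centres / cell_size).astype(int), len(mids) - 1)
        grid = np.zeros((len(mids), len(mids)), bool)
        grid[ix[:, 1], ix[:, 0]] = True
        occ = grid.ravel()
    return occ.mean()

for cs in (0.05, 0.1, 0.2):
    print(f"cell={cs}: point={occupancy(200, cs):.2f}, "
          f"areal={occupancy(200, cs, mode='areal'):.2f}")
# Point-sampling occupancy stays roughly constant across cell sizes;
# areal-sampling occupancy climbs steeply as cells grow.
```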
Effect of MeV electron irradiation on the free volume of polyimide
NASA Astrophysics Data System (ADS)
Alegaonkar, P. S.; Bhoraskar, V. N.
2004-08-01
The free volume of the microvoids in polyimide samples irradiated with 6 MeV electrons was measured by the positron annihilation technique. The free volume initially decreased from the virgin value of ~13.70 to ~10.98 Å³ and then increased to ~18.11 Å³ with increasing electron fluence over the range 5 × 10¹⁴ to 5 × 10¹⁵ e/cm². The evolution of gaseous species from the polyimide during electron irradiation was confirmed by the residual gas analysis technique. Polyimide samples irradiated with 6 MeV electrons in AgNO3 solution were studied with the Rutherford backscattering technique. The diffusion of silver in these polyimide samples was observed for fluences >2 × 10¹⁵ e/cm², at which microvoids of size ≥3 Å are produced. Silver atoms did not diffuse in polyimide samples that were first irradiated with electrons and then immersed in AgNO3 solution. These results indicate that during electron irradiation, microvoids of size ≥3 Å were retained in the surface region, through which silver atoms of size ~2.88 Å could diffuse into the polyimide. The average depth of diffusion of silver atoms in the polyimide was ~2.5 μm.
Methods to increase reproducibility in differential gene expression via meta-analysis
Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh
2017-01-01
Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Owing to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis, and clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) against a 'silver standard' of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We found relatively greater reproducibility with more-stringent effect size thresholds combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets, even when controlling for sample size. PMID:27634930
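The Benjamini–Hochberg step-up procedure whose false-positive behaviour the study examined is short enough to sketch directly:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of discoveries at FDR level q: the largest k
    such that the k-th smallest p-value satisfies p_(k) <= (k/m) * q,
    and everything ranked at or below it, is declared significant.
    """
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, bool)
    mask[order[:k]] = True
    return mask

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))  # only the first two survive
```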
Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arciprete, F.; Fanfoni, M.; Patella, F.
2010-04-15
We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 °C with an InAs coverage slightly greater than critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm³) while annealing at temperatures greater than 420 °C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limited by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 °C the island size distribution is strongly affected by In desorption.
Room-temperature processing of CdSe quantum dots with tunable sizes
NASA Astrophysics Data System (ADS)
Joo, So-Yeong; Jeong, Da-Woon; Lee, Chan-Gi; Kim, Bum-Sung; Park, Hyun-Su; Kim, Woo-Byoung
2017-06-01
In this work, CdSe quantum dots (QDs) with tunable sizes have been fabricated via photo-induced chemical etching at room temperature, and the related reaction mechanism was investigated. The surface of QDs was oxidized by the holes generated through photon irradiation of oxygen species, and the obtained oxide layer was dissolved in an aqueous solution of 3-amino-1-propanol (APOL) with an APOL:H2O volume ratio of 5:1. The generated electrons promoted QD surface interactions with amino groups, which ultimately passivated surface defects. The absorption and photoluminescence emission peaks of the produced QDs were clearly blue-shifted about 26 nm with increasing time, and the resulting quantum yield for an 8 h etched sample was increased from 20% to 26%, as compared to the initial sample.
Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.
Kiermeier, Andreas; Sumner, John; Jenson, Ian
2015-07-01
Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
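The diminishing return from larger sampling plans follows from simple arithmetic: if a fraction p of units from a contaminated lot would test positive, the chance that n independent samples detect the lot is 1 − (1 − p)^n. The value of p below is illustrative only; the cited risk assessment models far more detail (serving size, cooking, dose-response):

```python
# Probability that at least one of n independent samples detects E. coli O157
# in a contaminated lot, given a per-sample positive rate p.
def p_detect(n, p):
    return 1 - (1 - p) ** n

for n in (60, 90, 120):
    print(f"N-{n}: detection probability at p = 1% -> {p_detect(n, 0.01):.2f}")
# N-60 -> 0.45, N-90 -> 0.60, N-120 -> 0.70: each increment buys less,
# consistent with the small predicted public health effect.
```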
USDA-ARS?s Scientific Manuscript database
The prevalence of Nosema ceranae in managed honey bee colonies has increased dramatically in the past 10 – 20 years worldwide. A variety of genetic testing methods for species identification and prevalence are now available. However sample size and preservation method of samples prior to testing hav...
Makinster, Andrew S.; Persons, William R.; Avery, Luke A.
2011-01-01
The Lees Ferry reach of the Colorado River, a 25-kilometer segment of river located immediately downstream from Glen Canyon Dam, has contained a nonnative rainbow trout (Oncorhynchus mykiss) sport fishery since it was first stocked in 1964. The fishery has evolved over time in response to changes in dam operations and fish management. Long-term monitoring of the rainbow trout population downstream of Glen Canyon Dam is an essential component of the Glen Canyon Dam Adaptive Management Program. A standardized sampling design was implemented in 1991 and has changed several times in response to independent, external scientific-review recommendations and budget constraints. Population metrics (catch per unit effort, proportional stock density, and relative condition) were estimated from 1991 to 2009 by combining data collected at fixed sampling sites during this time period and at random sampling sites from 2002 to 2009. The validity of combining population metrics for data collected at fixed and random sites was confirmed by a one-way analysis of variance by fish-length class size. Analysis of the rainbow trout population metrics from 1991 to 2009 showed that the abundance of rainbow trout increased from 1991 to 1997, following implementation of a more steady flow regime, but declined from about 2000 to 2007. Abundance in 2008 and 2009 was high compared to previous years, which was likely the result of increased early survival caused by improved habitat conditions following the 2008 high-flow experiment at Glen Canyon Dam. Proportional stock density declined between 1991 and 2006, reflecting increased natural reproduction and large numbers of small fish in samples. Since 2001, the proportional stock density has been relatively stable. Relative condition varied with size class of rainbow trout but has been relatively stable since 1991 for fish smaller than 152 millimeters (mm), except for a substantial decrease in 2009. Relative condition was more variable for larger size classes, and substantial decreases were observed for the 152-304-mm size class in 2009 and 305-405-mm size class in 2008 that persisted into 2009.
Visual context processing deficits in schizophrenia: effects of deafness and disorganization.
Horton, Heather K; Silverstein, Steven M
2011-07-01
Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon where the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and were thus less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (i.e., a more normal illusion effect, as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implications of these data are that perceptual organization impairment in schizophrenia is plastic and that it is related to a broader failure in coordinating cognitive activity.
NASA Astrophysics Data System (ADS)
Singleton, Adrian A.; Schmidt, Amanda H.; Bierman, Paul R.; Rood, Dylan H.; Neilson, Thomas B.; Greene, Emily Sophie; Bower, Jennifer A.; Perdrial, Nicolas
2017-01-01
Grain-size dependencies in fallout radionuclide activity have been attributed to either increase in specific surface area in finer grain sizes or differing mineralogical abundances in different grain sizes. Here, we consider a third possibility, that the concentration and composition of grain coatings, where fallout radionuclides reside, controls their activity in fluvial sediment. We evaluated these three possible explanations in two experiments: (1) we examined the effect of sediment grain size, mineralogy, and composition of the acid-extractable materials on the distribution of 7Be, 10Be, 137Cs, and unsupported 210Pb in detrital sediment samples collected from rivers in China and the United States, and (2) we periodically monitored 7Be, 137Cs, and 210Pb retention in samples of known composition exposed to natural fallout in Ohio, USA for 294 days. Acid-extractable materials (made up predominately of Fe, Mn, Al, and Ca from secondary minerals and grain coatings produced during pedogenesis) are positively related to the abundance of fallout radionuclides in our sediment samples. Grain-size dependency of fallout radionuclide concentrations was significant in detrital sediment samples, but not in samples exposed to fallout under controlled conditions. Mineralogy had a large effect on 7Be and 210Pb retention in samples exposed to fallout, suggesting that sieving sediments to a single grain size or using specific surface area-based correction terms may not completely control for preferential distribution of these nuclides. We conclude that time-dependent geochemical, pedogenic, and sedimentary processes together result in the observed differences in nuclide distribution between different grain sizes and substrate compositions. These findings likely explain variability of measured nuclide activities in river networks that exceeds the variability introduced by analytical techniques as well as spatial and temporal differences in erosion rates and processes. In short, we suggest that presence and amount of pedogenic grain coatings is more important than either specific surface area or surface charge in setting the distribution of fallout radionuclides.
Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study
Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.
2018-01-01
Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast-matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast-matching compositions of H2O/D2O and toluene/deuterated toluene were used to probe the open and closed pores of the three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than that of the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures shows that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70-80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5-80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids occurs at a pore throat radius of ~25 nm, with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), the fraction of pores accessible to water increases again to approximately 70-80% in all samples. Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage and production and hydraulic fracturing characteristics.
NASA Astrophysics Data System (ADS)
Majzoobi, G. H.; Rahmani, K.; Atrian, A.
2018-01-01
In this paper, dynamic compaction with a mechanical drop hammer is employed to produce Mg-SiC nanocomposite samples. Different volume fractions of SiC nano-reinforcement and micron-size magnesium (Mg) powder as the matrix are mechanically milled and consolidated at different temperatures. It is found that with increasing temperature the sintering requirements are satisfied and higher-quality samples are fabricated. The density, hardness, compressive strength and wear resistance of the compacted specimens are characterized in this work. It is found that with increasing content of nano-reinforcement, the relative density of the compacted samples decreases, whereas the micro-hardness and strength of the samples increase. Furthermore, higher densification temperatures lead to increased density and reduced hardness. Additionally, the wear rate of the nanocomposite increases remarkably with increasing SiC nano-reinforcement.
Donegan, Thomas M.
2018-01-01
Existing models for assigning species, subspecies, or no taxonomic rank to populations that are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model, which aims to test diagnosability (Isler et al. 1998), is highly conservative, applying a hard cut-off that excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure that penalizes increases to sample size. The "species scoring" model of Tobias et al. (2010) involves less drastic cut-offs but, unlike Isler et al. (1998), does not control adequately for sample size and in many cases attributes scores to differentiation that is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and either controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weight to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. The distance between any two populations is calculated by Euclidean summation of the non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and made available that allows this and the other tests of differentiation and rank studied in this paper to be rapidly applied. PMID:29780266
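A sketch of the proposed ranking formula as we read it: per variable, a Welch test (unpooled standard deviations, t-distribution) gates significance, non-significant variables score zero, and population distance is the Euclidean sum of the surviving effect sizes. The variables, data, and the exact small-sample effect-size correction are our assumptions, not the paper's spreadsheet:

```python
import numpy as np
from scipy import stats

def pairwise_divergence(pop_a, pop_b, alpha=0.05):
    """Euclidean summation of significance-gated effect sizes.

    pop_a, pop_b: dicts mapping variable name -> 1-D array of measurements.
    """
    scores = []
    for var in pop_a:
        a = np.asarray(pop_a[var], float)
        b = np.asarray(pop_b[var], float)
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch: unpooled SDs
        if p >= alpha:
            scores.append(0.0)                          # gate: no weight if n.s.
            continue
        # unpooled-SD effect size (average-variance form)
        d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        scores.append(d)
    return np.sqrt(np.sum(np.square(scores)))           # Euclidean distance

rng = np.random.default_rng(7)
pop1 = {"wing": rng.normal(60, 3, 12), "song_pitch": rng.normal(4.1, 0.4, 12)}
pop2 = {"wing": rng.normal(64, 3, 10), "song_pitch": rng.normal(4.2, 0.4, 10)}
print(f"divergence score: {pairwise_divergence(pop1, pop2):.2f}")
```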
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of the mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of the number of points visited, the number of visits to each point, and the interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ±25 percent of the mean could be achieved with five counts per factor level. The sample size sufficient to detect actual differences for the Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among numbers of points visited and among numbers of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
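A ballpark check of such minimum sample sizes can be made with the standard two-group z approximation, n per level ≈ 2(z₁₋α/₂ + z₁₋β)²·MSE/δ². A sketch using the reported MSE for total individuals; the detectable difference δ below is hypothetical, not taken from the paper:

```python
from scipy.stats import norm

def min_n_per_level(mse, delta, alpha=0.05, power=0.8):
    """Counts per factor level to detect a mean difference `delta` between two
    levels, given residual variance `mse` (two-sided z approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 * mse / delta ** 2

# MSE = 9.28 (total individuals, as reported); delta = 2.4 birds is illustrative
print(round(min_n_per_level(mse=9.28, delta=2.4)))  # ~25 counts per level
```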
Effects of biochar amendment on geotechnical properties of landfill cover soil.
Reddy, Krishna R; Yaghoubi, Poupak; Yukselen-Aksoy, Yeliz
2015-06-01
Biochar is a carbon-rich product obtained when plant-based biomass is heated in a closed container with little or no available oxygen. Biochar-amended soil has the potential to serve as a landfill cover material that can oxidise methane emissions for two reasons: biochar amendment can increase the methane retention time and also enhance the biological activity that can promote the methanotrophic oxidation of methane. Hydraulic conductivity, compressibility and shear strength are the most important geotechnical properties that are required for the design of effective and stable landfill cover systems, but no studies have been reported on these properties for biochar-amended landfill cover soils. This article presents physicochemical and geotechnical properties of a biochar, a landfill cover soil and biochar-amended soils. Specifically, the effects of amending 5%, 10% and 20% biochar (of different particle sizes as produced, size-20 and size-40) to soil on its physicochemical properties, such as moisture content, organic content, specific gravity and pH, as well as geotechnical properties, such as hydraulic conductivity, compressibility and shear strength, were determined from laboratory testing. Soil or biochar samples were prepared by mixing them with 20% deionised water based on dry weight. Samples of soil amended with 5%, 10% and 20% biochar (w/w) as-is or of different select sizes, were also prepared at 20% initial moisture content. The results show that the hydraulic conductivity of the soil increases, compressibility of the soil decreases and shear strength of the soil increases with an increase in the biochar amendment, and with a decrease in biochar particle size. Overall, the study revealed that biochar-amended soils can possess excellent geotechnical properties to serve as stable landfill cover materials. © The Author(s) 2015.
Experimental study on microsphere assisted nanoscope in non-contact mode
NASA Astrophysics Data System (ADS)
Ling, Jinzhong; Li, Dancui; Liu, Xin; Wang, Xiaorui
2018-07-01
A microsphere-assisted nanoscope has been proposed in the existing literature to capture super-resolution images of nano-structures beneath a microsphere attached to the sample surface. In this paper, a microsphere-assisted nanoscope working in non-contact mode is designed and demonstrated, in which the microsphere is held with a gap separating it from the sample surface. With a gap, the microsphere can be moved parallel to the sample surface non-invasively, so as to observe all areas of interest. Furthermore, the influence of gap size on image resolution is studied experimentally. Only when the microsphere is close enough to the sample surface can a super-resolution image be obtained. Generally, the resolution decreases as the gap increases, because the contribution of the evanescent wave disappears. To keep an appropriate gap size, a quantitative method is implemented to estimate the gap variation by observing Newton's rings around the microsphere, serving as real-time feedback for tuning the gap size. With a constant gap, a large-area image with high resolution can be obtained during microsphere scanning. Our study of the non-contact mode makes the microsphere-assisted nanoscope more practicable and easier to implement.
The effect of sample holder material on ion mobility spectrometry reproducibility
NASA Technical Reports Server (NTRS)
Jadamec, J. Richard; Su, Chih-Wu; Rigdon, Stephen; Norwood, Lavan
1995-01-01
When a positive detection of a narcotic occurs during the search of a vessel, a decision has to be made whether a further intensive search is warranted. This decision is based in part on the results of a second sample collected from the same area. Therefore, the reproducibility of both sampling and instrumental analysis is critical in terms of justifying an in-depth search. As reported at the 2nd Annual IMS Conference in Quebec City, the U.S. Coast Guard has determined that when paper is utilized as the sample desorption medium for the Barringer IONSCAN, the analytical results using standard reference samples are reproducible. A study was conducted utilizing papers of varying pore sizes and comparing their performance as a desorption material relative to the standard Barringer 50-micron Teflon. Nominal pore sizes ranged from 30 microns down to 2 microns. Results indicate that there is some peak instability in the first two to three windows during the analysis. The severity of the instability was observed to increase as the pore size of the paper decreased. However, the observed peak instability does not create a situation that results in decreased reliability or reproducibility of the analytical result.
Solution and Aging of MAR-M246 Nickel-Based Superalloy
NASA Astrophysics Data System (ADS)
Baldan, Renato; da Silva, Antonio Augusto Araújo Pinto; Nunes, Carlos Angelo; Couto, Antonio Augusto; Gabriel, Sinara Borborema; Alkmin, Luciano Braga
2017-02-01
Solution and aging heat treatments play a key role in the application of superalloys. The aim of this work is to evaluate the microstructure of the MAR-M246 nickel-based superalloy solutioned at 1200 and 1250 °C for 330 min and aged at 780, 880 and 980 °C for 5, 20 and 80 h. The γ' solvus, solidus and liquidus temperatures were calculated with the aid of the JMatPro software (Ni database). The as-cast and heat-treated samples were characterized by SEM/EDS and SEM-FEG. The size of the γ' precipitates in the aged samples was measured and compared with JMatPro simulations. The results show that the sample solutioned at 1250 °C for 330 min exhibited a very homogeneous γ matrix with carbides and cubic γ' precipitates uniformly distributed. The mean γ' size of samples aged at 780 and 880 °C for 5, 20 and 80 h did not differ significantly from that of the solutioned sample. However, significant growth of the γ' particles was observed at 980 °C, evidenced by the large mean size of these particles after 80 h of aging.
Zhang, Renlin; Kook, Sanghoon
2014-07-15
The current understanding of soot particle morphology in diesel engines and its dependency on fuel injection timing and pressure is limited to particles sampled from the exhaust. In this study, thermophoretic sampling and subsequent transmission electron microscope (TEM) imaging were applied to the in-flame soot particles inside the cylinder of a working diesel engine for various fuel injection timings and pressures. The results show that the number count of soot particles per image decreases by more than 80% when the injection timing is retarded from -12 to -2 crank angle degrees after top dead center. The late injection also results in over 90% reduction of the projected area of soot particles on the TEM image, and the soot aggregates become smaller. The primary particle size, however, is found to be insensitive to variations in fuel injection timing. For injection pressure variations, both the size of primary particles and that of soot aggregates are found to decrease with increasing injection pressure, demonstrating the benefits of high injection velocity and momentum. Detailed analysis shows that the number count of soot particles per image increases with increasing injection pressure up to 130 MPa, primarily due to the increased number of small aggregates that are less than 40 nm in radius of gyration. The fractal dimension shows an overall decrease with increasing injection pressure. However, in one case the fractal dimension shows an unexpected increase between 100 and 130 MPa injection pressure, because the small aggregates with more compact, agglomerated structures outnumber the large aggregates with more stretched, chain-like structures.
Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.
1986-04-01
analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis
The change of family size and structure in China.
1992-04-01
With socioeconomic development and the change in people's values, there has been significant change in family size and structure in China. According to the 10% sample data from the 4th Census, the average family has 3.97 persons, 0.44 fewer than at the 3rd Census; among all types of families, 1-generation families account for 13.5%, 3-generation families for 18.5%, and 2-generation families for 68%. Instead of large families consisting of several generations and many members, small families have now become the principal family type in China. According to analysis of the sample data from the 4th Census, family size is mainly decided by the fertility level in particular regions, and it also depends on economic development. Family size is therefore usually smaller in more developed regions, such as Beijing, Tianjin, Zhejiang, and Liaoning, as well as Shanghai, where family size is only 3.08 persons; it is generally larger in less developed regions such as Qinghai, Guangxi, Gansu, Xinjiang, and Tibet, where family size is as large as 5.13 persons. Specialists regard the increase in the number of families as one of the major consequences of economic development, changing lifestyles, and improved living standards; young people now are more inclined to live separately from their parents. However, the increase in the number of families will undoubtedly place more pressure on housing and require more furniture and other durable consumer goods from the market. Therefore, the government and related social sectors should make corresponding plans and policies to cope with the increase in families and the shrinking of family size, so as to promote family planning and socioeconomic development and to create better social circumstances for small families.
Aggregate distribution and associated organic carbon influenced by cover crops
NASA Astrophysics Data System (ADS)
Barquero, Irene; García-González, Irene; Benito, Marta; Gabriel, Jose Luis; Quemada, Miguel; Hontoria, Chiquinquirá
2013-04-01
Replacing fallow with cover crops during the non-cropping period seems to be a good alternative to diminish soil degradation by enhancing soil aggregation and increasing organic carbon. The aim of this study was to analyze the effect of replacing fallow with different winter cover crops (CC) on the aggregate distribution and associated organic carbon of a Haplic Calcisol. The study area was located in Central Spain, under a semi-arid Mediterranean climate. A 4-year field trial was conducted using barley (Hordeum vulgare L.) and vetch (Vicia sativa L.) as CC during the intercropping period of maize (Zea mays L.) under irrigation. All treatments were equally irrigated and fertilized. Maize was directly sown over CC residues previously killed in early spring. Composite samples were collected at 0-5 and 5-20 cm depths in each treatment in autumn 2010. Soil samples were separated by wet sieving into four aggregate-size classes: large macroaggregates (>2000 µm); small macroaggregates (250-2000 µm); microaggregates (53-250 µm); and <53 µm (silt + clay size). Organic carbon associated with each aggregate-size class was measured by the Walkley-Black method. Our preliminary results showed that the aggregate-size distribution was dominated by microaggregates (48-53%) and the <53 µm fraction (40-44%), resulting in a low mean weight diameter (MWD). Both cover crops increased aggregate size, resulting in a higher MWD (0.28 mm) in comparison with fallow (0.20 mm) in the 0-5 cm layer. Barley showed a higher MWD than fallow also in the 5-20 cm layer. Organic carbon concentrations in aggregate-size classes at the top layer followed the order: large macroaggregates > small macroaggregates > microaggregates > silt + clay size. Treatments did not influence C concentration in aggregate-size classes. In conclusion, cover crops improved soil structure by increasing the proportion of macroaggregates and the MWD, with barley more effective than vetch at the subsurface layer.
The effects of pore structure on the behavior of water in lignite coal and activated carbon.
Nwaka, Daniel; Tahmasebi, Arash; Tian, Lu; Yu, Jianglong
2016-09-01
The effects of physical structure (pore structure) on behavior of water in lignite coal and activated carbon (AC) samples were investigated by using Differential Scanning Calorimetry (DSC) and low-temperature X-ray diffraction (XRD) techniques. AC samples with different pore structures were prepared at 800°C in steam and the results were compared with that of parent lignite coal. The DSC results confirmed the presence of two types of freezable water that freeze at -8°C (free water) and -42°C (freezable bound water). A shift in peak position of free water (FW) towards lower temperature was observed in AC samples compared to the lignite coal with decreasing water loading. The amount of free water (FW) increased with increasing gasification conversion. The amounts of free and freezable bound water (FBW) in AC samples were calculated and correlated to pore volume and average pore size. The amount of FW in AC samples is well correlated to the pore volume and average pore size of the samples, while an opposite trend was observed for FBW. The low-temperature XRD analysis confirmed the existence of non-freezable water (NFW) in coal and AC with the boundary between the freezable and non-freezable water (NFW) determined. Copyright © 2016 Elsevier Inc. All rights reserved.
Fahl Mar, Kaysee; Schilling, Joshua; Brown, Walter A.
2018-01-01
Background: Recent studies show that placebo response has grown significantly over time in clinical trials for antidepressants, ADHD medications, antiepileptics, and antidiabetics. Contrary to expectations, trial outcome measures and success rates have not been impacted. This study aimed to determine whether this trend of increasing placebo response and stable efficacy outcome measures is unique to the conditions previously studied or whether it also occurs in trials for conditions with physiologically measured symptoms, such as hypertension. Method: We evaluated the efficacy data reported in the US Food and Drug Administration Medical and Statistical reviews for 23 antihypertensive programs (32,022 patients, 63 trials, 142 treatment arms). Placebo and medication response, effect sizes, and drug-placebo differences were calculated for each treatment arm and examined over time using meta-regression. We also explored the relationship of sample size, trial duration, baseline blood pressure, and number of treatment arms to placebo/drug response and efficacy outcome measures. Results: As in trials of other conditions, placebo response has risen significantly over time (R² = 0.093, p = 0.018), while effect size (R² = 0.013, p = 0.187), drug-placebo difference (R² = 0.013, p = 0.182), and success rate (134/142, 94.4%) have remained unaffected, likely due to a significant compensatory increase in antihypertensive response (R² = 0.086, p < 0.001). Treatment arms are likely overpowered, with sample sizes increasing over time (R² = 0.387, p < 0.0001) and stable, large effect sizes (0.78 ± 0.37). The exploratory analysis of sample size, trial duration, baseline blood pressure, and number of treatment arms yielded mixed results unlikely to explain the pattern of placebo response and efficacy outcomes over time. The magnitude of placebo response had no relationship to effect size (p = 0.877), antihypertensive-placebo differences (p = 0.752), or p-values (p = 0.963), but was correlated with antihypertensive response (R² = 0.347, p < 0.0001). Conclusions: As hypothesized, this study shows that placebo response is increasing in clinical trials for hypertension without any evidence that this increase impacts trial outcomes. Attempting to control placebo response in clinical trials for hypertension may not be necessary for successful efficacy outcomes. In exploratory analysis, we noted that despite significant relationships, none of the trial or patient characteristics we examined offered a clear explanation of the rise in placebo response and the stability of outcome measures over time. Collectively, these data suggest that the phenomenon of increasing placebo response and stable efficacy outcomes may be a general trend, occurring across trials for various psychiatric and medical conditions with physiological and non-physiological endpoints. PMID:29489874
Impacts on seafloor geology of drilling disturbance in shallow waters.
Corrêa, Iran C S; Toldo, Elírio E; Toledo, Felipe A L
2010-08-01
This paper describes the effects of drilling disturbance on the seafloor of the upper continental slope of the Campos Basin, Brazil, as a result of the project Environmental Monitoring of Offshore Drilling for Petroleum Exploration--MAPEM. Field sampling was carried out surrounding wells, operated by the company PETROBRAS, to compare sediment properties of the seafloor, including grain-size distribution, total organic carbon, and clay mineral composition, prior to drilling with samples obtained 3 and 22 months after drilling. The sampling grid used had 74 stations, 68 of which were located along 7 radials from the well up to a distance of 500 m. The other 6 stations were used as reference, and were located 2,500 m from the well. The results show no significant sedimentological variation in the area affected by drilling activity. The observed sedimentological changes include a fining of grain size, increase in total organic carbon, an increase in gibbsite, illite, and smectite, and a decrease in kaolinite after drilling took place.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the Japanese Journal of Psychology in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even negligible effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
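Sample (post hoc) power of the kind surveyed here can be computed with statsmodels; the effect sizes and group sizes below are hypothetical:

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-group t-test given the observed (sample) effect size.
analysis = TTestIndPower()
for d, n in [(0.8, 15), (0.5, 15), (0.2, 200)]:
    pw = analysis.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
    print(f"d = {d}, n per group = {n}: power = {pw:.2f}")
# Small samples leave even large effects underpowered, while very large
# samples make tiny effects detectable -- the pattern the survey describes.
```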
Transport of dissolved organic matter in Boom Clay: Size effects
NASA Astrophysics Data System (ADS)
Durce, D.; Aertsens, M.; Jacques, D.; Maes, N.; Van Gompel, M.
2018-01-01
A coupled experimental-modelling approach was developed to evaluate the effects of molecular weight (MW) of dissolved organic matter (DOM) on its transport through intact Boom Clay (BC) samples. Natural DOM was sampled in situ in the BC layer. Transport was investigated with percolation experiments on 1.5 cm BC samples by measuring the outflow MW distribution (MWD) by size exclusion chromatography (SEC). A one-dimensional reactive transport model was developed to account for retardation, diffusion and entrapment (attachment and/or straining) of DOM. These parameters were determined along the MWD by discretising the DOM into several MW points and modelling the breakthrough of each point. The pore throat diameter of BC was determined as 6.6-7.6 nm. Below this critical size, transport of DOM is MW dependent and two major types of transport were identified. Below a MW of 2 kDa, DOM was neither strongly trapped nor strongly retarded. This fraction had an averaged capacity factor of 1.19 ± 0.24 and an apparent dispersion coefficient ranging from 7.5 × 10⁻¹¹ to 1.7 × 10⁻¹¹ m²/s with increasing MW. DOM with MW > 2 kDa was affected by both retardation and straining, which increased significantly with increasing MW, while apparent dispersion coefficients decreased. Values ranging from 1.36 to 19.6 were determined for the capacity factor, and from 3.2 × 10⁻¹¹ to 1.0 × 10⁻¹¹ m²/s for the apparent dispersion coefficient, for species with 2.2 kDa < MW < 9.3 kDa. Straining resulted in the immobilisation of, on average, 49 ± 6% of the injected 9.3 kDa species. Our findings show that an accurate description of DOM transport requires consideration of these size effects.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
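The article's integrated tables are not reproduced in this abstract, but the flavor of such quick estimates can be conveyed with Lehr's well-known approximation for a two-sample comparison of means. The sketch below is illustrative only, not the author's method.

```python
from math import ceil

def lehr_n_per_group(d: float, power90: bool = False) -> int:
    """Quick per-group sample size for a two-sample t-test detecting a
    standardized mean difference d at two-sided alpha = 0.05.
    Lehr's rule: n ~ 16 / d^2 for 80% power, ~ 21 / d^2 for 90% power."""
    return ceil((21.0 if power90 else 16.0) / d ** 2)

for d in (0.2, 0.3, 0.5, 0.8):
    print(f"d = {d}: about {lehr_n_per_group(d)} per group for 80% power")
```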
NASA Astrophysics Data System (ADS)
Sharifzadegan, L.; Sedghi, H.
2018-07-01
In this work, samples with the nominal composition Y1Ba2-xSmxCu3O7-δ (x = 0.00, 0.01, 0.03, 0.05) were prepared by the solid-state reaction method, and the effect of substituting Sm for Ba on the structural and superconducting properties of the samples was investigated. Electrical resistance and critical temperature were measured using the four-probe method. The results indicate that Sm substitution affects the YBSCO superconducting samples, decreasing the superconducting transition temperature and increasing the resistivity and the transition width. XRD studies also show that the Y-123 phase forms in all samples with an orthorhombic structure. SEM images showed that the porosity of the samples increased with increasing Sm content because of disrupted grain growth, and that the grain size decreased accordingly.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters must be estimated from small samples, and the resulting estimation errors hamper real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate the behaviors of both skewness and kurtosis. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the sEMG PDF shape behavior observed during muscle contraction. According to the obtained results, the functional statistics seem to be more robust to small-sample effects than HOS parameters and more accurate in sEMG PDF shape screening applications.
Potential Reporting Bias in Neuroimaging Studies of Sex Differences.
David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A
2018-04-17
Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To evaluate empirically whether there is evidence of excessive significance bias in this literature, we searched Medline and Scopus over 10 years for published fMRI studies of the human brain that evaluated sex differences, regardless of the topic investigated. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and the number of significant foci identified. In the absence of bias, larger (better powered) studies should identify a larger number of significant foci. Across 179 papers, the median sample size was n = 32 (interquartile range 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles had titles focused on no differences (n = 2) or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstract and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% increase for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Robustness of methods for blinded sample size re-estimation with overdispersed count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-09-20
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the implementation of the EM-algorithm-based procedure by studying the algorithm's dependence on the choice of convergence criterion, and we find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to competing procedures regarding operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
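The EM-based procedure itself is not spelled out in the abstract. As a hedged illustration of the general idea, the sketch below combines a simple moment-based blinded estimate of the nuisance parameters with a standard large-sample formula for comparing two negative binomial rates; all names and numbers are hypothetical, and this is not the authors' algorithm.

```python
import numpy as np
from scipy.stats import norm

def blinded_moment_estimates(counts):
    """Blinded (pooled) moment estimates of the overall event rate mu and
    overdispersion phi, parameterized so that Var = mu + phi * mu^2."""
    mu = counts.mean()
    var = counts.var(ddof=1)
    phi = max((var - mu) / mu**2, 0.0)  # truncate at the Poisson case
    return mu, phi

def nb_n_per_group(rate_ratio, mu_control, phi, alpha=0.05, power=0.8):
    """Per-group n for a Wald test of two negative binomial rates on the
    log scale: Var(log RR) ~ (1/mu1 + 1/mu2 + 2*phi) / n."""
    mu1, mu2 = mu_control, mu_control * rate_ratio
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_rr = 1 / mu1 + 1 / mu2 + 2 * phi
    return int(np.ceil(z**2 * var_log_rr / np.log(rate_ratio) ** 2))

# hypothetical blinded interim data with true mean 1.5 and overdispersion
rng = np.random.default_rng(1)
pilot = rng.negative_binomial(n=2, p=2 / (2 + 1.5), size=120)
mu_hat, phi_hat = blinded_moment_estimates(pilot)
print(nb_n_per_group(rate_ratio=0.8, mu_control=mu_hat, phi=phi_hat))
```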
Aqueous phase hydrogenation of phenol catalyzed by Pd and PdAg on ZrO 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resende, Karen A.; Hori, Carla E.; Noronha, Fabio B.
Hydrogenation of phenol in the aqueous phase was studied over a series of ZrO2-supported Pd catalysts in order to explore the effects of particle size and of Ag addition on the activity of Pd. Kinetic assessments were performed in a batch reactor on monometallic Pd/ZrO2 samples with different Pd loadings (0.5%, 1% and 2%), as well as on a 1% PdAg/ZrO2 sample. The turnover frequency (TOF) increases with the Pd particle size. The reaction orders in phenol and H2 indicate that the surface coverages by phenol, H2 and their derived intermediates are higher on 0.5% Pd/ZrO2 than on the other samples. The activation energy was lowest on the least active sample (0.5% Pd/ZrO2), while being identical on the 1% and 2% Pd/ZrO2 catalysts. Thus, the significantly lower activity of the small Pd particles (1-2 nm on average) in 0.5% Pd/ZrO2 is explained by the unfavorable activation entropies for the strongly bound species. The presence of Ag considerably increases the TOF of the reaction by decreasing the activation energy and increasing the coverages of phenol and H2.
Twenty-year trends of authorship and sampling in applied biomechanics research.
Knudson, Duane
2012-02-01
This study documented trends in authorship and sampling in applied biomechanics research published in the Journal of Applied Biomechanics and the ISBS Proceedings. Original research articles of the 1989, 1994, 1999, 2004, and 2009 volumes of these serials were reviewed, excluding reviews, modeling papers, technical notes, and editorials. Compared to the 1989 volumes, the mean number of authors per paper significantly increased in the 2009 volumes (by 35% and 100% in the two serials, respectively), along with increased rates of hyperauthorship and a decline in rates of single authorship. Sample sizes varied widely across papers and did not appear to have changed since 1989.
Chandra Observations of Three Newly Discovered Quadruply Gravitationally Lensed Quasars
NASA Astrophysics Data System (ADS)
Pooley, David
2017-09-01
Our previous work has shown the unique power of Chandra observations of quadruply gravitationally lensed quasars to address several fundamental astrophysical issues. We have used these observations to (1) determine the cause of flux ratio anomalies, (2) measure the sizes of quasar accretion disks, (3) determine the dark matter content of the lensing galaxies, and (4) measure the stellar mass-to-light ratio (in fact, this is the only way to measure the stellar mass-to-light ratio beyond the solar neighborhood). In all cases, the main source of uncertainty in our results is the small size of the sample of known quads; only 15 systems are available for study with Chandra. We propose Chandra observations of three newly discovered quads, increasing the sample size by 20%.
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
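The paper's exact definition of the regression effective sample size is not given in the abstract. The sketch below shows one standard generalized-least-squares notion of effective sample size for correlated observations, n_e = 1'V⁻¹1, applied to a hypothetical three-taxon Brownian-motion covariance matrix; it conveys the same intuition but is not necessarily identical to the paper's definition.

```python
import numpy as np

def gls_effective_sample_size(V: np.ndarray) -> float:
    """Effective sample size for estimating a common mean from observations
    with covariance proportional to V: n_e = 1' V^{-1} 1, which equals the
    actual sample size n when V is the identity (independent data)."""
    ones = np.ones(V.shape[0])
    return float(ones @ np.linalg.solve(V, ones))

# Brownian-motion covariance on a toy 3-taxon tree ((A,B),C):
# off-diagonal entries are the shared root-to-ancestor path lengths.
V = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
print(gls_effective_sample_size(V))  # < 3 because of shared ancestry
```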
Probing the Magnetic Causes of CMEs: Free Magnetic Energy More Important Than Either Size Or Twist
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Gary, G. A.
2006-01-01
To probe the magnetic causes of CMEs, we have examined three types of magnetic measures: the size, twist, and total nonpotentiality (or total free magnetic energy) of an active region. Total nonpotentiality is roughly the product of size times twist. For predominantly bipolar active regions, we have found that total-nonpotentiality measures have the strongest correlation with future CME productivity (approx. 75% prediction success rate), while size and twist measures each have a weaker correlation with future CME productivity (approx. 65% prediction success rate) (Falconer, Moore, & Gary, ApJ, 644, 2006). For multipolar active regions, we find that the CME-prediction success rates for total nonpotentiality and size are about the same as for bipolar active regions. We also find that the correlation of the size measure with CME productivity is nearly all due to the contribution of size to total nonpotentiality. We have a total-nonpotentiality measure that can be obtained from a line-of-sight magnetogram of the active region and that is as strongly correlated with CME productivity as any of our total-nonpotentiality measures from deprojected vector magnetograms. We plan to further expand our sample by using MDI magnetograms of each active region in our sample to determine its total nonpotentiality and size on each day that the active region was within 30 deg. of disk center. The resulting increase in sample size will improve our statistics and allow us to investigate whether the nonpotentiality threshold for CME production is nearly the same or significantly different for multipolar regions compared with bipolar regions. In addition, we will investigate the time rates of change of size and total nonpotentiality as additional causes of CME productivity.
Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes
NASA Astrophysics Data System (ADS)
Kožíšek, Z.; Demo, P.
2017-11-01
Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. The critical undercooling, at which the first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches its stationary value after some time delay and decreases with nucleus size. Only a certain maximum size of nuclei is reached in small volumes near critical undercooling. As a model system, we selected the recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to the available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to the experiments) leads to size distributions in which no critical nuclei are formed in the Ni droplet (the number of critical nuclei is < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In lower volumes (V ≤ 10⁻⁹ m³) the nucleus size reaches a maximum size, which quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.
A study on magneto-optic properties of CoxMg1-xFe2O4 nanoferrofluids
NASA Astrophysics Data System (ADS)
Karthick, R.; Ramachandran, K.; Srinivasan, R.
2018-04-01
Nanoparticles of CoxMg1-xFe2O4 (x = 0.1, 0.5, 0.9) were synthesized using the chemical co-precipitation method. Characterization by X-ray diffraction confirmed the formation of a cubic crystalline structure, and the crystallite size of the samples, obtained using the Debye-Scherrer approximation, was found to increase with increasing cobalt substitution. The surface morphology and chemical composition of the samples were examined using scanning electron microscopy (SEM) with energy-dispersive X-ray analysis (EDAX). Room-temperature measurements with a vibrating sample magnetometer (VSM) revealed that magnetic properties such as the saturation magnetization (Ms), remanent magnetization (Mr) and coercive field (Hc) increase with increasing cobalt substitution. Faraday rotation measurements on CoxMg1-xFe2O4 ferrofluids exhibited an increase in rotation with cobalt substitution. Further, the Faraday rotation increased with increasing magnetic field for all the samples.
Mazloomi-Rezvani, Mahsa; Salami-Kalajahi, Mehdi; Roghani-Mamaqani, Hossein
2018-06-01
Different core-shell nanoparticles with Au as the core and stimuli-responsive polymers such as poly(acrylic acid) (PAA), poly(methacrylic acid) (PMAA), poly(N-isopropylacrylamide) (PNIPAAm), poly(N,N'-methylenebis(acrylamide)) (PMBA), poly(2-hydroxyethyl methacrylate) (PHEMA) and poly((2-dimethylamino)ethyl methacrylate) (PDMAEMA) as the shells were fabricated via inverse emulsion polymerization. Dynamic light scattering (DLS) was used to investigate particle sizes and particle size distributions, and transmission electron microscopy (TEM) was applied to observe the core-shell structure of the Au-polymer nanoparticles. The surface charge of all samples was also studied by measuring zeta potentials. The synthesized core-shell nanoparticles were utilized as nanocarriers of DOX, an anti-cancer drug, and drug release behavior was investigated both in the dark and under near-infrared (NIR) irradiation. Results showed that all core-shell samples have particle sizes below 100 nm with narrow particle size distributions. Moreover, drug loading decreased with increasing zeta potential. In the dark, lower pH resulted in higher cumulative drug release owing to the better solubility of DOX in acidic media. NIR irradiation of DOX-loaded samples also increased cumulative drug release significantly. However, DOX-loaded Au-PAA and Au-PMAA showed higher drug release at pH 7.4 than at pH 5.3 under NIR irradiation. Copyright © 2018 Elsevier B.V. All rights reserved.
Online versus offline: The Web as a medium for response time data collection.
Chetverikov, Andrey; Upravitelev, Philipp
2016-09-01
The Internet provides a convenient environment for data collection in psychology. Modern Web programming languages, such as JavaScript or Flash (ActionScript), facilitate complex experiments without the necessity of experimenter presence. Yet there is always a question of how much noise is added due to the differences between the setups used by participants and whether it is compensated for by increased ecological validity and larger sample sizes. This is especially a problem for experiments that measure response times (RTs), because they are more sensitive (and hence more susceptible to noise) than, for example, choices per se. We used a simple visual search task with different set sizes to compare laboratory performance with Web performance. The results suggest that although the locations (means) of RT distributions are different, other distribution parameters are not. Furthermore, the effect of experiment setting does not depend on set size, suggesting that task difficulty is not important in the choice of a data collection method. We also collected an additional online sample to investigate the effects of hardware and software diversity on the accuracy of RT data. We found that the high diversity of browsers, operating systems, and CPU performance may have a detrimental effect, though it can partly be compensated for by increased sample sizes and trial numbers. In sum, the findings show that Web-based experiments are an acceptable source of RT data, comparable to a common keyboard-based setup in the laboratory.
NASA Astrophysics Data System (ADS)
Zykova, A. K.; Pantyukhov, P. V.; Kolesnikova, N. N.; Popov, A. A.; Olkhov, A. A.
2015-10-01
Biocomposites based on low-density polyethylene (LDPE) and birch wood flour (WF) were investigated to determine how the filler particle size affects composite properties, specifically the mechanical properties and water absorption capacity. The filler particle size fractions were 0-80 µm, 80-140 µm, 140-200 µm, and 0-200 µm. The tensile strength of the composite samples varied within the range 5.7-8.2 MPa, and the elongation at break within 5.1-7.5%. The best mechanical properties were found for composites with the finest filler fraction, while the highest water absorption was observed for the composition with the mixed (0-200 µm) filler fraction. Overall, increasing the filler particle size decreased the mechanical parameters and increased water absorption.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each of the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values for analyzing the images of counted endothelial cells, referred to as samples. The mean sample size was the number of cells evaluated in the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient number of endothelial cells (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient number of endothelial cells (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient number of endothelial cells (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient number of endothelial cells (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples (examinations) need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for the reliability and reproducibility of CSM examinations.
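The internals of the Cells Analyzer routine are not described in the abstract. The following sketch shows one generic way such a customized sample size could be derived, assuming the relative error is defined as z·SE/mean and assuming a hypothetical coefficient of variation for single-cell measurements.

```python
from math import ceil

def cells_needed(cv: float, rel_error: float = 0.05, z: float = 1.96) -> int:
    """Cells to count so that the relative error of the mean (z * SE / mean)
    stays below rel_error, given the coefficient of variation cv of
    single-cell measurements: n >= (z * cv / rel_error)^2."""
    return ceil((z * cv / rel_error) ** 2)

# e.g. a hypothetical CV of 25% at 95% reliability:
print(cells_needed(cv=0.25))  # ~97 cells
```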
Self-organized nanostructure formation on the graphite surface induced by helium ion irradiation
NASA Astrophysics Data System (ADS)
Dutta, N. J.; Mohanty, S. R.; Buzarbaruah, N.; Ranjan, M.; Rawat, R. S.
2018-06-01
The effects of helium ion irradiation on the graphite surface are studied by employing a plasma focus device. The device emits a helium ion pulse with energies in the range of a few keV to a few MeV and a flux on the order of 10²⁵ m⁻² s⁻¹ at an axial position 60 mm from the anode tip. Field emission scanning electron microscopy confirms the formation of multi-modal spherical and elongated agglomerated structures on the irradiated sample surfaces, with agglomerate size increasing with the number of irradiation shots. The transient annealing in each irradiation was not enough to cause Ostwald ripening or sintering of particles into larger particles or crystals, and only resulted in clustering. Atomic force micrographs reveal an increase in average surface roughness with increasing ion irradiation. The Raman study demonstrates an increase in the disorder-related D peak along with a reduced crystallite size (La) with an increasing number of irradiation shots.
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases into sample counts. We generated a computer environment and simulated sampling scenarios to test the roles of the number of samples, sample-unit area, number of organisms, and distribution of organisms in the estimation of population size using N-mixture models. Many small sample units provided estimates that were consistently closer to true abundance than scenarios with few large sample units. However, scenarios with few large sample units provided more precise abundance estimates than those with many small sample units. It is important to weigh the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, this consideration is often an afterthought that occurs during data analysis.
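As a hedged illustration of the kind of simulation described (not the authors' code), the sketch below generates binomial N-mixture data and recovers abundance and detection by maximizing the marginal likelihood; the site numbers, λ and p are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

def simulate_counts(n_sites, lam, p, n_visits):
    """Binomial N-mixture data: latent abundance N_i ~ Poisson(lam),
    repeated counts y_it ~ Binomial(N_i, p)."""
    N = rng.poisson(lam, n_sites)
    return rng.binomial(N[:, None], p, (n_sites, n_visits))

def neg_log_lik(params, y, n_max=200):
    """Marginal likelihood: sum the latent abundance out at each site."""
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    Ns = np.arange(n_max + 1)
    log_prior = stats.poisson.logpmf(Ns, lam)
    ll = 0.0
    for yi in y:
        lb = stats.binom.logpmf(yi[:, None], Ns[None, :], p).sum(0)
        ll += np.logaddexp.reduce(log_prior + lb)
    return -ll

y = simulate_counts(n_sites=80, lam=5.0, p=0.4, n_visits=4)
fit = optimize.minimize(neg_log_lik, x0=[np.log(y.mean() + 1), 0.0], args=(y,))
print("lambda-hat:", np.exp(fit.x[0]), "p-hat:", 1 / (1 + np.exp(-fit.x[1])))
```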
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population size. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating the bias was not well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5-fold for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2-fold for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
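For readers unfamiliar with the estimator, a minimal sketch: θ_W divides the number of segregating sites S by the harmonic number a_n, and under neutrality E[S] = θ·a_n, so the slow (logarithmic) growth of a_n is what makes the sample-size sensitivity described above so consequential.

```python
def watterson_theta(num_segregating: int, n_samples: int) -> float:
    """Watterson's estimator: theta_W = S / a_n with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, n_samples))
    return num_segregating / a_n

# a_n grows only logarithmically, so enlarging the sample adds little
# expected polymorphism under neutrality (E[S] = theta * a_n); any
# shortfall of S in constrained regions therefore deflates theta_W:
for n in (16, 64, 512):
    print(f"n = {n}: a_n = {sum(1.0 / i for i in range(1, n)):.2f}")
```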
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim analysis. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim analysis to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
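The criterion itself is not reproduced in the abstract. The sketch below implements the standard Brownian-motion conditional power formula under the "current trend" assumption, expressed directly in terms of the interim test statistic, which is the quantity the 50%-promise rule is checked against; this is an illustration, not the article's derivation.

```python
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim: float, info_frac: float, alpha: float = 0.025) -> float:
    """Conditional power under the current-trend assumption.
    With B(t) = z_interim * sqrt(t) a Brownian motion in information time t,
    B(1) | B(t) = b ~ N(b + theta*(1 - t), 1 - t) and theta-hat = b / t, so
    CP = Phi((z_interim / sqrt(t) - z_{1-alpha}) / sqrt(1 - t))."""
    t = info_frac
    return norm.cdf((z_interim / sqrt(t) - norm.ppf(1 - alpha)) / sqrt(1 - t))

# interim z of 1.5 at half the information: CP ~ 0.59, i.e. 'promising'
print(conditional_power(z_interim=1.5, info_frac=0.5))
```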
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
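The paper's Excel/Shiny calculator is not reproduced here. A minimal sketch of the underlying idea, assuming the only clusters are singleton births and complete twin pairs, inflates the independence-based sample size by a design effect built from the ICC.

```python
from math import ceil

def design_effect(prop_twin_infants: float, icc: float) -> float:
    """Variance inflation for a mix of singletons (cluster size 1) and twin
    pairs (cluster size 2): DE = 1 + (m_A - 1) * ICC, where m_A is the
    infant-weighted average cluster size, here 1 + p for twin fraction p."""
    m_A = 1.0 + prop_twin_infants
    return 1 + (m_A - 1) * icc

def adjusted_n(n_independent: int, prop_twin_infants: float, icc: float) -> int:
    """Inflate a sample size computed under independence."""
    return ceil(n_independent * design_effect(prop_twin_infants, icc))

# e.g. 300 infants needed under independence, 40% twins, ICC = 0.5:
print(adjusted_n(300, prop_twin_infants=0.4, icc=0.5))  # -> 360
```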
The Response of Simple Polymer Structures Under Dynamic Loading
NASA Astrophysics Data System (ADS)
Proud, William; Ellison, Kay; Yapp, Su; Cole, Cloe; Galimberti, Stefano; Institute of Shock Physics Team
2017-06-01
The dynamic response of polymeric materials has been widely studied, with the effects of degree of crystallinity, strain rate, temperature and sample size commonly reported. This study uses a simple PMMA structure, a right-cylindrical sample, with structural features such as holes. The features are added and varied in a systematic fashion. Samples were dynamically loaded to failure using a split Hopkinson pressure bar. The resulting stress-strain curves are presented, showing the change in sample response. The strain to failure is shown to increase initially with the presence of holes, while the failure stress is relatively unaffected. The fracture patterns seen in the failed samples change, with tensile cracks, Hertzian cones and shear effects being dominant for different hole sizes and geometries. The samples were prepared by laser cutting and checked for residual stress before the experiments. The data are used to validate model predictions in which material, structure and damage are included. The Institute of Shock Physics acknowledges the support of Imperial College London and the Atomic Weapons Establishment.
Air Flow and Pressure Drop Measurements Across Porous Oxides
NASA Technical Reports Server (NTRS)
Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.
2008-01-01
This report summarizes the results of air flow tests across eight porous, open-cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120-grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases, the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.
Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E
2014-07-01
Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but they frequently exceed the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within the Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportions of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients to three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have changed dramatically in size and scientific scope within the last decade. Additional studies addressing the implications of this trend for research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. The simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are provided for convenient use. The extensive simulation study showed that the distribution-of-the-product and bootstrapping methods outperform Sobel's method, but the product method is recommended for use in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study design.
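As a hedged, single-level illustration of the kind of simulation-based power estimate described (the paper's model is multilevel and longitudinal, which this sketch does not attempt), the following Monte Carlo evaluates the power of Sobel's test for a complete-mediation model with hypothetical path sizes.

```python
import numpy as np

rng = np.random.default_rng(42)

def sobel_power(n, a=0.3, b=0.3, n_sims=2000, alpha_z=1.959964):
    """Monte Carlo power of the Sobel test for X -> M -> Y with
    standardized paths a and b and no direct effect (complete mediation).
    Variables are mean-zero by construction, so intercepts are omitted."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        # OLS slopes and their approximate standard errors
        a_hat = np.cov(x, m, ddof=1)[0, 1] / x.var(ddof=1)
        se_a = np.sqrt((m - a_hat * x).var(ddof=1) / (n * x.var(ddof=1)))
        b_hat = np.cov(m, y, ddof=1)[0, 1] / m.var(ddof=1)
        se_b = np.sqrt((y - b_hat * m).var(ddof=1) / (n * m.var(ddof=1)))
        z = a_hat * b_hat / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
        hits += abs(z) > alpha_z
    return hits / n_sims

for n in (50, 100, 200):
    print(n, sobel_power(n))
```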
Non-Destructive Evaluation of Grain Structure Using Air-Coupled Ultrasonics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belvin, A. D.; Burrell, R. K.; Cole, E.G.
2009-08-01
Cast material has a grain structure that is relatively non-uniform, and there is a desire to evaluate the grain structure of this material non-destructively. Traditionally, grain size measurement is a destructive process involving sectioning and metallographic imaging of the material. Generally, this is performed on a representative sample on a periodic basis. Sampling is inefficient and costly, and the resulting data may not provide an accurate description of the entire part's average grain size or grain size variation. This project is designed to develop a non-destructive acoustic scanning technique, using Chirp waveforms, to quantify average grain size and grain size variation across the surface of a cast material. A Chirp is a signal in which the frequency increases or decreases over time (frequency modulation). As a Chirp passes through a material, the material's grains attenuate the signal by absorbing its energy. Geophysics research has shown a direct correlation between Chirp wave attenuation and mean grain size in geological structures. The goal of this project is to demonstrate that Chirp waveform attenuation can be used to measure grain size and grain variation in cast metals (uranium and other materials of interest). An off-axis ultrasonic inspection technique using air-coupled ultrasonics has been developed to determine grain size in cast materials. The technique gives a uniform response across the volume of the component and has been demonstrated to provide generalized trends of grain variation over the samples investigated.
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
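The point generalizes directly to polling: the 95% margin of error depends almost entirely on the absolute sample size, and the finite population correction matters only when the sample is a sizeable fraction of the population. A small illustration, with hypothetical pot (population) sizes:

```python
from math import sqrt

def margin_of_error(n: int, population: float = float("inf"), p: float = 0.5) -> float:
    """95% margin of error for a proportion, with the finite population
    correction; for large populations the correction is negligible, which
    is the 'chicken soup' point: absolute n is what matters."""
    fpc = sqrt((population - n) / (population - 1)) if population != float("inf") else 1.0
    return 1.96 * sqrt(p * (1 - p) / n) * fpc

for pop in (1e4, 1e6, 1e8):  # pot sizes
    print(f"N = {pop:.0e}: MoE with n = 1000 is {margin_of_error(1000, pop):.4f}")
```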
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine the covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probability also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can inform future fish assemblage monitoring efforts through sampling design, such as site selection (e.g., stratifying based on patch size), and through determining the effort required (e.g., number of sites versus occasions).
Mechanisms of Laser-Induced Dissection and Transport of Histologic Specimens
Vogel, Alfred; Lorenz, Kathrin; Horneffer, Verena; Hüttmann, Gereon; von Smolinski, Dorthe; Gebert, Andreas
2007-01-01
Rapid contact- and contamination-free procurement of histologic material for proteomic and genomic analysis can be achieved by laser microdissection of the sample of interest followed by laser-induced transport (laser pressure catapulting). The dynamics of laser microdissection and laser pressure catapulting of histologic samples of 80 μm diameter was investigated by means of time-resolved photography. The working mechanism of microdissection was found to be plasma-mediated ablation initiated by linear absorption. Catapulting was driven by plasma formation when tightly focused pulses were used, and by photothermal ablation at the bottom of the sample when defocused pulses producing laser spot diameters larger than 35 μm were used. With focused pulses, driving pressures of several hundred MPa accelerated the specimen to initial velocities of 100–300 m/s before they were rapidly slowed down by air friction. When the laser spot was increased to a size comparable to or larger than the sample diameter, both driving pressure and flight velocity decreased considerably. Based on a characterization of the thermal and optical properties of the histologic specimens and supporting materials used, we calculated the evolution of the heat distribution in the sample. Selected catapulted samples were examined by scanning electron microscopy or analyzed by real-time reverse-transcriptase polymerase chain reaction. We found that catapulting of dissected samples results in little collateral damage when the laser pulses are either tightly focused or when the laser spot size is comparable to the specimen size. By contrast, moderate defocusing with spot sizes up to one-third of the specimen diameter may involve significant heat and ultraviolet exposure. Potential side effects are maximal when samples are catapulted directly from a glass slide without a supporting polymer foil. PMID:17766336
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reported sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 were studies of diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards of design and reporting of diagnostic studies is warranted.
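The survey does not prescribe a formula, but a common approach for planning diagnostic accuracy studies (often attributed to Buderer) sizes the study so that sensitivity is estimated to a target precision and then inflates by prevalence. The sketch below is illustrative, with hypothetical inputs.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens: float, precision: float, prevalence: float,
                      alpha: float = 0.05) -> int:
    """Buderer-style sample size: enough diseased subjects to estimate
    sensitivity within +/- precision, inflated by prevalence to give the
    total number of subjects to recruit."""
    z = norm.ppf(1 - alpha / 2)
    n_cases = z**2 * sens * (1 - sens) / precision**2
    return ceil(n_cases / prevalence)

# e.g. expected sensitivity 0.85 estimated to within +/-0.07, prevalence 50%:
print(n_for_sensitivity(0.85, 0.07, 0.50))  # ~200 subjects
```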
Predictor sort sampling and one-sided confidence bounds on quantiles
Steve Verrill; Victoria L. Herian; David W. Green
2002-01-01
Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable under investigation. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random…
Anti-Depressants, Suicide, and Drug Regulation
ERIC Educational Resources Information Center
Ludwig, Jens; Marcotte, Dave E.
2005-01-01
Policymakers are increasingly concerned that a relatively new class of anti-depressant drugs, selective serotonin re-uptake inhibitors (SSRI), may increase the risk of suicide for at least some patients, particularly children. Prior randomized trials are not informative on this question because of small sample sizes and other limitations. Using…
NASA Astrophysics Data System (ADS)
Niu, Q.; Zhang, C.
2017-12-01
Archie's law is an important empirical relationship linking the electrical resistivity of geological materials to their porosity. It has been found experimentally that the porosity exponent m in Archie's law for sedimentary rocks may be related to the degree of cementation, and m is therefore termed the "cementation factor" in much of the literature. Although the law has been known for many years, there is a lack of well-accepted physical interpretations of the porosity exponent. Some theoretical and experimental evidence has also shown that m may be controlled by particle and/or pore shape. In this study, we conduct pore-scale modeling of the porosity exponent that incorporates different geological processes. The evolution of m in eight synthetic samples with different particle sizes and shapes is calculated during two geological processes, compaction and cementation. The numerical results show that in dilute conditions m is controlled by the particle shape. As the samples deviate from dilute conditions, m increases gradually due to the strong interaction between particles. When the samples are at static equilibrium, m is noticeably larger than its value under dilute conditions. The numerical simulation results also show that both geological compaction and cementation induce a significant increase in m. In addition, the geometric characteristics of these samples (e.g., pore space/throat size and their distributions) during compaction and cementation are calculated. Preliminary analysis shows a unique correlation between the breadth of the pore-size distribution and the porosity exponent for all eight samples. However, such a correlation is not found between m and the other geometric characteristics.
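For concreteness, Archie's law for a fully saturated rock can be written F = ρ_rock/ρ_water = φ⁻ᵐ, so m is recoverable as the negative slope of log F against log φ. The sketch below fits m from hypothetical porosity-resistivity data; it illustrates the relationship only and is not the paper's pore-scale simulation.

```python
import numpy as np

def fit_porosity_exponent(phi, formation_factor):
    """Fit m in Archie's law F = phi^(-m) by log-log linear regression."""
    slope, _ = np.polyfit(np.log(phi), np.log(formation_factor), 1)
    return -slope

# hypothetical lab data for a sample suite with true m = 2.1 plus noise
phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
F = phi ** -2.1 * np.exp(np.random.default_rng(0).normal(0, 0.02, 5))
print(fit_porosity_exponent(phi, F))  # ~2.1
```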
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival
Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas
2016-01-01
Five annual capture-mark-recapture surveys of Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets with different levels of effort, based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling days was required to obtain precise survival estimates for males and females separately. A reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined met the needs of fishery management. PMID:26990561
Bunnell, David B.; Hale, R. Scott; Vanni, Michael J.; Stein, Roy A.
2006-01-01
Stock-recruit models typically use only spawning stock size as a predictor of recruitment to a fishery. In this paper, however, we used spawning stock size as well as larval density and key environmental variables to predict recruitment of white crappies Pomoxis annularis and black crappies P. nigromaculatus, a genus notorious for variable recruitment. We sampled adults and recruits from 11 Ohio reservoirs and larvae from 9 reservoirs during 1998-2001. We sampled chlorophyll as an index of reservoir productivity and obtained daily estimates of water elevation to determine the impact of hydrology on recruitment. Akaike's information criterion (AIC) revealed that Ricker and Beverton-Holt stock-recruit models that included chlorophyll best explained the variation in larval density and age-2 recruits. Specifically, spawning stock catch per effort (CPE) and chlorophyll explained 63-64% of the variation in larval density. In turn, larval density and chlorophyll explained 43-49% of the variation in age-2 recruit CPE. Finally, spawning stock CPE and chlorophyll were the best predictors of recruit CPE (i.e., 74-86%). Although larval density and recruitment increased with chlorophyll, neither was related to seasonal water elevation. Also, the AIC generally did not distinguish between Ricker and Beverton-Holt models. From these relationships, we concluded that crappie recruitment can be limited by spawning stock CPE and larval production when spawning stock sizes are low (i.e., CPE < 5 crappies/net-night). At higher spawning stock sizes, spawning stock CPE and recruitment were less clearly related. To predict recruitment in Ohio reservoirs, managers should assess spawning stock CPE with trap nets and estimate chlorophyll concentrations. To increase crappie recruitment in reservoirs where recruitment is consistently poor, managers should use regulations to increase spawning stock size, which, in turn, should increase larval production and recruits to the fishery.
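A covariate-augmented Ricker model of the kind described can be log-linearized and fit by ordinary least squares. The sketch below uses hypothetical CPE and chlorophyll values purely to illustrate the model structure, log(R/S) = log α − βS + γ·chl; it is not the authors' analysis.

```python
import numpy as np

def fit_ricker_chl(S, R, chl):
    """Fit R = alpha * S * exp(-beta * S + gamma * chl) by OLS on
    log(R/S) = log(alpha) - beta * S + gamma * chl."""
    X = np.column_stack([np.ones_like(S), S, chl])
    coef, *_ = np.linalg.lstsq(X, np.log(R / S), rcond=None)
    log_alpha, neg_beta, gamma = coef
    return np.exp(log_alpha), -neg_beta, gamma

rng = np.random.default_rng(3)
S = rng.uniform(1, 20, 30)        # spawning stock CPE (hypothetical)
chl = rng.uniform(2, 40, 30)      # chlorophyll, ug/L (hypothetical)
R = 2.0 * S * np.exp(-0.08 * S + 0.03 * chl + rng.normal(0, 0.2, 30))
print(fit_ricker_chl(S, R, chl))  # ~ (2.0, 0.08, 0.03)
```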
Jones, Alfred Ndahi; Bridgeman, John
2016-10-15
The growth, breakage and re-growth of flocs formed using crude and purified seed extracts of Okra (OK), Sabdariffa (SB) and Kenaf (KE) as coagulants and coagulant aids were assessed. The results showed that floc size increased from 300 μm, when aluminium sulphate (AS) was used as the sole coagulant, to between 696 μm and 722 μm with the addition of 50 mg/l of OK, KE or SB crude samples as coagulant aids. Similarly, an increase in floc size was observed when each of the purified proteins was used as a coagulant aid at doses of between 0.123 and 0.74 mg/l. The largest floc sizes of 741 μm, 460 μm and 571 μm were obtained with a 0.123 mg/l dose of purified Okra protein (POP), purified Sabdariffa protein (PSP) and purified Kenaf protein (PKP), respectively. Further coagulant aid addition, from 0.123 to 0.74 mg/l, resulted in a decrease in floc size and strength for POP and PSP. However, an increase in floc strength and a reduced d50 size were observed for PKP at a dose of 0.74 mg/l. Flocs produced using purified and crude extract samples as coagulant aids exhibited high recovery factors and strength, and flocs exhibited greater recovery post-breakage when the extracts were used as primary coagulants. It was observed that the combination of purified proteins and AS improved floc size, strength and recovery factors. Therefore, the application of Hibiscus seeds in either crude or purified form increases floc growth, strength and recoverability, and can also reduce the cost associated with the import of AS in developing countries. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Uyei, Jennifer; Braithwaite, R Scott
2016-01-01
Despite the benefits of the placebo-controlled trial design, it is limited by its inability to quantify total benefits and harms. Such trials, for example, are not designed to detect an intervention's placebo or nocebo effects, which if detected could alter the benefit-to-harm balance and change a decision to adopt or reject an intervention. In this article, we explore scenarios in which alternative experimental trial designs, which differ in the type of control used, influence expected value across a range of pretest assumptions and study sample sizes. We developed a decision model to compare 3 trial designs and their implications for decision making: 2-arm placebo-controlled trial ("placebo-control"), 2-arm intervention v. do nothing trial ("null-control"), and an innovative 3-arm trial design: intervention v. do nothing v. placebo trial ("novel design"). Four scenarios were explored regarding particular attributes of a hypothetical intervention: 1) all benefits and no harm, 2) no biological effect, 3) only biological effects, and 4) surreptitious harm (no biological benefit or nocebo effect). Scenario 1: When sample sizes were very small, the null-control was preferred, but as sample sizes increased, expected value of all 3 designs converged. Scenario 2: The null-control was preferred regardless of sample size when the ratio of placebo to nocebo effect was >1; otherwise, the placebo-control was preferred. Scenario 3: When sample size was very small, the placebo-control was preferred when benefits outweighed harms, but the novel design was preferred when harms outweighed benefits. Scenario 4: The placebo-control was preferred when harms outweighed placebo benefits; otherwise, preference went to the null-control. Scenarios are hypothetical, study designs have not been tested in a real-world setting, blinding is not possible in all designs, and some may argue the novel design poses ethical concerns. We identified scenarios in which alternative experimental study designs would confer greater expected value than the placebo-controlled trial design. The likelihood and prevalence of such situations warrant further study. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra; Day, Megan; Doris, Elizabeth
The report analyzes and presents information learned from a sample of 20 cities across the United States, from New York City to Park City, Utah, including a diverse sample of population size, utility type, region, annual greenhouse gas reduction targets, vehicle use, and median household income. The report compares climate, sustainability, and energy plans to better understand where cities are taking energy-related actions and how they are measuring impacts. Some common energy-related goals focus on reducing city-wide carbon emissions, improving energy efficiency across sectors, increasing renewable energy, and increasing biking and walking.
Intercalated Nanocomposites Based on High-Temperature Superconducting Ceramics and Their Properties
Tonoyan, Anahit; Schick, Christoph; Davtyan, Sevan
2009-01-01
High-temperature superconducting (SC) nanocomposites based on SC ceramics and various polymeric binders were prepared. Regardless of the ceramic grain size, increasing the ceramic content leads to an increase in rupture strength and modulus and a decrease in limiting deformation, whereas an increase in the average ceramic grain size degrades the strength properties. The SC, thermo-chemical, mechanical and dynamic-mechanical properties of the samples were investigated. The superconducting properties of the polymer-ceramic nanocomposites are explained by intercalation of macromolecule fragments into the interstitial layers of the ceramic grains. This phenomenon leads to a change in the morphological structure of the superconducting nanocomposites.
2011-01-01
To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study, because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
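The prospective power analysis discussed above can be made concrete with standard tooling; the sketch below uses statsmodels to solve for the per-group animal count needed under an assumed effect size, where the effect size, power, and alpha are illustrative inputs rather than values from the article.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for animals per group needed to detect an assumed effect
# (Cohen's d = 0.8) with 80% power at a two-sided alpha of 0.05.
# All inputs are hypothetical; the true effect is rarely known in
# advance, which is exactly the difficulty the article raises.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8, power=0.80, alpha=0.05)
print(f"required sample size per group: {n_per_group:.1f}")
```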
Microwave resonances in dielectric samples probed in Corbino geometry: simulation and experiment.
Felger, M Maximilian; Dressel, Martin; Scheffler, Marc
2013-11-01
The Corbino approach, where the sample of interest terminates a coaxial cable, is a well-established method for microwave spectroscopy. If the sample is dielectric and if the probe geometry basically forms a conductive cavity, this combination can sustain well-defined microwave resonances that are detrimental for broadband measurements. Here, we present detailed simulations and measurements to investigate the resonance frequencies as a function of sample and probe size and of sample permittivity. This allows a quantitative optimization to increase the frequency of the lowest-lying resonance.
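As a rough guide to how such resonances scale, the lowest mode of an idealized dielectric-filled cylindrical cavity (TM010) falls as one over the square root of the permittivity; the toy calculation below uses that textbook formula with a hypothetical probe radius and is not a substitute for the full-geometry simulations reported in the paper.

```python
import math

def tm010_freq_ghz(radius_mm, eps_r):
    """Lowest TM010 resonance of an ideal cylindrical cavity.

    f = x01 * c / (2 * pi * a * sqrt(eps_r)), with x01 the first zero
    of the Bessel function J0. A textbook idealization only; the real
    Corbino probe geometry requires the simulations in the paper.
    """
    x01 = 2.4048  # first zero of J0
    c = 2.998e8   # speed of light, m/s
    a = radius_mm * 1e-3
    return x01 * c / (2 * math.pi * a * math.sqrt(eps_r)) / 1e9

# Hypothetical 3 mm probe radius: higher permittivity pulls the
# lowest resonance down into the measurement band.
for eps_r in (1.0, 10.0, 100.0):
    print(eps_r, round(tm010_freq_ghz(3.0, eps_r), 2), "GHz")
```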
NASA Astrophysics Data System (ADS)
Presley, Marsha A.; Craddock, Robert A.; Zolotova, Natalya
2009-11-01
A line-heat source apparatus was used to measure thermal conductivities of a lightly cemented fluvial sediment (salinity = 1.1 g·kg⁻¹), and of the same sample with the cement bonds almost completely disrupted, under low-pressure carbon dioxide atmospheres. The thermal conductivities of the cemented sample were approximately 3× higher, over the range of atmospheric pressures tested, than the thermal conductivities of the same sample after the cement bonds were broken. A thermal conductivity-derived particle size was determined for each sample by comparing these thermal conductivity measurements to previous data that demonstrated the dependence of thermal conductivity on particle size. Actual particle-size distributions were determined via physical separation through brass sieves. When uncemented, 87% of the particles were less than 125 μm in diameter, with 60% of the sample being less than 63 μm in diameter. As much as 35% of the cemented sample was composed of conglomerate particles with diameters greater than 500 μm. The thermal conductivities of the cemented sample were most similar to those of 500-μm glass beads, whereas the thermal conductivities of the uncemented sample were most similar to those of 75-μm glass beads. This study demonstrates that even a small amount of salt cement can significantly increase the thermal conductivity of particulate materials, as predicted by thermal modeling estimates by previous investigators.
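In the line-heat-source method, conductivity follows from the slope of temperature versus the logarithm of time, k = q / (4π S); the sketch below fits that slope to synthetic data as an assumed textbook implementation, not the authors' apparatus code.

```python
import numpy as np

def line_source_conductivity(t_s, temp_k, q_w_per_m):
    """Thermal conductivity from line-heat-source data.

    k = q / (4 * pi * S), where S is the slope of temperature versus
    ln(time) during the quasi-steady heating phase. Textbook relation;
    assumed, not taken from the paper.
    """
    slope, _ = np.polyfit(np.log(t_s), temp_k, 1)
    return q_w_per_m / (4.0 * np.pi * slope)

# Synthetic demonstration: data generated with k = 0.05 W/(m K).
t = np.linspace(10.0, 100.0, 50)
q = 2.0  # heater power per unit length, W/m (hypothetical)
temps = 300.0 + q / (4 * np.pi * 0.05) * np.log(t)
print(line_source_conductivity(t, temps, q))  # recovers ~0.05
```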
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fitted and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
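Fixed-precision sample sizes derived from Taylor's power law take the standard form n = (z/D)² a m^(b-2), where m is the mean density, a and b the fitted Taylor coefficients, and D the target precision; the sketch below applies that generic formula with made-up coefficients, since the fitted values are not reproduced here.

```python
from math import ceil

def taylor_sample_size(mean_density, a, b, precision=0.25, z=1.0):
    """Quadrats needed so the standard error of the mean is a fixed
    fraction (precision, D) of the mean, using Taylor's power law
    variance = a * mean**b. With z = 1, D is the SE/mean ratio; use
    z = 1.96 for a confidence-interval interpretation. The a and b
    values used below are hypothetical, not the study's fitted values."""
    return ceil((z / precision) ** 2 * a * mean_density ** (b - 2.0))

for m in (0.01, 0.05, 0.2):   # ticks per 10-m2 quadrat (hypothetical)
    print(m, taylor_sample_size(m, a=2.0, b=1.4))
```

Low densities drive the required quadrat counts up sharply, which matches the intuition that sparse tick populations are the expensive ones to estimate precisely.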
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
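A minimal version of the kernel home range workflow is sketched below: choose a smoothing factor by leave-one-out cross-validation, then integrate the fitted utilization distribution to a 95% contour. Note the swap: for brevity this uses likelihood cross-validation rather than the least-squares cross-validation (LSCV) the paper recommends, and all data are simulated.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
pts = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=50).T

def loo_log_lik(factor):
    """Leave-one-out log-likelihood of a fixed Gaussian kernel with the
    given smoothing factor (likelihood CV, a stand-in for LSCV)."""
    n = pts.shape[1]
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        kde = gaussian_kde(pts[:, mask], bw_method=factor)
        total += np.log(kde(pts[:, i:i + 1])[0])
    return total

factors = np.linspace(0.2, 1.5, 14)
best = max(factors, key=loo_log_lik)

# 95% home range: area of the smallest density region holding 95% of
# the fitted utilization distribution, evaluated on a grid.
kde = gaussian_kde(pts, bw_method=best)
xs = np.linspace(-4, 4, 200)
cell = (xs[1] - xs[0]) ** 2
gx, gy = np.meshgrid(xs, xs)
dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
order = np.sort(dens)[::-1]
cum = np.cumsum(order) * cell
level = order[np.searchsorted(cum, 0.95)]
print("smoothing factor:", best, "95% area:", (dens >= level).sum() * cell)
```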
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to perform power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
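A crude simulation conveys the single-gene version of the problem: draw negative binomial counts for two groups and estimate power for an assumed fold change and dispersion. This is a simplified stand-in for the package's method, which additionally models count and dispersion distributions across genes and controls the FDR; every parameter below is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def nb_params(mean, dispersion):
    """Convert (mean, dispersion) to numpy's (n, p) parameterization,
    giving variance = mean + dispersion * mean**2."""
    n = 1.0 / dispersion
    return n, n / (n + mean)

def power_sim(n_per_group, mean=100.0, fold=2.0, disp=0.1,
              alpha=0.05, reps=2000):
    """Simulated power for one gene: Welch t-test on log counts.

    A rough stand-in for the negative-binomial model test used by
    RnaSeqSampleSize; parameters are illustrative only.
    """
    hits = 0
    for _ in range(reps):
        a = rng.negative_binomial(*nb_params(mean, disp), n_per_group)
        b = rng.negative_binomial(*nb_params(mean * fold, disp), n_per_group)
        _, p = stats.ttest_ind(np.log1p(a), np.log1p(b), equal_var=False)
        hits += p < alpha
    return hits / reps

for n in (3, 5, 10):
    print(n, power_sim(n))
```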
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
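The asymptotic unconditional McNemar sample size depends only on the two discordant cell probabilities; a standard closed form (often credited to Connor, 1987) is sketched below with hypothetical discordant proportions, since the review's numerical examples are not reproduced here.

```python
import math
from scipy.stats import norm

def mcnemar_n(p10, p01, alpha=0.05, power=0.80):
    """Asymptotic unconditional McNemar sample size (number of pairs).

    Standard closed form based on the discordant probabilities p10
    and p01 (often attributed to Connor, 1987); assumed here, as the
    review does not reproduce the formula.
    """
    delta = abs(p10 - p01)
    psi = p10 + p01
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    n = (za * psi ** 0.5 + zb * (psi - delta ** 2) ** 0.5) ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical 2 x 2 discordant proportions.
print(mcnemar_n(p10=0.20, p01=0.10))  # pairs required
```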
NASA Astrophysics Data System (ADS)
Yu, M.; Eglinton, T. I.; Haghipour, N.; Montluçon, D. B.; Wacker, L.; Hou, P.; Zhao, M.
2016-12-01
The transport of organic carbon (OC) by rivers to coastal oceans is an important component of the global carbon cycle. The Yellow River (YR), the second largest river in China, transports large amounts of particulate organic carbon (POC) to the Chinese marginal seas, with fossil and pre-aged (ca. 1600 yr) OC comprising the dominant components. However, the influence of hydrodynamic processes on the origin, composition and age of POC exported by the YR remains poorly understood, yet these processes likely play an important role in ultimately determining OC fate in the Chinese marginal seas. We address this question through bulk, biomarker and carbon isotopic (δ¹³C and Δ¹⁴C) characterization of organic matter associated with different grain size fractions of total suspended particles (TSP) in the YR. Surface TSP samples were collected in the spring, summer, fall and during the Water-Sediment Regulation period (WSR, July) of 2015. TSP samples were separated into five grain-size fractions (<8 μm, 8-16 μm, 16-32 μm, 32-63 μm and >63 μm) for organic geochemical and isotope analysis. Generally, the 16-32 and 32-63 μm fractions contributed most of the TSP mass, and the majority of OC resided in the 16-32 μm fraction. TOC% decreased with increasing grain size, and ¹⁴C ages exhibited significant variability, ranging from 3,335 yr (<8 μm fraction in summer) to 11,120 yr (>63 μm fraction in autumn), but did not show any systematic trend among grain size fractions or across sampling times. In contrast, compound-specific ¹⁴C analysis of long-chain n-fatty acids (C26-30 FAs) revealed two clear patterns: first, C26-30 FA age decreased with increasing grain size for all sampling times; second, the C26-30 FA age difference among the different size fractions was largest during the WSR period and smallest after the WSR. These findings have important implications for our understanding of riverine POC transport mechanisms and their influence on the dispersal and burial efficiency of terrestrial OC in coastal oceans.
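The radiocarbon ages quoted above relate to the measured fraction modern (Fm) through the conventional relation age = -8033 ln(Fm), using the Libby mean life; a one-line conversion is sketched below with hypothetical Fm values chosen to span the reported age range.

```python
import math

def c14_age_yr(fraction_modern):
    """Conventional radiocarbon age from fraction modern (Fm):
    age = -8033 * ln(Fm), using the Libby mean life of 8033 yr."""
    return -8033.0 * math.log(fraction_modern)

# Hypothetical Fm values spanning the reported range of fraction ages
# (~3,300 to ~11,100 yr).
for fm in (0.66, 0.25):
    print(fm, round(c14_age_yr(fm)), "yr BP")
```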
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
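The detection analysis described above maps naturally onto a logistic regression of a detected/missed outcome on scat and site covariates; the sketch below fits such a model to simulated data with statsmodels, with all covariate effects invented for illustration rather than taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400

# Simulated covariates mirroring those in the study (values hypothetical):
pellet_size = rng.normal(2.0, 0.5, n)    # cm
pellet_count = rng.poisson(8, n)
ground_cover = rng.uniform(0, 100, n)    # percent vegetative cover
recent_rain = rng.integers(0, 2, n)

# Assumed true effects: bigger/more pellets aid detection, cover and
# recent rain hinder it. Coefficients are placeholders.
logit = (-1.0 + 1.2 * pellet_size + 0.08 * pellet_count
         - 0.02 * ground_cover - 0.7 * recent_rain)
detected = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack(
    [pellet_size, pellet_count, ground_cover, recent_rain]))
model = sm.Logit(detected.astype(float), X).fit(disp=0)
print(model.summary(xname=["const", "size", "count", "cover", "rain"]))
```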
Ultrasonic Spot Welding of a Rare-Earth Containing ZEK100 Magnesium Alloy: Effect of Welding Energy
NASA Astrophysics Data System (ADS)
Macwan, A.; Chen, D. L.
2016-04-01
Ultrasonic spot welding was used to join a low rare-earth containing ZEK100 Mg alloy at different levels of welding energy, and tensile lap shear tests were conducted to evaluate the failure strength in relation to the microstructural changes. It was observed that dynamic recrystallization occurred in the nugget zone; the grain size increased and microhardness decreased with increasing welding energy, arising from the increasing interface temperature and strain rate. The weld interface experienced severe plastic deformation at a high strain rate, from ~500 to ~2100 s⁻¹, with increasing welding energy from 500 to 2000 J. A relationship between grain size and the Zener-Hollomon parameter, and a Hall-Petch-type relationship between microhardness and grain size, were established. The tensile lap shear strength and failure energy were observed to first increase with increasing welding energy, reach maximum values at 1500 J, and then decrease with a further increase in the welding energy. The samples welded at a welding energy ≤1500 J exhibited an interfacial failure mode, while nugget pull-out occurred in the samples welded at a welding energy above 1500 J. The fracture surfaces showed typical shear failure. Low-temperature tests at 233 K (-40 °C) showed no significant effect on the strength and failure mode of joints welded at the optimal welding energy of 1500 J. Elevated-temperature tests at 453 K (180 °C) revealed a lower failure load but a higher failure energy due to the increased deformability, and showed a mixed mode of partial interfacial failure and partial nugget pull-out.
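The two relationships invoked above have standard forms: the Zener-Hollomon parameter Z = (strain rate)·exp(Q/RT), which links recrystallized grain size to deformation conditions, and a Hall-Petch dependence H = H0 + k·d^(-1/2) for hardness; the constants in the sketch below are placeholders, not the fitted values from the study.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def zener_hollomon(strain_rate, temp_k, q_j_per_mol=135e3):
    """Z = strain_rate * exp(Q / (R * T)). Q here is a placeholder
    typical of Mg alloys, not the study's fitted activation energy."""
    return strain_rate * math.exp(q_j_per_mol / (R * temp_k))

def hall_petch_hardness(d_um, h0=40.0, k=140.0):
    """H = H0 + k * d**-0.5 (d in um, H in HV). Constants hypothetical."""
    return h0 + k * d_um ** -0.5

# Higher welding energy raises both strain rate and temperature; the
# net Z controls the recrystallized grain size.
print(f"Z at 500/s, 600 K:  {zener_hollomon(500.0, 600.0):.3e}")
print(f"Z at 2100/s, 700 K: {zener_hollomon(2100.0, 700.0):.3e}")
print(f"HV at d = 5 um:  {hall_petch_hardness(5.0):.1f}")
print(f"HV at d = 20 um: {hall_petch_hardness(20.0):.1f}")
```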
Size distribution of rare earth elements in coal ash
Scott, Clinton T.; Deonarine, Amrika; Kolker, Allan; Adams, Monique; Holland, James F.
2015-01-01
Rare earth elements (REEs) are utilized in various applications that are vital to the automotive, petrochemical, medical, and information technology industries. As world demand for REEs increases, critical shortages are expected. Due to the retention of REEs during coal combustion, coal fly ash is increasingly considered a potential resource. Previous studies have demonstrated that coal fly ash is variably enriched in REEs relative to feed coal (e.g., Seredin and Dai, 2012) and that enrichment increases with decreasing size fraction (Blissett et al., 2014). In order to further explore the REE resource potential of coal ash, and to determine the partitioning behavior of REEs as a function of grain size, we studied whole coal and fly ash size fractions collected from three U.S. commercial-scale coal-fired generating stations burning Appalachian or Powder River Basin coal. Whole fly ash was separated into <5 μm, 5 to 10 μm, and 10 to 100 μm particle size fractions by mechanical shaking using trace-metal-clean procedures. In these samples, REE enrichment in whole fly ash ranges from 5.6 to 18.5 times that of the feed coals. Partitioning results for size separates relative to whole coal and whole fly ash will also be reported.
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
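The required elements the review audits (treatment effect to detect, variability, alpha, and power) are exactly the inputs of the textbook two-group formula n = 2σ²(z₁₋α/₂ + z₁₋β)²/Δ² per arm; the sketch below implements that standard formula with hypothetical analgesic-trial inputs so the role of each element is explicit.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Textbook sample size per arm for a two-sample comparison of means:
    n = 2 * sigma**2 * (z_{1-alpha/2} + z_{1-beta})**2 / delta**2.
    Each argument is one of the 'required elements' the review audits."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / delta) ** 2)

# Hypothetical analgesic trial: detect a 1-point difference on a 0-10
# pain scale assuming a standard deviation of 2.5 points.
print(n_per_arm(delta=1.0, sigma=2.5))  # ~99 per arm
```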
Olsen, Kim Rose; Sørensen, Torben Højmark; Gyrd-Hansen, Dorte
2010-04-19
Due to a shortage of general practitioners, it may be necessary to improve productivity. We assess the association between productivity, list size, and patient and practice characteristics. A regression approach is used to perform productivity analysis based on national register data and survey data for 1,758 practices. Practices are divided into four groups according to list size and productivity. Statistical tests are used to assess differences in patient and practice characteristics. There is a significant, positive correlation between list size and productivity (p < 0.01). Nevertheless, 19% of the practices have a list size below and a productivity above the mean sample values. These practices have relatively demanding patients (older, low socioeconomic status, high use of pharmaceuticals), are frequently located in areas with limited access to specialized care, and have a low use of assisting personnel. 13% of the practices have a list size above and a productivity below the mean sample values. These practices have relatively less demanding patients, are located in areas with good access to specialized care, and have a high use of assisting personnel. List and practice characteristics have a substantial influence on both productivity and list size. Adjusting list size to external factors seems to be an effective tool to increase productivity in general practice.
NASA Astrophysics Data System (ADS)
Mahmoudi, Soulmaz; Gholizadeh, Ahmad
2018-06-01
In this work, Y3-xSrxFe5-xZrxO12 (0.0 ≤ x ≤ 0.7) samples were synthesized by the citrate precursor method at 1050 °C. The structural and magnetic properties of Y3-xSrxFe5-xZrxO12 were studied using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy and vibrating sample magnetometry. XRD analysis using the X'Pert package shows a pure garnet phase with cubic structure (space group Ia-3d); an SrZrO3 impurity phase is observed when x exceeds 0.6. Rietveld refinement using the Fullprof program shows lattice volume expansion with increasing degree of Sr/Zr substitution. The crystallite sizes remain constant in the range x = 0.0-0.5 and then increase. The different morphologies observed in SEM micrographs of the samples can be related to different values of the microstrain in the samples. The hysteresis loops of the samples reveal superparamagnetic behaviour. The drop in coercivity with increasing substitution originates mainly from a reduction in the magneto-elastic anisotropy energy. The saturation magnetization (MS) varies non-monotonically with increasing Sr/Zr substitution, reaching a maximum of 26.14 emu/g for the sample with x = 0.1 and a minimum of 17.64 emu/g for x = 0.0 and x = 0.2. The variation of MS in these samples results from a superposition of three factors: reduction of Fe3+ on the a-site, change in the FeT-O-FeO angle, and magnetic core size.
Dibble, Clare J; Shatova, Tatyana A; Jorgenson, Jennie L; Stickel, Jonathan J
2011-01-01
An improved understanding of how particle size distribution relates to enzymatic hydrolysis performance and rheological properties could enable enhanced biochemical conversion of lignocellulosic feedstocks. Particle size distribution can change as a result of either physical or chemical manipulation of a biomass sample. In this study, we employed image processing techniques to measure slurry particle size distribution and validated the results by showing that they are comparable to those from laser diffraction and sieving. Particle size and chemical changes of biomass slurries were manipulated independently and the resulting yield stress and enzymatic digestibility of slurries with different size distributions were measured. Interestingly, reducing particle size by mechanical means from about 1 mm to 100 μm did not reduce the yield stress of the slurries over a broad range of concentrations or increase the digestibility of the biomass over the range of size reduction studied here. This is in stark contrast to the increase in digestibility and decrease in yield stress when particle size is reduced by dilute-acid pretreatment over similar size ranges. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
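Image-based particle sizing of the kind used here generally reduces to segmenting particles and tabulating an equivalent diameter per object; the minimal scikit-image sketch below runs on a synthetic binary image and is a generic illustration, not the authors' pipeline.

```python
import numpy as np
from skimage import draw, measure

# Synthetic binary image with a few disk-shaped 'particles'.
img = np.zeros((200, 200), dtype=bool)
for r, c, rad in [(50, 50, 10), (120, 80, 18), (160, 160, 6)]:
    rr, cc = draw.disk((r, c), rad)
    img[rr, cc] = True

# Label connected components and report equivalent circular diameters,
# the usual size metric for irregular particles (here in pixels; a real
# pipeline would calibrate pixels to micrometres).
labels = measure.label(img)
diams = [p.equivalent_diameter for p in measure.regionprops(labels)]
print(sorted(round(d, 1) for d in diams))
```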
NASA Astrophysics Data System (ADS)
Sordillo, Laura A.; Pu, Yang; Sordillo, Peter P.; Budansky, Yury; Alfano, R. R.
2014-05-01
Spectral profiles of tissues from patients with breast carcinoma, malignant carcinoid and non-small cell lung carcinoma were acquired using native fluorescence spectroscopy. A novel spectroscopic ratiometer device (S3-LED) with selective excitation wavelengths at 280 nm and 335 nm was used to produce the emission spectra of the key biomolecules, tryptophan and NADH, in the tissue samples. In each of the samples, analysis of emission intensity peaks from biomolecules showed increased 340 nm/440 nm and 340 nm/460 nm ratios in the malignant samples compared with their paired normal samples, most likely reflecting increased tryptophan-to-NADH ratios in the malignant tissue. Among the non-small cell lung and breast carcinomas, tumors of very large size or poor differentiation appeared to show an even greater increase in the 340 nm/440 nm and 340 nm/460 nm ratios. A marked increase in these ratios was also seen in the samples of malignant carcinoid, which is known to be a highly metabolically active tumor.
Effect of microfluidization on casein micelle size of bovine milk
NASA Astrophysics Data System (ADS)
Sinaga, H.; Deeth, H.; Bhandari, B.
2018-02-01
The properties of milk are likely to depend on casein micelle size, and various processing technologies produce characteristic changes in the average size of casein micelles. The main objective of this study was to manipulate casein micelle size by subjecting milk to microfluidization. The experiment was performed as a randomised complete block design with three replications. The sample was passed through the microfluidizer at set pressures of 83, 97, 112 and 126 MPa for one to six cycles (except at 112 MPa). The results showed that microfluidization reduced the average casein micelle size by about 3% at pressures up to 126 MPa. However, at each pressure, no further reduction was observed when the number of passes was increased up to six cycles. Although the average casein micelle size was similar, elevating the pressure resulted in a narrower size distribution. In contrast, increasing the number of cycles had little effect on the casein micelle size distribution. The findings of this study can be applied in future work to characterize the fundamental and functional properties of the treated milk.
Gaeta, Jereme W; Ahrenstorff, Tyler D; Diana, James S; Fetzer, William W; Jones, Thomas S; Lawson, Zach J; McInerny, Michael C; Santucci, Victor J; Vander Zanden, M Jake
2018-01-01
Body size governs predator-prey interactions, which in turn structure populations, communities, and food webs. Understanding predator-prey size relationships is valuable from a theoretical perspective, in basic research, and for management applications. However, predator-prey size data are limited and costly to acquire. We quantified predator-prey total length and mass relationships for several freshwater piscivorous taxa: crappie (Pomoxis spp.), largemouth bass (Micropterus salmoides), muskellunge (Esox masquinongy), northern pike (Esox lucius), rock bass (Ambloplites rupestris), smallmouth bass (Micropterus dolomieu), and walleye (Sander vitreus). The range of prey total lengths increased with predator total length. The median and maximum ingested prey total length varied with predator taxon and length, but generally ranged from 10-20% and 32-46% of predator total length, respectively. Predators tended to consume larger fusiform prey than laterally compressed prey. With the exception of large muskellunge, predators most commonly consumed prey between 16 and 73 mm. A sensitivity analysis indicated estimates can be very accurate at sample sizes greater than 1,000 diet items and fairly accurate at sample sizes greater than 100. However, sample sizes less than 50 should be evaluated with caution. Furthermore, median log10 predator-prey body mass ratios ranged from 1.9-2.5, nearly 50% lower than values previously reported for freshwater fishes. Managers, researchers, and modelers could use our findings as a tool for numerous predator-prey evaluations from stocking size optimization to individual-based bioenergetics analyses identifying prey size structure. To this end, we have developed a web-based user interface to maximize the utility of our models that can be found at www.LakeEcologyLab.org/pred_prey.
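The headline statistic above, the median log10 predator:prey body mass ratio, is straightforward to compute from paired diet records; the sketch below shows the calculation on invented data that lands within the reported 1.9-2.5 range.

```python
import numpy as np

# Hypothetical paired masses (g) from diet items: predator mass and the
# mass of the prey item recovered from its stomach.
predator_g = np.array([850.0, 1200.0, 430.0, 2600.0, 980.0])
prey_g = np.array([6.1, 14.0, 2.2, 35.0, 4.5])

ratios = np.log10(predator_g / prey_g)
print("median log10 predator:prey mass ratio:", round(np.median(ratios), 2))
```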