The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
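The estimator described in this abstract is simple enough to sketch. Below is a minimal illustration of N = M/P with a delta-method confidence interval whose survey variance is inflated by an assumed RDS design effect; the function names and every numerical input are illustrative, not taken from the study.

```python
import math

def multiplier_estimate(M, p_hat):
    """Population size: unique objects distributed (M) divided by the
    proportion of the RDS survey reporting receipt (p_hat)."""
    return M / p_hat

def size_ci(M, p_hat, n, design_effect=2.0, z=1.96):
    """Approximate 95% CI for N = M/P via the delta method, with the
    survey variance of p_hat inflated by an assumed RDS design effect."""
    var_p = design_effect * p_hat * (1 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat ** 2   # |dN/dp| * SE(p)
    N = multiplier_estimate(M, p_hat)
    return N, N - z * se_N, N + z * se_N

# Example: 1,000 objects distributed; 20% of a 500-person survey report receipt.
# Note how the interval widens as p_hat gets small, as the abstract warns.
print(size_ci(M=1000, p_hat=0.20, n=500))
```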
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. From a biometrical point of view, an optimal sample size should therefore be sought for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Synthesis And Characterization Of Reduced Size Ferrite Reinforced Polymer Composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borah, Subasit; Bhattacharyya, Nidhi S.
2008-04-24
Small-sized Co1-xNixFe2O4 ferrite particles are synthesized by a chemical route. The precursor materials are annealed at 400, 600 and 800 °C. The crystallographic structure and phases of the samples are characterized by X-ray diffraction (XRD). The annealed ferrite samples crystallized into a cubic spinel structure. Transmission Electron Microscopy (TEM) micrographs show that the average particle size of the samples is <20 nm. Particulate magneto-polymer composite materials are fabricated by reinforcing a low density polyethylene (LDPE) matrix with the ferrite samples. The B-H loop study conducted at 10 kHz on the toroid-shaped composite samples shows a reduction in magnetic losses with decreasing size of the filler sample. Magnetic losses are detrimental for applications of ferrites at high powers. The reduction in magnetic loss indicates a possible application of Co-Ni ferrites at high microwave power levels.
An integrated approach to piezoactuator positioning in high-speed atomic force microscope imaging
NASA Astrophysics Data System (ADS)
Yan, Yan; Wu, Ying; Zou, Qingze; Su, Chanmin
2008-07-01
In this paper, an integrated approach to achieve high-speed atomic force microscope (AFM) imaging of large-size samples is proposed, which combines an enhanced inversion-based iterative control technique for the piezotube actuator used in lateral x-y positioning with a dual-stage piezoactuator for vertical z-axis positioning. High-speed, large-size AFM imaging is challenging because, in high-speed lateral scanning over a large area, a large positioning error of the AFM probe relative to the sample can be generated by adverse effects: the nonlinear hysteresis and the vibrational dynamics of the piezotube actuator. In addition, vertical precision positioning of the AFM probe is even more challenging than the lateral scanning because the desired trajectory (i.e., the sample topography profile) is unknown in general, and the probe positioning is also affected by and sensitive to the probe-sample interaction. The main contribution of this article is the development of an integrated approach that combines an advanced control algorithm with an advanced hardware platform. The proposed approach is demonstrated in experiments by imaging a large-size (50 μm) calibration sample at high speed (50 Hz scan rate).
Rasch fit statistics and sample size considerations for polytomous data
Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael
2008-01-01
Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722
Electrical and magnetic properties of nano-sized magnesium ferrite
NASA Astrophysics Data System (ADS)
Smitha, T.; Sheena, X.; Binu, P. J.; Mohammed, E. M.
2015-02-01
Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single phase without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
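The diminishing-marginal-returns argument can be made concrete with the normal-approximation power formula for a two-sided two-sample test. A minimal sketch, assuming a standardized effect of 0.5 and a 5% significance level (both illustrative): each doubling of the group size buys less additional power than the last.

```python
import math
from statistics import NormalDist

def power_two_sample(n_per_group, delta=0.5, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size `delta` with n subjects per group."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta * math.sqrt(n_per_group / 2) - z_alpha)

prev = 0.0
for n in (10, 20, 40, 80, 160, 320):
    pw = power_two_sample(n)
    print(f"n/group={n:4d}  power={pw:.3f}  gain vs previous row={pw - prev:+.3f}")
    prev = pw
```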
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
NASA Astrophysics Data System (ADS)
Dadras, Sedigheh; Davoudiniya, Masoumeh
2018-05-01
This paper sets out to investigate and compare the effects of Ag nanoparticle and carbon nanotube (CNT) doping on the mechanical properties of the Y1Ba2Cu3O7-δ (YBCO) high temperature superconductor. For this purpose, pure and doped YBCO samples were synthesized by the sol-gel method. The microstructural analysis of the samples was performed using X-ray diffraction (XRD). The crystallite size, lattice strain and stress of the pure and doped YBCO samples were estimated by modified forms of Williamson-Hall (W-H) analysis, namely the uniform deformation model (UDM), the uniform deformation stress model (UDSM) and the size-strain plot method (SSP). The results show that the crystallite size, lattice strain and stress of the YBCO samples decreased with Ag nanoparticle and CNT doping.
Sirugudu, Roopas Kiran; Vemuri, Rama Krishna Murthy; Venkatachalam, Subramanian; Gopalakrishnan, Anisha; Budaraju, Srinivasa Murty
2011-01-01
Microwave sintering of materials depends significantly on dielectric, magnetic and conductive losses. Samples with high dielectric and magnetic loss, such as ferrites, can be sintered easily, but low-loss materials such as dielectric resonators (paraelectrics) generate heat with difficulty during microwave interaction. Microwave sintering of materials of these two classes helps in understanding the variation in dielectric and magnetic characteristics with respect to the change in grain size. High-energy ball milled Ni0.6Cu0.2Zn0.2Fe1.98O4-δ and ZnTiO3 are sintered by conventional and microwave methods and characterized for their respective dielectric and magnetic characteristics. The grain size variation with higher copper content is also observed with conventional and microwave sintering. The grain size in microwave sintered Ni0.6Cu0.2Zn0.2Fe1.98O4-δ is found to be much smaller and more uniform in comparison with the conventionally sintered sample. However, the grain size of the microwave sintered sample is almost equal to that of the conventionally sintered sample of Ni0.3Cu0.5Zn0.2Fe1.98O4-δ. In contrast to these high dielectric and magnetic loss ferrites, the paraelectric materials are observed to sinter in the presence of microwaves. Although the microwave sintered zinc titanate sample showed finer and more uniform grains than the conventional samples, the dielectric characteristics of the microwave sintered sample are found to be inferior to those of the conventional sample. The low dielectric constant is attributed to the low density. The smaller grain size is found to be responsible for the low quality factor, and the presence of a small percentage of TiO2 is observed to achieve a temperature stable resonant frequency.
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures.
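A brief sketch of the two ingredients the abstract contrasts: the kappa paradox (identical observed agreement, very different kappa once prevalence is skewed) and a sample size based on a simple proportion of agreement. The formulas are the standard ones for Cohen's kappa and a proportion estimate, not the paper's exact common-correlation-model derivation; all numbers are illustrative.

```python
import math

def kappa_from_agreement(po, prev1, prev2):
    """Cohen's kappa for two raters on a binary outcome, given observed
    agreement po and each rater's positive-call prevalence."""
    pe = prev1 * prev2 + (1 - prev1) * (1 - prev2)  # chance agreement
    return (po - pe) / (1 - pe)

# Kappa paradox: same 90% observed agreement, very different kappa
print(kappa_from_agreement(po=0.90, prev1=0.50, prev2=0.50))  # ~0.80
print(kappa_from_agreement(po=0.90, prev1=0.90, prev2=0.90))  # ~0.44

def n_for_agreement(po, half_width, z=1.96):
    """Subjects needed to estimate a simple proportion of agreement
    to within +/- half_width (normal approximation)."""
    return math.ceil(z ** 2 * po * (1 - po) / half_width ** 2)

print(n_for_agreement(0.90, 0.05))  # ~139 subjects
```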
Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.
NASA Astrophysics Data System (ADS)
Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald
2017-04-01
The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury at thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (<1 mm) soil samples 14259,672, 15401,147, and 67481,96 has provided an insight into how grain size, composition, maturity (i.e., exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and in five size fractions: <25, 25-63, 63-125, 125-250, and <250 μm. Sample 14259,672 is a highly mature highlands regolith with a large proportion of agglutinates [2]. The high agglutinate content (>60%) causes a 'flattening' of the spectrum, with reflectance in the Reststrahlen band (RB) region reduced by as much as 30% in comparison to samples dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift of the Christiansen feature (CF, the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than that of the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra most closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each sample dominates the bulk spectrum regardless of other physical properties. This has implications for surface analyses of other Solar System bodies where some mineral phases or components could be concentrated in a particular size fraction. For example, the anorthite grains in 67481,96 are dominantly >25 μm in size and therefore may not contribute proportionally to the bulk average spectrum (compared to the <25 μm fraction). The resulting bulk spectrum of 67481,96 has a CF position 0.2 μm higher than all size fractions >25 μm and therefore does not represent a true average composition of the sample. Further investigation of how grain size and composition alter the average spectrum is required to fully understand infrared spectra of planetary surfaces. [1] Hiesinger H., Helbert J., and the MERTIS Co-I Team (2010). The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission. Planetary and Space Science, 58, 144-165. [2] NASA Lunar Sample Compendium. https://curator.jsc.nasa.gov/lunar/lsc/
How large a training set is needed to develop a classifier for microarray data?
Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M
2008-01-01
A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
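The dependence of classifier quality on training set size can be illustrated by simulation. A minimal sketch, not the authors' model-based formula: a nearest-centroid classifier trained on synthetic expression data in which the standardized fold change, class prevalence, and number of genes (all illustrative values) play exactly the roles the abstract names.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_train, p_genes=5000, n_informative=20,
                      fold_change=1.0, prevalence=0.5, n_test=500):
    """Test accuracy of a nearest-centroid classifier trained on simulated
    expression data: `n_informative` genes are shifted by `fold_change`
    standard deviations in class 1; the remaining genes are noise."""
    def draw(n):
        while True:
            y = (rng.random(n) < prevalence).astype(int)
            if 0 < y.sum() < n:          # need both classes present
                break
        X = rng.standard_normal((n, p_genes))
        X[:, :n_informative] += fold_change * y[:, None]
        return X, y
    Xtr, ytr = draw(n_train)
    Xte, yte = draw(n_test)
    mu0 = Xtr[ytr == 0].mean(axis=0)
    mu1 = Xtr[ytr == 1].mean(axis=0)
    pred = ((Xte - mu1) ** 2).sum(axis=1) < ((Xte - mu0) ** 2).sum(axis=1)
    return (pred.astype(int) == yte).mean()

# Learning curve: accuracy climbs and then plateaus as the training set grows
for n in (10, 20, 40, 80, 160):
    print(n, round(simulate_accuracy(n), 3))
```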
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
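A Monte Carlo sketch of the two-stage version of the design: several experimental arms and a control are run at stage 1, only the empirically best arm and the control continue to stage 2, and a final z-test is applied. The unadjusted critical value used here is a deliberate simplification; as the output under the null shows, selection inflates the type I error, which is why the literature cited uses adjusted procedures. All sample sizes and effects are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def drop_the_losers(n_stage=50, effects=(0.0, 0.0, 0.0, 0.0),
                    n_trials=20000, crit=1.96):
    """Two-stage drop-the-losers: k experimental arms + control at stage 1;
    only the best arm and the control continue to stage 2. Returns the
    rejection rate of a final naive z-test (known variance 1)."""
    rejections = 0
    for _ in range(n_trials):
        stage1 = [rng.normal(e, 1, n_stage) for e in effects]
        ctrl1 = rng.normal(0, 1, n_stage)
        best = int(np.argmax([a.mean() for a in stage1]))
        # Stage 2: best arm and control only; stage-1 data are pooled in
        arm = np.concatenate([stage1[best], rng.normal(effects[best], 1, n_stage)])
        ctrl = np.concatenate([ctrl1, rng.normal(0, 1, n_stage)])
        z = (arm.mean() - ctrl.mean()) / np.sqrt(1 / arm.size + 1 / ctrl.size)
        rejections += z > crit
    return rejections / n_trials

print("rejection rate under the null:", drop_the_losers())          # > 0.05
print("power when one arm works:", drop_the_losers(effects=(0.5, 0, 0, 0)))
```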
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size.
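The elements the review tallies map one-to-one onto the standard two-sample formula, sketched below with illustrative values; reporting each input (effect, variability, alpha, power) is exactly what makes a calculation replicable.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Standard two-sample sample-size formula. Replicating it requires
    exactly the elements the review counts: treatment effect (delta),
    variability (sd), type I error rate (alpha), and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

# e.g. detect a 1-point difference on a 0-10 pain scale with SD = 2
print(n_per_group(delta=1.0, sd=2.0))  # 85 per group
```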
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with four types of corneal specular microscope: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples in CSM examinations need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
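A plausible reconstruction of the relationship the abstract describes between cell count and relative error, assuming RE is the confidence-interval half-width relative to the mean and that cell-to-cell variability enters through a coefficient of variation; the CV value is illustrative and the exact Cells Analyzer formula may differ.

```python
import math

def relative_error(cv, n, z=1.96):
    """Relative error of a mean estimated from n cells whose measurements
    vary with coefficient of variation cv (normal approximation)."""
    return z * cv / math.sqrt(n)

def customized_sample_size(cv, re_target=0.05, z=1.96):
    """Cells needed so the relative error falls below re_target."""
    return math.ceil((z * cv / re_target) ** 2)

# e.g. endothelial cell measurements with CV = 0.30:
print(relative_error(0.30, 100))     # ~0.059 -> 100 cells not enough
print(customized_sample_size(0.30))  # ~139 cells required
```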
Bony pelvic canal size and shape in relation to body proportionality in humans.
Kurki, Helen K
2013-05-01
Obstetric selection acts on the female pelvic canal to accommodate the human neonate and contributes to pelvic sexual dimorphism. There is a complex relationship between selection for obstetric sufficiency and for overall body size in humans. The relationship between selective pressures may differ among populations of different body sizes and proportions, as pelvic canal dimensions vary among populations. Size and shape of the pelvic canal in relation to body size and shape were examined using nine skeletal samples (total female n = 57; male n = 84) from diverse geographical regions. Pelvic, vertebral, and lower limb bone measurements were collected. Principal component analyses demonstrate pelvic canal size and shape differences among the samples. Male multivariate variance in pelvic shape is greater than female variance for North and South Africans. High-latitude samples have larger and broader bodies, and pelvic canals of larger size and, among females, relatively broader medio-lateral dimensions relative to low-latitude samples, which tend to display relatively expanded inlet antero-posterior (A-P) and posterior canal dimensions. Differences in canal shape exist among samples that are not associated with latitude or body size, suggesting independence of some canal shape characteristics from body size and shape. The South Africans are distinctive with very narrow bodies and small pelvic inlets relative to an elongated lower canal in A-P and posterior lengths. Variation in pelvic canal geometry among populations is consistent with a high degree of evolvability in the human pelvis.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate approach to repeated measures' test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects.
Thanh Noi, Phan; Kappas, Martin
2017-01-01
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
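A compact scikit-learn sketch of the comparison design: the same three classifiers trained at increasing per-class sample sizes and scored on a held-out set. Synthetic data stand in for Sentinel-2 pixels, and all dataset and hyperparameter choices are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a 6-class pixel dataset
X, y = make_classification(n_samples=20000, n_features=10, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000,
                                                  random_state=0, stratify=y)

classifiers = {"RF": RandomForestClassifier(n_estimators=100, random_state=0),
               "kNN": KNeighborsClassifier(n_neighbors=5),
               "SVM": SVC(kernel="rbf", C=1.0)}

for n_per_class in (50, 250, 750):
    n_train = n_per_class * 6
    Xtr, ytr = X_pool[:n_train], y_pool[:n_train]  # roughly balanced pool
    scores = {name: accuracy_score(y_test, clf.fit(Xtr, ytr).predict(X_test))
              for name, clf in classifiers.items()}
    print(n_per_class, {k: round(v, 3) for k, v in scores.items()})
```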
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
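The two families of estimators evaluated above are short to state. A minimal sketch on a synthetic clustered population, with unit area as the auxiliary variable for the ratio estimator as in the study; the population itself is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative population of 100 sampling units: clustered counts
# (many near-empty units, a few large groups) and variable unit areas.
N = 100
areas = rng.uniform(5.0, 15.0, N)
counts = rng.poisson(areas * 0.5 * rng.pareto(2.0, N))
true_total = counts.sum()

def estimate_total(sample_idx):
    """Simple expansion and ratio estimators of the population total from
    a simple random sample of units; unit area is the auxiliary variable."""
    y, x = counts[sample_idx], areas[sample_idx]
    simple = N * y.mean()                      # expansion estimator
    ratio = (y.sum() / x.sum()) * areas.sum()  # ratio-to-area estimator
    return simple, ratio

idx = rng.choice(N, size=33, replace=False)    # 33% sampling intensity
print("true total:", true_total)
print("simple, ratio estimates:", estimate_total(idx))
```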
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N2 adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of ZIF-8 samples.
High energy ball milling study of Fe2MnSn Heusler alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Vivek Kumar, E-mail: vivek.jain129@gmail.com; Lakshmi, N.; Jain, Vishal
The structural and magnetic properties of as-melted and high energy ball milled alloy samples have been studied by X-ray diffraction, DC magnetization and electronic structure calculations by means of density functional theory. The observed properties are compared to those of the bulk sample. There is a marked enhancement of saturation magnetization and coercivity in the nano-sized samples as compared to the bulk, which is explained in terms of structural disordering and size effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asgari, H., E-mail: hamed.asgari@usask.ca; Odeshi, A.G.; Szpunar, J.A.
2015-08-15
The effects of grain size on the dynamic deformation behavior of rolled AZ31B alloy at high strain rates were investigated. Rolled AZ31B alloy samples with grain sizes of 6, 18 and 37 μm were subjected to shock loading tests using a Split Hopkinson Pressure Bar at room temperature and at a strain rate of 1100 s⁻¹. It was found that a double-peak basal texture formed in the shock loaded samples. The strength and ductility of the alloy under the high strain-rate compressive loading increased with decreasing grain size. However, the twinning fraction and strain hardening rate were found to decrease with decreasing grain size. In addition, orientation imaging microscopy showed a higher contribution of double and contraction twins in the deformation process of the coarse-grained samples. Using transmission electron microscopy, pyramidal dislocations were detected in the shock loaded sample, proving the activation of the pyramidal slip system under dynamic impact loading. - Highlights: • A double-peak basal texture developed in all shock loaded samples. • Both strength and ductility increased with decreasing grain size. • Twinning fraction and strain hardening rate decreased with decreasing grain size. • 'g.b' analysis confirmed the presence of dislocations in the shock loaded alloy.
Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi
2016-01-01
Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing the sampling size is an important goal for high resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, the ionization efficiency increased with decreasing solvent flow rate. Our results indicate that a reduced sampling area is compatible with high ionization efficiency when a nanopipette is used. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated. PMID:28101441
NASA Astrophysics Data System (ADS)
Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David
2017-06-01
Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this regard. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. DAAF with a large particle size will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle DAAF, of 40 μm, was crash precipitated and ball milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests were performed examining threshold voltage and correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while those with smaller particle sizes required less energy and could be initiated with smaller diameter flyers.
ZnFe2O4 nanoparticles dispersed in a highly porous silica aerogel matrix: a magnetic study.
Bullita, S; Casu, A; Casula, M F; Concas, G; Congiu, F; Corrias, A; Falqui, A; Loche, D; Marras, C
2014-03-14
We report the detailed structural characterization and magnetic investigation of nanocrystalline zinc ferrite nanoparticles supported on a porous silica aerogel matrix, which differ in size (in the range 4-11 nm) and inversion degree (from 0.4 to 0.2); bulk zinc ferrite, by comparison, has a normal spinel structure. The samples were investigated by zero-field-cooled/field-cooled and thermoremanent DC magnetization measurements, AC magnetization measurements and Mössbauer spectroscopy. The nanocomposites are superparamagnetic at room temperature; the temperature of the superparamagnetic transition in the samples decreases with the particle size and is therefore mainly determined by the inversion degree rather than by the particle size, which would give an opposite effect on the blocking temperature. The contribution of particle interactions to the magnetic behavior of the nanocomposites decreases significantly in the sample with the largest particle size. The values of the anisotropy constant give evidence that the anisotropy constant decreases upon increasing the particle size of the samples. All these results clearly indicate that, even when dispersed at low concentration in a non-magnetic, highly porous and insulating matrix, zinc ferrite nanoparticles show a magnetic behavior similar to that displayed when they are unsupported or dispersed in a similar but denser matrix with higher loading. The effective anisotropy measured for our samples appears to be systematically higher than that measured for supported zinc ferrite nanoparticles of similar size, indicating that this effect probably occurs as a consequence of the high inversion degree.
Pye, Kenneth; Blott, Simon J
2004-08-11
Particle size is a fundamental property of any sediment, soil or dust deposit and can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work, laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
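The core precision constraint behind such a calculation can be sketched as follows: choose the number of scans so that the confidence-interval half-width stays within a chosen fraction of the anticipated ED. This simplified version treats scan-to-scan variation only and omits the MOSFET calibration error that the paper's full scheme propagates; all inputs are illustrative.

```python
import math
from statistics import NormalDist

def n_scans_for_ed(ed_anticipated, sd_scan, precision=0.05, confidence=0.95):
    """Scans needed so the CI half-width is within `precision` x ED.
    Treats scan-to-scan variation only; the paper's full scheme also
    propagates MOSFET calibration error, so this is a lower bound."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sd_scan / (precision * ed_anticipated)) ** 2)

# Same scan-to-scan SD: a lower anticipated ED demands many more scans.
print(n_scans_for_ed(ed_anticipated=10.0, sd_scan=0.7))
print(n_scans_for_ed(ed_anticipated=4.0, sd_scan=0.7))
```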
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
NASA Astrophysics Data System (ADS)
Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.
2015-11-01
Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four size ranges (PM10-2.5, PM2.5-1, PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples at four doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity of the PM samples. Some of the day-to-night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health-related toxicological properties of the PM. The varying toxicological responses evoked by the PM samples show the importance of examining various particle sizes. In particular, the considerable toxicological activity detected in the PM0.2 size range suggests contributions from combustion sources, new particle formation and atmospheric processes.
Jalava, Pasi I; Salonen, Raimo O; Hälinen, Arja I; Penttinen, Piia; Pennanen, Arto S; Sillanpää, Markus; Sandell, Erik; Hillamo, Risto; Hirvonen, Maija-Riitta
2006-09-15
The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized to identify different emission sources. The particulate mass concentration in the accumulation size range (PM1-0.2) was greatly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in the collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM10-2.5), intermodal size range (PM2.5-1), PM1-0.2 and ultrafine (PM0.2) particles to cause cytokine production (TNFα, IL-6, MIP-2) decreased with decreasing particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and on cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.
NASA Technical Reports Server (NTRS)
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
2003-01-01
This paper presents a reliability evaluation methodology for obtaining statistical reliability information on memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.
Influence of sampling window size and orientation on parafoveal cone packing density
Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe
2013-01-01
We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontally and vertically oriented sampling windows of different sizes (320×160 µm, 160×80 µm and 80×40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone density estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results underscore the importance of specifying the size and orientation of the sampling window used to derive cone metric estimates to facilitate comparison of different studies. PMID:24009995
Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.
2007-01-01
This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
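To make the cited algorithm concrete, here is a minimal Python sketch of its core idea only: coarser grains keep an image correlated over larger pixel offsets, so the decay of the autocorrelation curve encodes grain size. Rubin's published method interpolates these curves against calibration images of sieved sand; the 0.5 cutoff and the function names below are illustrative assumptions, not the calibrated procedure.

    import numpy as np

    def autocorr_curve(img, max_lag=30):
        """Normalized autocorrelation of a 2-D grayscale image at horizontal lags."""
        img = img - img.mean()
        return np.array([np.mean(img[:, :-lag] * img[:, lag:])
                         for lag in range(1, max_lag)]) / img.var()

    def grain_size_proxy(img):
        """First pixel lag where correlation drops below 0.5 (toy stand-in
        for the calibrated size estimate)."""
        c = autocorr_curve(img)
        return int(np.argmax(c < 0.5)) + 1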
Low Cost High Value Mars Sample to Orbit
NASA Astrophysics Data System (ADS)
Adler, M.; Guernsey, C.; Sell, S.; Sengupta, A.; Shiraishi, L.
2012-06-01
A mid-size lander, rover, and MAV using the MSL CEDL architecture and a 3-stage Falcon 9 can collect scientifically high-quality Mars surface samples consisting of rock cores collected by a roving platform, and deliver those samples to Mars orbit.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands, which can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
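The quoted percentages can be reproduced from first principles. The following is a minimal sketch (not the authors' Excel calculator), assuming a two-sided z-test with Bonferroni correction across m tests; the function name is an illustrative choice:

    from scipy.stats import norm

    def relative_n(m, alpha=0.05, power=0.80):
        """Sample-size scale factor for a two-sided z-test at family-wise
        level alpha across m Bonferroni-corrected tests."""
        z_alpha = norm.ppf(1 - alpha / (2 * m))  # per-test critical value
        z_beta = norm.ppf(power)                 # power term
        return (z_alpha + z_beta) ** 2

    print(relative_n(1e7) / relative_n(1e6))  # ~1.13: +13% for 10M vs 1M tests
    print(relative_n(10) / relative_n(1))     # ~1.70: +70% for 10 tests vs 1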
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples
NASA Astrophysics Data System (ADS)
Anand Kumar, A.; Jaison, J.; Prabakaran, K.; Nagarajan, R.; Chan, Y. S.
2016-03-01
In this study, the particle size distribution and zeta potential of suspended particles in municipal water and in surface water of Curtin Sarawak's lake were compared, and the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles affects water quality as well as suppressing aquatic photosynthetic systems. A new approach has been carried out in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranged from 180 nm to 1345 nm and the zeta potential values ranged from -8.58 mV to -26.1 mV. Higher zeta potential values were observed in the surface water samples of Curtin Sarawak's lake than in the municipal water. The zeta potential values indicate that the suspended particles are stable and the chance of agglomeration is lower in the lake water samples. Moreover, the effects of physico-chemical parameters on the zeta potential of the water samples were also discussed.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
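A hedged sketch of the Jensen's-inequality effect described above, using a hypothetical two-stage matrix rather than the authors' plant data (the vital rates, fecundity and matrix structure are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    true_surv, true_grow, fecundity = 0.5, 0.3, 2.0  # assumed vital rates

    def growth_rate(surv, grow):
        # 2x2 stage matrix: juveniles either stay or mature; adults reproduce
        A = np.array([[surv * (1 - grow), fecundity],
                      [surv * grow,       surv]])
        return max(abs(np.linalg.eigvals(A)))  # dominant eigenvalue = lambda

    true_lambda = growth_rate(true_surv, true_grow)
    for n in (10, 25, 100, 1000):  # individuals sampled per vital rate
        est = [growth_rate(rng.binomial(n, true_surv) / n,
                           rng.binomial(n, true_grow) / n)
               for _ in range(2000)]
        print(n, np.mean(est) - true_lambda)  # bias shrinks as n grows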
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV.
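The three sample sizes quoted above follow from the standard success-run relation for zero allowed failures, n = ln(1 − C) / ln(R), with confidence C and reliability R. A minimal sketch, assuming 95% confidence throughout:

    import math

    def success_run_n(reliability, confidence=0.95):
        """Zero-failure sample size demonstrating `reliability` at `confidence`."""
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    for r in (0.99, 0.95, 0.90):    # high-, medium-, low-risk reliability levels
        print(r, success_run_n(r))  # -> 299, 59, 29, matching the abstract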
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
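The paper's Lagrange-multiplier scheme accounts for both MOSFET calibration error and scan-to-scan variation; as a much simpler point of reference, the usual normal-theory sample size for estimating a mean to relative precision d at confidence 1 − alpha, given a coefficient of variation cv, behaves the same way (more scans for tighter precision or for a noisier, lower-ED protocol). The cv values below are assumptions for illustration, not the paper's measurements:

    import math
    from scipy.stats import norm

    def n_for_precision(cv, d=0.05, alpha=0.05):
        """Scans needed to estimate a mean within relative precision d."""
        z = norm.ppf(1 - alpha / 2)
        return math.ceil((z * cv / d) ** 2)

    print(n_for_precision(cv=0.14))  # noisier readings: ~31 scans
    print(n_for_precision(cv=0.05))  # tighter readings: ~4 scans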
NMR/MRI with hyperpolarized gas and high Tc SQUID
Schlenga, Klaus; de Souza, Ricardo E.; Wong-Foy, Annjoe; Clarke, John; Pines, Alexander
2000-01-01
A method and apparatus for the detection of nuclear magnetic resonance (NMR) signals and production of magnetic resonance imaging (MRI) from samples combines the use of hyperpolarized inert gases to enhance the NMR signals from target nuclei in a sample and a high critical temperature (Tc) superconducting quantum interference device (SQUID) to detect the NMR signals. The system operates in static magnetic fields of 3 mT or less (down to 0.1 mT), and at temperatures from liquid nitrogen (77K) to room temperature. Sample size is limited only by the size of the magnetic field coils and not by the detector. The detector is a high Tc SQUID magnetometer designed so that the SQUID detector can be very close to the sample, which can be at room temperature.
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
NASA Astrophysics Data System (ADS)
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high throughput imaging systems can be useful to estimate mesozooplankton community size and taxonomic descriptors that can be the base for consistent large scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol has been applied to a high frequency time series (on average every second day between April 2003 and April 2004) of zooplankton obtained at a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred eighty-four mesozooplankton net-collected samples were analysed with a Zooscan and an associated semi-automatic classification technique. The constitution of a learning set of more than 10,000 objects, designed to maximize copepod identification, enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Also, total copepod size spectra underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high throughput imaging systems is of great interest to extract relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative zooplankton analyses are therefore proposed and open the way for further development of zooplankton community indicators of changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalava, Pasi I.; Salonen, Raimo O.; Haelinen, Arja I.
2006-09-15
The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in the mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized for identification of different emission sources. The particulate mass concentration in the accumulation size range (PM1-0.2) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM10-2.5), intermodal size range (PM2.5-1), PM1-0.2 and ultrafine (PM0.2) particles to cause cytokine production (TNFα, IL-6, MIP-2) decreased with decreasing particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.
Rakow, Tobias; El Deeb, Sami; Hahne, Thomas; El-Hady, Deia Abd; AlBishri, Hassan M; Wätzig, Hermann
2014-09-01
In this study, size-exclusion chromatography and high-resolution atomic absorption spectrometry methods have been developed and evaluated to test the stability of proteins during sample pretreatment. This especially includes different storage conditions but also adsorption before or even during the chromatographic process. For the development of the size exclusion method, a Biosep S3000 5 μm column was used for investigating a series of representative model proteins, namely bovine serum albumin, ovalbumin, monoclonal immunoglobulin G antibody, and myoglobin. Ambient temperature storage was found to be harmful to all model proteins, whereas short-term storage up to 14 days could be done in an ordinary refrigerator. Freezing the protein solutions was always complicated and had to be evaluated for each protein in the corresponding solvent. To keep the proteins in their native state a gentle freezing temperature should be chosen, hence liquid nitrogen should be avoided. Furthermore, a high-resolution continuum source atomic absorption spectrometry method was developed to observe the adsorption of proteins on container material and chromatographic columns. Adsorption to any container led to a sample loss and lowered the recovery rates. During the pretreatment and high-performance size-exclusion chromatography, adsorption caused sample losses of up to 33%.
The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.
Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J
2018-07-01
This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed.
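A schematic sketch of the resampling design described above, on synthetic data rather than the patient sample (the true effect and noise level are assumed): draw bootstrap subsamples at several sizes and record how the effect size estimate spreads.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    lesion_load = rng.normal(0, 1, 360)                   # stand-in predictor
    deficit = 0.25 * lesion_load + rng.normal(0, 1, 360)  # small true effect

    for n in (30, 60, 90, 120, 180, 360):
        r_vals = []
        for _ in range(1000):
            idx = rng.choice(360, size=n, replace=True)
            r, p = stats.pearsonr(lesion_load[idx], deficit[idx])
            r_vals.append(r)
        # small n: wide spread, so single studies can badly over- or
        # under-estimate the effect; large n: even tiny effects look stable
        print(n, round(np.mean(r_vals), 3), round(np.std(r_vals), 3))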
Magnetic properties of Apollo 14 breccias and their correlation with metamorphism.
NASA Technical Reports Server (NTRS)
Gose, W. A.; Pearce, G. W.; Strangway, D. W.; Larson, E. E.
1972-01-01
The magnetic properties of Apollo 14 breccias can be explained in terms of the grain size distribution of the interstitial iron, which is directly related to the metamorphic grade of the sample. In samples 14049 and 14313, iron grains less than 500 Å in diameter are dominant, as evidenced by a Richter-type magnetic aftereffect and hysteresis measurements. Both samples are of lowest metamorphic grade. The medium metamorphic-grade sample 14321 and the high-grade sample 14312 both show a logarithmic time-dependence of the magnetization indicative of a wide range of relaxation times and thus grain sizes, but sample 14321 contains a stable remanent magnetization whereas sample 14312 does not. This suggests that small multidomain particles (less than 1 micron) are most abundant in sample 14321 while sample 14312 is magnetically controlled by grains greater than 1 micron. The higher the metamorphic grade, the larger the grain size of the iron controlling the magnetic properties.
Phase transformations in a Cu−Cr alloy induced by high pressure torsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen
2016-04-15
Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy, pre-annealed at 550 °C and 1000 °C in order to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and in dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar from the standpoint of grain size, phase composition, texture analysis and hardness measurements. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We propose new filter-based gene selection techniques, obtained by applying a simple modification to significance analysis of microarrays (SAM). To demonstrate the effectiveness of the proposed methods, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods perform much better when the given data are noisy and the sample size is small; they show relatively robust performance regardless of noise level and sample size, whereas the performance of SAM becomes significantly worse as the noise level increases or the sample size decreases. When sufficient sample replicates are available, SAM and our methods show similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis demonstrate that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
The large sample size fallacy.
Lantz, Björn
2013-06-01
Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature.
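The fallacy is easy to demonstrate: with a very large sample, even a trivial true difference yields an extreme p-value. A minimal sketch with an assumed effect of Cohen's d ≈ 0.02:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    a = rng.normal(0.00, 1, 200_000)
    b = rng.normal(0.02, 1, 200_000)  # trivial true difference
    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(p, d)  # p is astronomically small, yet d ~ 0.02: significant, not important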
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply on a lower spatial scale should be further investigated.
Synthesis and characterization of nanocrystalline mesoporous zirconia using supercritical drying.
Tyagi, Beena; Sidhpuria, Kalpesh; Shaik, Basha; Jasra, Raksh Vir
2006-06-01
Synthesis of nano-crystalline zirconia aerogel was carried out by the sol-gel technique and supercritical drying using n-propanol solvent at and above the supercritical temperature (235-280 °C) and pressure (48-52 bar) of n-propanol. Zirconia xerogel samples have also been prepared by the conventional thermal drying method for comparison with the supercritically dried samples. Crystalline phase, crystallite size, surface area, pore volume, and pore size distribution were determined for all the samples in detail to understand the effect of gel drying methods on these properties. Supercritical drying of zirconia gel was observed to give thermally stable, nano-crystalline, tetragonal zirconia aerogels having high specific surface area and porosity with a narrow and uniform pore size distribution as compared to thermally dried zirconia. With supercritical drying, zirconia samples show the formation of only mesopores, whereas in thermally dried samples a substantial amount of micropores is observed along with mesopores. The samples prepared using supercritical drying yield nano-crystalline zirconia with smaller crystallite size (4-6 nm) as compared to the larger crystallite size (13-20 nm) observed with thermally dried zirconia.
Mächtle, W
1999-01-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
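Both methods above rest on the Stokes relation for a sphere sedimenting in a centrifugal field. A hedged sketch of that relation (the numerical inputs are illustrative assumptions, not values from the paper):

    import math

    def stokes_diameter(t, r, r_m, omega, rho_p, rho_f, eta):
        """Diameter (m) of a sphere sedimenting from meniscus r_m to radius r
        in time t (s) at angular speed omega (rad/s); densities in kg/m^3,
        viscosity eta in Pa.s."""
        return math.sqrt(18 * eta * math.log(r / r_m)
                         / ((rho_p - rho_f) * omega ** 2 * t))

    # e.g. polystyrene latex in water at 40,000 rpm, detected 1 cm past a
    # 6 cm meniscus after 10 minutes -> roughly 70 nm
    omega = 40_000 * 2 * math.pi / 60
    print(stokes_diameter(t=600, r=0.07, r_m=0.06, omega=omega,
                          rho_p=1050, rho_f=998, eta=1.0e-3))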
Liu, Keshun
2008-11-01
Eleven distillers dried grains with solubles (DDGS), processed from yellow corn, were collected from different ethanol processing plants in the US Midwest area. The particle size distribution (PSD) by mass of each sample was determined using a series of six selected US standard sieves: Nos. 8, 12, 18, 35, 60, and 100, and a pan. The original sample and the sieve-sized fractions were measured for surface color and contents of moisture, protein, oil, ash, and starch. Total carbohydrate (CHO) and total non-starch CHO were also calculated. Results show that there was great variation in composition and color among DDGS from different plants. Surprisingly, a few DDGS samples contained unusually high amounts of residual starch (11.1-17.6%, dry matter basis, vs. about 5% for the rest), presumably resulting from modified processing methods. Particle size of DDGS varied greatly within a sample and PSD varied greatly among samples. The 11 samples had a mean value of 0.660 mm for the geometric mean diameter (dgw) of particles and a mean value of 0.440 mm for the geometric standard deviation (Sgw) of particle diameters by mass. The majority had a unimodal PSD, with a mode in the size class between 0.5 and 1.0 mm. Although PSD and color parameters had little correlation with the composition of whole DDGS samples, the distribution of nutrients as well as color attributes correlated well with PSD. In sieved fractions, protein content and the L and a color values correlated negatively with particle size, while contents of oil and total CHO correlated positively. It is highly feasible to fractionate DDGS for compositional enrichment based on particle size, while the extent of PSD can serve as an index of the potential for DDGS fractionation. The above information should be a vital addition to quality and baseline data of DDGS.
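The dgw statistic reported above is conventionally computed ASAE S319-style as the mass-weighted geometric mean of the fraction midpoint diameters. A minimal sketch with assumed retained masses (the 3.35 mm upper bound for the top sieve and the pan's nominal lower bound are also assumptions):

    import math

    # US sieve Nos. 8, 12, 18, 35, 60, 100 openings in mm
    openings = [2.36, 1.70, 1.00, 0.50, 0.25, 0.15]
    mass = [5.0, 10.0, 25.0, 30.0, 20.0, 7.0, 3.0]  # g on each sieve plus pan

    # Midpoint diameter of each fraction: geometric mean of bounding openings
    bounds = [3.35] + openings + [0.075]
    d_mid = [math.sqrt(bounds[i] * bounds[i + 1]) for i in range(len(mass))]

    log_dgw = sum(m * math.log10(d) for m, d in zip(mass, d_mid)) / sum(mass)
    print(10 ** log_dgw)  # geometric mean particle diameter (mm) by mass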
Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.
2016-01-01
We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation. PMID:27313569
Atomically precise (catalytic) particles synthesized by a novel cluster deposition instrument
Yin, C.; Tyo, E.; Kuchta, K.; ...
2014-05-06
Here, we report a new high vacuum instrument which is dedicated to the preparation of well-defined clusters supported on model and technologically relevant supports for catalytic and materials investigations. The instrument is based on deposition of size-selected metallic cluster ions that are produced by a high flux magnetron cluster source. Furthermore, we maximize the throughput of the apparatus by collecting and focusing ions utilizing a conical octupole ion guide and a linear ion guide. The size selection is achieved by a quadrupole mass filter. The new design of the sample holder provides for the preparation of multiple samples on supports of various sizes and shapes in one session. After cluster deposition onto the support of interest, samples will be taken out of the chamber for a variety of testing and characterization.
NASA Astrophysics Data System (ADS)
Venero, I. M.; Mayol-Bracero, O. L.; Anderson, J. R.
2012-12-01
As part of the Puerto Rican African Dust and Cloud Study (PRADACS) and the Ice in Clouds Experiment - Tropical (ICE-T), we sampled giant airborne particles to study their elemental composition, morphology, and size distributions. Samples were collected in July 2011 during field measurements performed by NCAR's C-130 aircraft based at St. Croix, U.S. Virgin Islands. The results presented here correspond to the measurements done during research flight #8 (RF8). Aerosol particles with Dp > 1 um were sampled with the Giant Nuclei Impactor and particles with Dp < 1 um were collected with the Wyoming Inlet. Collected particles were later analyzed using an automated scanning electron microscope (SEM) and manual observation by field emission SEM. We identified the chemical composition and morphology of major particle types in filter samples collected at different altitudes (e.g., 300 ft, 1000 ft, and 4500 ft). Results from the flight upwind of Puerto Rico show that particles in the giant nuclei size range are dominated by sea salt. Samples collected at altitudes of 300 ft and 1000 ft showed the highest number of sea salt particles, and the samples collected at higher altitudes (> 4000 ft) showed the highest concentrations of clay material. HYSPLIT back trajectories showed that the low altitude samples originated in the free troposphere over the Atlantic Ocean, which may account for the high sea salt content, and that the source of the high altitude samples was closer to the Saharan-Sahel desert region; these samples therefore possibly had the influence of African dust. Size distribution results for quartz and unreacted sea-salt aerosols collected on the Giant Nuclei Impactor showed that sample RF08 - 12:05 UTM (300 ft) had a larger mean size (2.936 μm) than all the other samples. Additional information obtained from the Wyoming Inlet on the C-130 aircraft showed that the particles it collected were smaller in size. The different mineral components of the dust have different size distributions, so that a fractionation process could occur during transport. Also, the presence of supermicron sea salt at altitude is important for cloud processes.
NASA Astrophysics Data System (ADS)
Ahmed, Yasser M. Z.; El-Sheikh, Said M.; Ewais, Emad M. M.; Abd-Allah, Asmaa A.; Sayed, Said A.
2017-03-01
Boron carbide powder was synthesized from boric acid and lactose mixtures via a simple procedure. Boric acid and lactose solution mixtures were roasted in a stainless steel pot at 280 °C for 24 h. Boron carbide was obtained by heating the roasted samples under flowing industrial argon gas at 1500 °C for 3 h. The amount of borate ester compound in the roasted samples was highly influenced by the boron/carbon ratio in the starting mixtures and played a versatile role in the properties of the produced boron carbide. High-purity boron carbide powder was produced from the mixture with the lowest boron/carbon ratio of 1:1 without a calcination step. Particle morphology changed from a nano-needle-like structure 8-10 nm in size for the highest-carbon-ratio mixture to a spherical shape >150 nm in size for the lowest. The oxidation resistance of boron carbide is highly dependent on the morphology and grain size of the synthesized powder.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out for varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
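The first calculator's core idea can be sketched for the simplest reverse-catalytic model with no seroreversion, SP(a) = 1 − exp(−SCR · a): each endpoint of a binomial confidence interval for SP maps directly to an SCR endpoint. This toy version ignores the SRR and the full age distribution, which the actual calculators handle; the example counts and age are assumptions:

    import math
    from scipy.stats import norm

    def scr_ci(seropositive, n, age, alpha=0.05):
        """Transform a normal-approximation CI for SP into one for SCR."""
        p = seropositive / n
        z = norm.ppf(1 - alpha / 2)
        half = z * math.sqrt(p * (1 - p) / n)
        to_scr = lambda sp: -math.log(1 - sp) / age
        return to_scr(p - half), to_scr(p + half)

    # e.g. 120 of 400 children seropositive at an average sampling age of 8 years
    print(scr_ci(120, 400, age=8.0))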
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hero, Alfred O.; Rajaratnam, Bala
When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
Hero, Alfred O.; Rajaratnam, Bala
2015-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
Hero, Alfred O.; Rajaratnam, Bala
2015-12-09
When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz
2013-01-01
Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA), and to examine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. In order to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size, simulations were done. Conditions in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; where the number of variables is 2, 3, 5, 10, 50 and 100; and where the relationship between variables is quite high, medium or quite low, were considered. Results: The average classification accuracies of the simulations, which were carried out 1000 times for each possible condition of the trial plan, are given as tables. Conclusion: It is seen that diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method that can be used when the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values. PMID:25207065
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz
2013-03-01
The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA), and to examine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Simulation study. The SIMCA model is performed in two stages. In order to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size, simulations were done. Conditions in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; where the number of variables is 2, 3, 5, 10, 50 and 100; and where the relationship between variables is quite high, medium or quite low, were considered. The average classification accuracies of the simulations, which were carried out 1000 times for each possible condition of the trial plan, are given as tables. It is seen that diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method that can be used when the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values.
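For readers unfamiliar with SIMCA's two stages, a compact sketch follows: fit one principal-component model per class, then assign a new sample to the class with the smallest reconstruction residual. The simulated class data and the component count are assumptions for illustration, not the study's settings:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    class_a = rng.normal(0, 1, (100, 10))  # simulated training data, class A
    class_b = rng.normal(1, 1, (100, 10))  # simulated training data, class B

    models = [PCA(n_components=3).fit(X) for X in (class_a, class_b)]

    def residual(model, x):
        """Distance between x and its projection onto the class PCA subspace."""
        scores = model.transform(x.reshape(1, -1))
        return np.linalg.norm(x - model.inverse_transform(scores))

    x_new = rng.normal(1, 1, 10)
    print(np.argmin([residual(m, x_new) for m in models]))  # predicted class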
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracy of forest biomass maps is inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
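The sample-size sensitivity can be reproduced in miniature. Below is a hedged simulation sketch: the power-law parameters and lognormal noise level are assumptions, not the paper's fits, but they show how estimates scatter at small n and stabilize at large n.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, sigma = 4.0, 0.8, 0.3          # assumed "population" allometry

def fit_allometry(n):
    """Fit height = a * crown_radius^b on log-log axes to n simulated trees."""
    r = rng.uniform(0.5, 6.0, n)                              # crown radii (m)
    h = a_true * r**b_true * np.exp(rng.normal(0, sigma, n))  # lognormal noise
    b_hat, log_a_hat = np.polyfit(np.log(r), np.log(h), 1)
    return np.exp(log_a_hat), b_hat

for n in (20, 100, 10_000):
    fits = np.array([fit_allometry(n) for _ in range(500)])
    print(f"n={n:6d}  a: {fits[:, 0].mean():.2f}±{fits[:, 0].std():.2f}  "
          f"b: {fits[:, 1].mean():.2f}±{fits[:, 1].std():.2f}")
```

Parameter scatter at small n translates into biased height (and hence biomass) predictions once the fit is back-transformed out of log space.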
Thermal conductivity of graphene mediated by strain and size
Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang; ...
2016-06-09
Based on first-principles calculations and a full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size- and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, the k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelengths smaller than 10 nm contribute 80% of the intrinsic room-temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. The k of graphene can be tuned over a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering the k of graphene by varying strain and size.
Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.
1999-01-01
Fly ash samples were collected in November and December of 1994, from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 µm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the −500 mesh [−25 µm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the −325 mesh (−42 µm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.
Effect of high pressure processing on dispersive and aggregative properties of almond milk.
Dhakal, Santosh; Giusti, M Monica; Balasubramaniam, V M
2016-08-01
A study was conducted to investigate the impact of high-pressure (450 and 600 MPa at 30 °C) and thermal (72, 85 and 99 °C at 0.1 MPa) treatments on the dispersive and aggregative characteristics of almond milk. Experiments were conducted using a kinetic pressure testing unit and a water bath. Particle size distribution, microstructure, UV absorption spectra, pH and color changes of processed and unprocessed samples were analyzed. Raw almond milk showed a monomodal particle size distribution with average particle diameters of 2 to 3 µm. Thermal or pressure treatment of almond milk shifted the particle size distribution towards the right and increased particle size five- to six-fold. Micrographs confirmed that both treatments increased particle size through aggregation of macromolecules. Pressure treatment produced relatively more, and larger, aggregates than heat treatment. The apparent aggregation rate constants for the 450 MPa and 600 MPa processed samples were k450MPa,30°C = 0.0058 s⁻¹ and k600MPa,30°C = 0.0095 s⁻¹, respectively. This study showed that the dispersive and aggregative properties of high-pressure- and heat-treated almond milk differ due to differences in protein denaturation, particle coagulation and aggregate morphology. Knowledge gained from the study will help food processors formulate novel plant-based beverages treated with high pressure. © 2015 Society of Chemical Industry.
Van Berkel, Gary J.
2015-10-06
A system and method for analyzing the chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining the chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, in which the spatial resolution is limited by the size of the pins used to obtain tissue samples, not by the size of the sampling device used to solubilize the samples coupled to the pins.
NASA Astrophysics Data System (ADS)
Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.
2012-12-01
Early studies pioneering the method of catchment-wide erosion rates measured 10Be in alluvial sediment sampled at river mouths, using the sand-size grain fraction from the riverbeds to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment, and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher-energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125-0.710 mm fraction or a 0.125-4 mm fraction (depending on how much of the former was available). After measuring 10Be in these 8 samples and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e., negative rates). We therefore hypothesize that the coarser grain sizes we included preferentially sample a smaller upstream area, not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions, 0.125-0.710 mm, 0.710-4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in which grain size fraction yields the highest erosion rate, generally the coarser grain size fractions have higher erosion rates. More significant are the results of calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710-4 mm) sample an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not average the entire upstream area, even when using the sand-size fraction, making the resulting erosion rates more pertinent to the sample point than to the entire catchment.
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit-of-detection (LOD) of 5 × 10³ pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
Study of structural and magnetic properties of melt spun Nd2Fe13.6Zr0.4B ingot and ribbon
NASA Astrophysics Data System (ADS)
Amin, Muhammad; Siddiqi, Saadat A.; Ashfaq, Ahmad; Saleem, Murtaza; Ramay, Shahid M.; Mahmood, Asif; Al-Zaghayer, Yousef S.
2015-12-01
Nd2Fe13.6Zr0.4B hard magnetic material was prepared using the arc-melting technique on a water-cooled copper hearth kept under an argon gas atmosphere. The prepared samples, Nd2Fe13.6Zr0.4B ingot and ribbon, were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM) for crystal structure determination and morphological studies, respectively. The magnetic properties of the samples were explored using a vibrating sample magnetometer (VSM). The lattice constants increased slightly due to the difference between the ionic radii of Fe and Zr. The bulk density decreased due to the smaller molar weight and lower density of Zr compared to Fe. The ingot sample shows an almost single crystalline phase with larger crystallite sizes, whereas the ribbon sample shows a mixture of amorphous and crystalline phases with smaller crystallite sizes. The crystallinity of the material was strongly affected by high thermal treatments. Magnetic measurements show noticeable variation in magnetic behavior with the change in crystallite size: the ingot sample shows soft magnetic behavior, while the ribbon shows hard magnetic behavior.
Willan, Andrew R
2016-07-05
The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study, two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment-by-center interaction, raising the issue of how the interaction affects the sample size requirements. The sample size required to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue, a Bayesian decision model is proposed that combines the information on the between-treatment difference in the probability of preterm birth from PS3 with data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study relating preterm birth to perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment-by-center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that the pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Simulation and sensitivity analysis are a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment-by-center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
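The center-to-center variability assumption lends itself to a compact power simulation. The sketch below is an illustrative re-creation of the logic, not the PS3 code: the Beta parameters are solved from the stated mean (0.3) and coefficient of variation (0.3), while the risk ratio, test, and center count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
mean, cv = 0.30, 0.30
var = (cv * mean) ** 2
# Solve Beta(a, b) from its moments: mean = a/(a+b), var = mean(1-mean)/(a+b+1).
s = mean * (1 - mean) / var - 1
a, b = mean * s, (1 - mean) * s

def simulate_power(n_per_arm, n_centers, risk_ratio=0.7, n_sims=2000):
    hits = 0
    for _ in range(n_sims):
        p_ctrl = rng.beta(a, b, n_centers)      # center-specific control risks
        m = n_per_arm // n_centers              # patients per center per arm
        x_c = rng.binomial(m, p_ctrl).sum()
        x_t = rng.binomial(m, np.clip(p_ctrl * risk_ratio, 0, 1)).sum()
        n = m * n_centers
        p_pool = (x_c + x_t) / (2 * n)          # two-proportion z-test
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
        hits += (x_c / n - x_t / n) / se > 1.96
    return hits / n_sims

print(simulate_power(700, 20))
```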
Controlled synthesis and luminescence properties of CaMoO4:Eu3+ microcrystals
NASA Astrophysics Data System (ADS)
Xie, Ying; Ma, Siming; Wang, Yu; Xu, Mai; Lu, Chengxi; Xiao, Linjiu; Deng, Shuguang
2018-03-01
Pure tetragonal-phase Ca0.9MoO4:0.1Eu3+ (CaMoO4:Eu3+) microcrystals with varying particle sizes were prepared via a co-deposition in water/oil (w/o) phase method. The particle sizes of the as-prepared samples were controlled by calcination temperature and time, and the crystallinity of the samples improves with increasing particle size. The luminescence properties of the CaMoO4:Eu3+ microcrystals were studied as a function of particle size. The results reveal that the emission intensity of the CaMoO4:Eu3+ samples increases with increasing particle size, the two being closely correlated. The luminescence lifetime also varies with particle size: it decreases from 0.637 ms to 0.447 ms as particle size increases from 0.12 μm to 1.79 μm. This study not only provides information on the size-dependent luminescence properties of CaMoO4:Eu3+ but also gives a reference for potential applications in high-voltage electric porcelain materials.
Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; ...
2016-06-02
We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
Huang, Tao; He, Jiang
2017-01-01
Extracellular vesicles (EVs) have recently attracted substantial attention due to their potential diagnostic and therapeutic relevance. Although a variety of techniques have been used to isolate and analyze EVs, none is yet fully satisfactory. Size-exclusion chromatography (SEC), which separates analytes by size, has been widely applied in protein purification and analysis. The purpose of this chapter is to show the application of size-exclusion high-performance liquid chromatography (HPLC) as a method for characterizing small-sized impurities or contaminants in EV preparations, and thus for assaying the purity of EV samples.
Contrasting Size Distributions of Chondrules and Inclusions in Allende CV3
NASA Technical Reports Server (NTRS)
Fisher, Kent R.; Tait, Alastair W.; Simon, Justin I.; Cuzzi, Jeff N.
2014-01-01
There are several leading theories on the processes that led to the formation of chondrites, e.g., sorting by mass, by X-winds, by turbulent concentration, and by photophoresis. The juxtaposition of refractory inclusions (CAIs) and less refractory chondrules is central to these theories, and there is much to be learned from their relative size distributions. There have been a number of studies of the size distributions of particles in chondrites, but only on relatively small scales, primarily for chondrules, and rarely for both Calcium-Aluminum-rich Inclusions (CAIs) and chondrules in the same sample. We have implemented macro-scale (25 cm diameter sample) and high-resolution micro-scale sampling of the Allende CV3 chondrite to create a complete data set of size frequencies for CAIs and chondrules.
Quantitative characterisation of sedimentary grains
NASA Astrophysics Data System (ADS)
Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.
2016-04-01
Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin-section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin-section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments of the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
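Several of the listed parameters have closed-form definitions from the traced boundary alone. As one hedged example (not the paper's Mathematica implementation), circularity can be computed from a boundary polygon via 4πA/P²:

```python
import numpy as np

def circularity(x, y):
    """4*pi*Area / Perimeter^2: 1 for a circle, smaller for irregular grains."""
    # Shoelace formula for the polygon area.
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    perim = np.sum(np.hypot(np.diff(np.append(x, x[0])),
                            np.diff(np.append(y, y[0]))))
    return 4 * np.pi * area / perim**2

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(f"circle: {circularity(np.cos(t), np.sin(t)):.3f}")   # ~1.0
print(f"square: {circularity(np.array([0, 1, 1, 0]), np.array([0, 0, 1, 1])):.3f}")  # ~0.785
```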
EPICS Controlled Collimator for Controlling Beam Sizes in HIPPO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napolitano, Arthur Soriano; Vogel, Sven C.
2017-08-03
Controlling the beam spot size and shape in a diffraction experiment determines the probed sample volume. HIPPO, the High-Pressure-Preferred Orientation neutron time-of-flight diffractometer, is located at the Lujan Neutron Scattering Center at Los Alamos National Laboratory. HIPPO characterizes microstructural parameters, such as phase composition, strains, grain size, or texture, of bulk (cm-sized) samples. In the current setup, the beam spot has a 10 mm diameter. Using a collimator consisting of two pairs of neutron-absorbing boron-nitride slabs, the horizontal and vertical dimensions of a rectangular beam spot can be defined. Using the HIPPO robotic sample changer for sample motion, the collimator would enable scanning of, e.g., cylindrical samples along the cylinder axis by probing slices of such samples. The project presented here describes the implementation of such a collimator, in particular the motion control software. We utilized the EPICS (Experimental Physics and Industrial Control System) software interface to integrate the collimator control into the HIPPO instrument control system. Using EPICS, commands are sent to commercial stepper motors that move the beam windows.
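For context, collimator moves in such a system reduce to writing setpoints to EPICS process variables (PVs). The sketch below uses the real pyepics caput/caget calls, but the PV names are hypothetical placeholders, not HIPPO's actual records:

```python
import epics  # pyepics channel-access client

def set_beam_window(width_mm, height_mm):
    """Drive the two slab pairs to define a rectangular beam spot."""
    epics.caput("HIPPO:COLL:H_GAP", width_mm, wait=True)   # horizontal pair
    epics.caput("HIPPO:COLL:V_GAP", height_mm, wait=True)  # vertical pair
    return epics.caget("HIPPO:COLL:H_GAP"), epics.caget("HIPPO:COLL:V_GAP")

print(set_beam_window(5.0, 10.0))
```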
ERIC Educational Resources Information Center
Seo, Dong Gi; Hao, Shiqi
2016-01-01
Differential item/test functioning (DIF/DTF) are routine procedures to detect item/test unfairness as an explanation for group performance difference. However, unequal sample sizes and small sample sizes have an impact on the statistical power of the DIF/DTF detection procedures. Furthermore, DIF/DTF cannot be used for two test forms without…
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Batista, G. T.
1984-01-01
A procedure to estimate wheat (Triticum aestivum L.) area using a sampling technique based on aerial photographs and digital LANDSAT MSS data is developed. Aerial photographs covering 720 square km are visually analyzed. To estimate wheat area, a regression approach is applied using different sample sizes and various sampling units. As the size of the sampling unit decreased, the percentage of sampled area required to obtain similar estimation performance also decreased. The lowest percentage of area sampled for wheat estimation with relatively high precision and accuracy through regression estimation is 13.90%, using 10 square km as the sampling unit. Wheat area estimation using only aerial photographs is less precise and accurate than that obtained by regression estimation.
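The regression approach referred to here is the classical survey regression estimator, which corrects the sample mean of the photo-derived wheat area using an auxiliary variable known for every unit (here, the LANDSAT-classified area). A minimal sketch with made-up numbers; the 72 units and 10-unit sample merely echo the 720 km² scene at a 10 km² unit size:

```python
import numpy as np

def regression_estimate(y_sample, x_sample, x_mean_population):
    """Classical regression estimator of the population mean of y."""
    b = np.polyfit(x_sample, y_sample, 1)[0]        # slope of y on x
    return y_sample.mean() + b * (x_mean_population - x_sample.mean())

rng = np.random.default_rng(3)
x_pop = rng.uniform(0, 5, 72)                       # LANDSAT wheat km^2, all units
y_pop = 0.9 * x_pop + rng.normal(0, 0.2, 72)        # "true" photo wheat km^2
idx = rng.choice(72, size=10, replace=False)        # ~14% of the area sampled
est = regression_estimate(y_pop[idx], x_pop[idx], x_pop.mean())
print(f"estimated mean wheat area per unit: {est:.2f} km^2 "
      f"(true {y_pop.mean():.2f})")
```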
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are larger sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grassland. Only 15.7% of the woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in the number of species with sampled area confirmed power-law relationships for the forest stand and for the LNSS and SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection when samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size is also needed to obtain an increased sample area if the number of samples is to remain relatively low.
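The power-law species-area relationship reported above, and a minimum area derived from it, can be sketched in a few lines; the areas and species counts below are invented stand-ins for the paper's germination data:

```python
import numpy as np

area = np.array([0.01, 0.1, 0.5, 1.0, 4.0])   # sampled area (m^2), assumed
species = np.array([2, 5, 9, 12, 21])         # woody species detected, assumed

# Fit the species-area power law S = c * A^z on log-log axes.
z, log_c = np.polyfit(np.log(area), np.log(species), 1)
c = np.exp(log_c)
target = 20                                   # species detection target, assumed
min_area = (target / c) ** (1 / z)            # invert S = c * A^z for A
print(f"S = {c:.1f} * A^{z:.2f}; area for {target} species ≈ {min_area:.1f} m^2")
```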
Galloway, Joel M.; Blanchard, Robert A.; Ellison, Christopher A.
2011-01-01
Most of the bedload samples from the Maple River, Wild Rice River, Rush River, Buffalo River, and Red River sites had particle sizes in the 0.5 to 1 millimeter and 0.25 to 0.5 millimeter ranges. The Rush and Lower Branch Rush Rivers also had a greater portion of larger particle sizes in the 1 to 2 millimeter range. The Sheyenne River sites had a greater portion of smaller particle sizes in the bedload, in the 0.125 to 0.5 millimeter range, compared to the other sites. The bed material in samples collected during the 2011 spring high-flow event showed a wider distribution of particle sizes than was observed in the bedload; the coarsest material was found at the Red River near Christine and the Lower Branch Rush River, and the finest material at the Sheyenne River sites.
NASA Astrophysics Data System (ADS)
Liu, Yichi; Liu, Debao; You, Chen; Chen, Minfang
2015-09-01
The aim of this study was to investigate the effect of grain size on the corrosion resistance of pure magnesium developed for biomedical applications. High-purity magnesium samples with different grain sizes were prepared by cooling-rate-controlled solidification. Electrochemical and immersion tests were employed to measure the corrosion resistance of pure magnesium with different grain sizes. The electrochemical polarization curves indicated that corrosion susceptibility increased as grain size decreased. However, the electrochemical impedance spectroscopy (EIS) and immersion tests indicated that the corrosion resistance of pure magnesium improves as grain size decreases. The improvement in corrosion resistance is attributed to finer grains producing a more uniform and dense film on the sample surface.
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Candela, Philip A.; Zhu, Wenlu; Kaufman, Alan J.
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO2) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO2 can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the low size ranges of the extracted shale particles. Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r2 = 0.78); and 2) both yield and surface area increase with increasing TOC content (r2 = 0.97 and 0.86, respectively). Given that supercritical CO2 is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
Crack identification and evolution law in the vibration failure process of loaded coal
NASA Astrophysics Data System (ADS)
Li, Chengwu; Ai, Dihao; Sun, Xiaoyuan; Xie, Beijing
2017-08-01
To study the characteristics of coal cracks produced during vibration failure, we set up a failure test simulation system applying static loads and combined static and dynamic loads, and prepared coal samples with different particle sizes, formation pressures, and firmness coefficients. Through static-load damage testing of the coal samples followed by combined dynamic (vibration exciter) and static (jack) destructive testing, crack images of the coal samples under load were obtained. Combined with digital image processing technology, a high-precision, real-time crack identification algorithm is proposed. Taking the crack features of the coal samples under different load conditions as the research object, we analyzed the distribution of cracks on the surface of the coal samples and the factors influencing crack evolution using the proposed algorithm and a high-resolution industrial camera. Experimental results showed that the major portion of the cracking after excitation is located at the rear of the coal sample, where the vibration exciter cannot act. Under the same disturbance conditions, crack size and particle size exhibit a positive correlation, while crack size and formation pressure exhibit a negative correlation. Soft coal is more prone to crack evolution than hard coal, and more easily undergoes instability failure. The experimental results and the crack identification algorithm provide a solid basis for the prevention and control of instability and failure of coal and rock mass, and are helpful in improving the monitoring of coal and rock dynamic disasters.
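The paper's exact identification algorithm is not reproduced here; the sketch below shows one plausible minimal pipeline for segmenting dark crack pixels from a grayscale coal-surface image with OpenCV, with a synthetic image standing in for the camera frames:

```python
import cv2
import numpy as np

def crack_mask(gray_image):
    """Return a binary mask of dark crack pixels."""
    blur = cv2.GaussianBlur(gray_image, (5, 5), 0)
    # Cracks are darker than the coal surface: adaptive inverse threshold.
    mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 10)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise

img = np.full((200, 200), 180, np.uint8)      # synthetic bright coal surface
cv2.line(img, (20, 30), (180, 170), 60, 2)    # synthetic dark "crack"
mask = crack_mask(img)
print("crack pixels:", int((mask > 0).sum()))
```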
Šmarda, Petr; Bureš, Petr; Horová, Lucie
2007-01-01
Background and Aims: The spatial and statistical distribution of genome sizes, and the adaptivity of genome size to certain types of habitat, vegetation or microclimatic conditions, were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is taken as a model for the study of the initial stages of genome size differentiation. Methods: Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results: At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positive skew), much as the distribution across the Angiosperms as a whole is log-normal. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions: These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127-132). PMID:17565968
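The distributional comparison described above is straightforward to reproduce in outline. In the sketch below, simulated values stand in for the flow-cytometry measurements (loc and scale are arbitrary), and the Gumbel fit is compared with a normal fit by log-likelihood and a Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# 172 simulated relative genome sizes, echoing the paper's sample count.
genome_size = stats.gumbel_r.rvs(loc=1.0, scale=0.02, size=172, random_state=rng)

for name, dist in [("gumbel", stats.gumbel_r), ("normal", stats.norm)]:
    params = dist.fit(genome_size)
    ll = dist.logpdf(genome_size, *params).sum()
    ks = stats.kstest(genome_size, dist.cdf, args=params).pvalue
    print(f"{name:6s}  log-likelihood={ll:8.1f}  KS p-value={ks:.2f}")
```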
High-concentration zeta potential measurements using light-scattering techniques
Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew
2010-01-01
Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896
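For reference, the standard conversion behind such laser Doppler measurements, in the thin-double-layer (Smoluchowski) limit, is ζ = ημ/ε; a worked sketch with illustrative numbers:

```python
# Smoluchowski conversion from electrophoretic mobility to zeta potential.
eta = 0.00089            # viscosity of water at 25 C (Pa.s)
eps = 78.5 * 8.854e-12   # permittivity of water (F/m)
mobility = -3.0e-8       # electrophoretic mobility (m^2/V/s), assumed value

zeta_mV = eta * mobility / eps * 1e3
print(f"zeta potential ≈ {zeta_mV:.1f} mV")   # ≈ -38 mV for these inputs
```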
NASA Astrophysics Data System (ADS)
Presley, Marsha A.; Craddock, Robert A.
2006-09-01
A line-heat source apparatus was used to measure thermal conductivities of natural fluvial and eolian particulate sediments under low pressures of a carbon dioxide atmosphere. These measurements were compared to a previous compilation of the dependence of thermal conductivity on particle size to determine a thermal conductivity-derived particle size for each sample. Actual particle-size distributions were determined via physical separation through brass sieves. Comparison of the two analyses indicates that the thermal conductivity reflects the larger particles within the samples. In each sample at least 85-95% of the particles by weight are smaller than or equal to the thermal conductivity-derived particle size. At atmospheric pressures less than about 2-3 torr, samples that contain a large amount of small particles (<=125 μm or 4 Φ) exhibit lower thermal conductivities relative to those for the larger particles within the sample. Nonetheless, 90% of the sample by weight still consists of particles that are smaller than or equal to this lower thermal conductivity-derived particle size. These results allow further refinement in the interpretation of geomorphologic processes acting on the Martian surface. High-energy fluvial environments should produce poorer-sorted and coarser-grained deposits than lower energy eolian environments. Hence these results will provide additional information that may help identify coarser-grained fluvial deposits and may help differentiate whether channel dunes are original fluvial sediments that are at most reworked by wind or whether they represent a later overprint of sediment with a separate origin.
Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field
NASA Astrophysics Data System (ADS)
Cameron, E.; Driver, S. P.
2009-01-01
Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high-redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as against the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes, namely “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low- and high-redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high-redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shapes of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0, where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible), we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.
Sampling and data handling methods for inhalable particulate sampling. Final report, Nov 78-Dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes the methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research on sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were held constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhagwat, Mahesh; Ramaswamy, Veda
Nanocrystalline zirconia powder with a fairly narrow particle size distribution has been synthesized by the amorphous citrate route. The sample obtained has a high BET surface area of 89 m² g⁻¹. Rietveld refinement of the powder X-ray diffraction (XRD) profile of the zirconia sample confirms stabilization of zirconia in the tetragonal phase with around 8% monoclinic impurity. The data show the presence of both anionic and cationic vacancies in the lattice. The crystallite size determined from XRD is 8 nm and is in close agreement with the particle size determined by TEM. An in situ high-temperature X-ray diffraction (HTXRD) study revealed high thermal stability of the mixture up to around 1023 K, after which transformation of the tetragonal phase into the monoclinic phase was observed as a function of temperature up to 1473 K. This transformation is accompanied by an increase in the crystallite size of the sample from 8 to 55 nm. The thermal expansion coefficients are 9.14 × 10⁻⁶ K⁻¹ along the 'a'-axis and 15.8 × 10⁻⁶ K⁻¹ along the 'c'-axis. The lattice thermal expansion coefficient in the temperature range 298-1623 K is 34.6 × 10⁻⁶ K⁻¹.
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
2016-03-09
Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of the myocardial salvage index (MSI). We assessed how using MSI affects the number of patients needed to reach statistical power in clinical cardioprotection trials, relative to MI size alone and to levels of biochemical markers, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100,000 clinical trials were simulated to calculate the sample size required to reach sufficient power. For a treatment effect of a 25% decrease in the outcome variable, 50 patients were required in each arm using MSI, compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak), respectively, in order to reach a power of 90%. If the average CMR scan day differed by 1 day between treatment and control arms, the sample size would need to be increased by 54% (77 vs 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can thus be reduced by 46% to 65% without compromising statistical power when MSI by CMR is used as the outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
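The trial-simulation logic can be illustrated compactly. The means and standard deviations below are assumptions chosen only to make MSI the endpoint with the smaller relative spread, not the CHILL-MI/MITOCARE estimates; the point is that a less variable endpoint reaches 90% power with fewer patients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def power(n, mean, sd, effect=0.25, n_sims=5000):
    """Two-sample t-test power for a 25% relative reduction in the mean."""
    a = rng.normal(mean, sd, (n_sims, n))                  # control arm
    b = rng.normal(mean * (1 - effect), sd, (n_sims, n))   # treated arm
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return (p < 0.05).mean()

# Assumed endpoint distributions: MSI less variable than MI size alone.
for label, mean, sd in [("MSI", 50, 20), ("MI size", 20, 12)]:
    n = 50
    while power(n, mean, sd) < 0.90:
        n += 5
    print(f"{label:8s} needs ~{n} patients per arm for 90% power")
```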
Size dependent exchange bias in single-phase Zn0.3Ni0.7Fe2O4 ferrite nanoparticles
NASA Astrophysics Data System (ADS)
Mohan, Rajendra; Ghosh, Mritunjoy Prasad; Mukherjee, Samrat
2018-07-01
We report the microstructural and magnetic characterization of single-phase nanocrystalline, partially inverted Zn0.3Ni0.7Fe2O4 mixed spinel ferrite. The samples were annealed at 200 °C, 400 °C, 600 °C, 800 °C and 1000 °C. X-ray diffraction results indicate the phase purity of all the samples, and application of the Debye-Scherrer equation yielded crystallite sizes varying from 5 nm to 33 nm across the different samples. Magnetic measurements revealed the freezing of interfacial spins, which caused the large horizontal M-H loop shift and hence a large exchange bias with high anisotropy. The measurements at 5 K show a hysteresis loop with a high effective anisotropy constant due to highly disordered surface spins.
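For reference, the Scherrer estimate applied above is D = Kλ/(β cos θ); a worked sketch with an illustrative peak (Cu Kα wavelength and shape factor K = 0.9 assumed):

```python
import numpy as np

def scherrer(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from an XRD peak (Cu K-alpha by default)."""
    theta = np.radians(two_theta_deg / 2)
    beta = np.radians(fwhm_deg)        # peak width (FWHM) in radians
    return K * wavelength_nm / (beta * np.cos(theta))

print(f"D = {scherrer(35.5, 1.6):.1f} nm")   # broad peak -> ~5 nm crystallite
```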
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
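momi's own API is not reproduced here. As a hedged companion sketch, the following merely tabulates the object the method computes expectations of, the observed joint SFS, from simulated per-population derived-allele counts:

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, sites = 10, 8, 5000                 # haploid sample sizes, # of sites
# Simulated derived-allele counts per site in each of two populations.
c1 = rng.binomial(n1, rng.beta(0.5, 2.0, sites))
c2 = rng.binomial(n2, rng.beta(0.5, 2.0, sites))

joint_sfs = np.zeros((n1 + 1, n2 + 1), dtype=int)
np.add.at(joint_sfs, (c1, c2), 1)           # joint_sfs[i, j] = # sites with (i, j)
joint_sfs[0, 0] = joint_sfs[n1, n2] = 0     # drop monomorphic configurations
print(joint_sfs.sum(), "polymorphic sites tabulated")
```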
NASA Astrophysics Data System (ADS)
Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav
2018-04-01
Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with whole particles often lack a proper evaluation of the real PM size distribution and a characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during PM sample manipulation, as well as the semiquantitative elemental composition of single particles, were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) PM fractions were collected by a high-volume cascade impactor in the Prague city center. Particles were examined using electron microscopy and their elemental composition was determined by energy-dispersive X-ray spectroscopy. Larger or smaller particles, not corresponding to the impaction cut points, were found in all fractions, as they occur in agglomerates and are impacted according to their aerodynamic diameter. The elemental composition of particles in the size-segregated fractions varied significantly. Ns-soot occurred in all size fractions. Metallic nanospheres were found in the accumulation fractions, but not in the ultrafine fraction, where ns-soot, carbonaceous particles, and inorganic salts were identified. Dynamic light scattering was used to measure particle size distribution in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both the lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.
Porous silicon structures with high surface area/specific pore size
Northrup, M.A.; Yu, C.M.; Raley, N.F.
1999-03-16
Fabrication and use of porous silicon structures to increase the surface area of heated reaction chambers, electrophoresis devices, thermopneumatic sensor-actuators, chemical preconcentrators, and filtering or flow control devices. In particular, such high-surface-area or specific-pore-size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high-surface-area porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific-pore-size porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.
Bergmann's rule is maintained during a rapid range expansion in a damselfly.
Hassall, Christopher; Keat, Simon; Thompson, David J; Watts, Phillip C
2014-02-01
Climate-induced range shifts result in the movement of a sample of genotypes from source populations to new regions. The phenotypic consequences of those shifts depend upon the sample characteristics of the dispersive genotypes, which may act to either constrain or promote phenotypic divergence, and the degree to which plasticity influences the genotype-environment interaction. We sampled populations of the damselfly Erythromma viridulum from northern Europe to quantify the phenotypic (latitude-body size relationship based on seven morphological traits) and genetic (variation at microsatellite loci) patterns that occur during a range expansion itself. We find a weak spatial genetic structure that is indicative of high gene flow during a rapid range expansion. Despite the potentially homogenizing effect of high gene flow, however, there is extensive phenotypic variation among samples along the invasion route that manifests as a strong, positive correlation between latitude and body size consistent with Bergmann's rule. This positive correlation cannot be explained by variation in the length of larval development (voltinism). While the adaptive significance of latitudinal variation in body size remains obscure, geographical patterns in body size in odonates are apparently underpinned by phenotypic plasticity and this permits a response to one or more environmental correlates of latitude during a range expansion. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Mockford, T.; Zobeck, T. M.; Lee, J. A.; Gill, T. E.; Dominguez, M. A.; Peinado, P.
2012-12-01
Understanding the controls on mineral dust emissions and their particle size distributions during wind-erosion events is critical, as dust particles have a significant impact in shaping the Earth's climate. It has been suggested that emission rates and particle size distributions are independent of soil chemistry and soil texture. In this study, 45 samples of wind-erodible surface soils from the Southern High Plains and Chihuahuan Desert regions of Texas, New Mexico, Colorado and Chihuahua were analyzed with the Lubbock Dust Generation, Analysis and Sampling System (LDGASS) and a Beckman-Coulter particle multisizer. The LDGASS created dust emissions in a controlled laboratory setting using a rotating arm that allows particle collisions. The emitted dust was transferred to a chamber where particulate matter concentration was recorded using a DataRam and a MiniVol filter, and the dust particle size distribution was recorded using a GRIMM particle analyzer. Particle size distributions were also determined from samples deposited on the MiniVol filters using a Beckman-Coulter particle multisizer. Soil textures of the source samples ranged from sands and sandy loams to clays and silts. Initial results suggest that total dust emissions increased with increasing soil clay and silt content and decreased with increasing sand content. Particle size distribution analysis showed a similar relationship; soils with high silt content produced the widest range of dust particle sizes and the smallest dust particles. Sand grains appear to produce the largest dust particles. Chemical control of dust emissions by calcium carbonate content will also be discussed.
Characterization and Beneficiation Studies of a Low Grade Bauxite Ore
NASA Astrophysics Data System (ADS)
Rao, D. S.; Das, B.
2014-10-01
A low grade bauxite sample from central India was thoroughly characterized with the help of a stereomicroscope, a reflected light microscope and an electron microscope using QEMSCAN. A few hand-picked samples were collected from different places in the mine and were subjected to geochemical characterization studies. The geochemical studies indicated that most of the samples contain high silica and low alumina, except a few which are high grade. Mineralogically, the samples consist of bauxite (gibbsite and boehmite), ferruginous mineral phases (goethite and hematite), clay and silicate (quartz), and titanium-bearing minerals like rutile and ilmenite. The majority of the gibbsite, boehmite and gibbsitic oolites contain clay, quartz and iron and titanium mineral phases as inclusions. The sample on average contains 39.1 % Al2O3, 12.3 % SiO2, and 20.08 % Fe2O3. Beneficiation techniques like size classification, sorting, scrubbing, hydrocyclone and magnetic separation were employed to reduce the silica content to a level suitable for the Bayer process. The studies indicated that a yield of 50 % by weight, with 41 % Al2O3 and less than 5 % SiO2, could be achieved. The finer sized sample after physical beneficiation still contains high silica due to complex mineralogical associations.
Magnetic fingerprint of the sediment load in a meander bend section of the Seine River (France)
NASA Astrophysics Data System (ADS)
Kayvantash, D.; Cojan, I.; Kissel, C.; Franke, C.
2017-06-01
This study aims to evaluate the potential of magnetic methods to determine the composition of the sediment load in a cross section of an unmanaged meander in the upstream stretch of the Seine River (Marnay-sur-Seine). Suspended particulate matter (SPM) was collected based on a regular sampling scheme along a cross section of the river, at two different depth levels: during a low-water stage (May 2014) and a high-water stage (February 2015). Riverbed sediments (RBS) were collected during the low-water stage and supplementary samples were taken from the outer and inner banks. Magnetic properties of the dry bulk SPM and sieved RBS and bank sediments were analysed. After characterizing the main magnetic carrier as magnetite, hysteresis parameters were measured, giving access to the grain size and the concentration of these magnetite particles. The results combined with sedimentary grain size data were compared to the three-dimensional velocity profile of the river flow. In the RBS where the magnetic grain size is rather uniform, the concentration of magnetite is inversely proportional to the mean grain size of the total sediment indicating that magnetite is strongly associated with the fine sedimentary fraction. The same pattern is observed in the samples from the outer and inner banks. During the low-water stage, the uniformly fine SPM grain size distribution characterizes the wash load. The magnetic fraction is also relatively fine (within the pseudo single domain range) with concentration similar to that of the fine RBS fraction. During the high-water stage, SPM samples correspond to mixtures of wash load and resuspended sediment from the bedload and riverbanks. Here, the grain size distribution is heterogeneous across the section showing coarser particles compared to those in the low-water stage and more varying magnetite concentrations while the magnetic grain size is like that of the low-water stage. The magnetite concentration in the high-water SPM can be modelled based on a mixing of the magnetite concentrations of the different grain size fractions, thus quantifying the impact of resuspension in the cross section.
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes with about three measurement occasions, and for large sample sizes with about nine measurement occasions.
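The simulation protocol above is straightforward to reproduce in outline. The following sketch (Python; the covariance structure, sample size and rejection criterion are illustrative, not the study's exact populations) draws null samples from a sphericity-violating population and estimates the Type I error rate of the uncorrected rANOVA F test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def ranova_p(data):
        """Uncorrected repeated-measures ANOVA p-value for one
        within-subject factor; data has shape (n subjects, m occasions)."""
        n, m = data.shape
        grand = data.mean()
        ss_occ = n * ((data.mean(axis=0) - grand) ** 2).sum()
        ss_subj = m * ((data.mean(axis=1) - grand) ** 2).sum()
        ss_err = ((data - grand) ** 2).sum() - ss_occ - ss_subj
        df1, df2 = m - 1, (m - 1) * (n - 1)
        f = (ss_occ / df1) / (ss_err / df2)
        return stats.f.sf(f, df1, df2)

    # Sphericity-violating population: the first m//2 occasions are
    # uncorrelated, the rest are highly correlated (r = .9); no true effect.
    n, m, reps = 20, 6, 5000
    cov = np.eye(m)
    k = m // 2
    cov[k:, k:] = 0.9
    np.fill_diagonal(cov, 1.0)

    rejections = 0
    for _ in range(reps):
        x = rng.multivariate_normal(np.zeros(m), cov, size=n)
        rejections += ranova_p(x) < 0.05

    # Without correction the empirical rate exceeds the nominal .05.
    print(f"Empirical Type I error rate: {rejections / reps:.3f}")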
NASA Astrophysics Data System (ADS)
Jamil, Farinaa Md; Sulaiman, Mohd Ali; Ibrahim, Suhaina Mohd; Masrom, Abdul Kadir; Yahya, Muhd Zu Azhan
2017-12-01
A series of mesoporous carbon samples was synthesized using a silica template, SBA-15, with two different pore sizes. An impregnation method was applied using glucose as the carbon precursor. Appropriate carbonization and silica removal processes were carried out to produce a series of mesoporous carbons with different pore sizes and surface areas. Each mesoporous carbon sample was then assembled as an electrode and its performance was tested using cyclic voltammetry and impedance spectroscopy to study the effect of ion transport into the various pore sizes on the electric double layer capacitor (EDLC) system. 6 M KOH was used as the electrolyte, and measurements were taken at scan rates of 10, 20, 30 and 50 mVs-1. The results showed that the pore size of the carbon increased as the pore size of the template increased, and the specific capacitance improved with increasing carbon pore size.
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
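For readers unfamiliar with the Mh[CHAO] estimator referenced above, it has a simple closed form (Chao's lower-bound estimator based on per-individual capture frequencies). The sketch below (Python; the population size, capture probabilities and sample counts are invented for illustration) shows how sub-sampling hair samples propagates into the abundance estimate:

    import numpy as np

    rng = np.random.default_rng(7)

    def chao_mh(capture_counts):
        """Chao (1987) lower-bound abundance estimate:
        N = S + f1^2 / (2 * f2), where S is the number of distinct
        individuals detected, f1 are those captured exactly once and
        f2 those captured exactly twice."""
        counts = capture_counts[capture_counts > 0]
        s = counts.size
        f1 = (counts == 1).sum()
        f2 = (counts == 2).sum()
        return s + f1 ** 2 / (2 * max(f2, 1))   # guard against f2 = 0

    # Hypothetical population of 100 bears, 500 hair samples collected,
    # with heterogeneous (unequal) capture probabilities.
    true_n, n_samples = 100, 500
    p = rng.dirichlet(np.ones(true_n) * 0.5)           # unequal capturability
    samples = rng.choice(true_n, size=n_samples, p=p)  # individual ID per sample

    for frac in (1.0, 0.75, 0.5, 0.25):
        sub = rng.choice(samples, size=int(frac * n_samples), replace=False)
        counts = np.bincount(sub, minlength=true_n)
        print(f"{frac:4.0%} of samples analysed -> N_hat = {chao_mh(counts):6.1f}")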
Nikol'skii, A A
2017-11-01
Dependence of the sound-signal frequency on the animal body length was studied in 14 ground squirrel species (genus Spermophilus) of Eurasia. Regression analysis of the total sample yielded a low determination coefficient (R² = 26%), because the total sample proved to be heterogeneous in terms of signal frequency within the dimension classes of animals. When the total sample was divided into two groups according to signal frequency, two statistically significant models (regression equations) were obtained in which signal frequency depended on the body size at high determination coefficients (R² = 73 and 94% versus 26% for the total sample). Thus, the problem of correlation between animal body size and the frequency of their vocal signals does not have a unique solution.
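The effect described above, a pooled regression with low R² that splits into two strong within-group regressions, is easy to illustrate. A minimal sketch with synthetic data (all values are invented, not the Spermophilus measurements):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two groups whose signal frequency scales differently with body length.
    length = rng.uniform(18, 28, size=140)            # body length, cm
    group = rng.integers(0, 2, size=140)              # low/high frequency group
    freq = np.where(group == 0, 9 - 0.15 * length, 5 - 0.12 * length)
    freq += rng.normal(0, 0.15, size=140)             # kHz, with noise

    r_total = stats.linregress(length, freq).rvalue
    print(f"pooled R^2 = {r_total**2:.2f}")           # low: groups overlap

    for g in (0, 1):
        r = stats.linregress(length[group == g], freq[group == g]).rvalue
        print(f"group {g} R^2 = {r**2:.2f}")          # high within each group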
Size Matters. The Relevance and Hicksian Surplus of Preferred College Class Size
ERIC Educational Resources Information Center
Mandel, Philipp; Susmuth, Bernd
2011-01-01
The contribution of this paper is twofold. First, we examine the impact of class size on student evaluations of instructor performance using a sample of approximately 1400 economics classes held at the University of Munich from Fall 1998 to Summer 2007. We offer confirmatory evidence for the recent finding of a large, highly significant, and…
Recent advances of mesoporous materials in sample preparation.
Zhao, Liang; Qin, Hongqiang; Wu, Ren'an; Zou, Hanfa
2012-03-09
Sample preparation has been playing an important role in the analysis of complex samples. Mesoporous materials as the promising adsorbents have gained increasing research interest in sample preparation due to their desirable characteristics of high surface area, large pore volume, tunable mesoporous channels with well defined pore-size distribution, controllable wall composition, as well as modifiable surface properties. The aim of this paper is to review the recent advances of mesoporous materials in sample preparation with emphases on extraction of metal ions, adsorption of organic compounds, size selective enrichment of peptides/proteins, specific capture of post-translational peptides/proteins and enzymatic reactor for protein digestion. Copyright © 2011 Elsevier B.V. All rights reserved.
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to a contagious model, fitting the negative binomial distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to decide on the need for control were 172 and 76 for stands with low and high infestation, respectively.
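Wald-type sequential plans of the kind described reduce to a pair of parallel decision lines in the (sample number, cumulative count) plane. A minimal sketch of the boundary calculation (Python; binomial presence/absence likelihoods are used here for simplicity, whereas the study fitted a negative binomial, so all numbers are purely illustrative):

    import math

    # Wald SPRT decision lines for presence/absence sampling. p0 and p1
    # bracket the 2% economic threshold; alpha and beta as in the study.
    p0, p1 = 0.01, 0.03
    alpha = beta = 0.10

    k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    slope = math.log((1 - p0) / (1 - p1)) / k
    upper0 = math.log((1 - beta) / alpha) / k   # intercept of the "treat" line
    lower0 = math.log(beta / (1 - alpha)) / k   # intercept of the "no treatment" line

    for n in (25, 50, 100, 172):
        print(f"n={n:4d}: continue sampling while "
              f"{lower0 + slope * n:6.2f} < infested plants < "
              f"{upper0 + slope * n:6.2f}")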
Olsen, Kim Rose; Sørensen, Torben Højmark; Gyrd-Hansen, Dorte
2010-04-19
Due to a shortage of general practitioners, it may be necessary to improve productivity. We assess the association between productivity, list size and patient and practice characteristics. A regression approach is used to perform productivity analysis based on national register data and survey data for 1,758 practices. Practices are divided into four groups according to list size and productivity. Statistical tests are used to assess differences in patient and practice characteristics. There is a significant, positive correlation between list size and productivity (p < 0.01). Nevertheless, 19% of the practices have a list size below and a productivity above the mean sample values. These practices have relatively demanding patients (older, low socioeconomic status, high use of pharmaceuticals), are frequently located in areas with limited access to specialized care, and have a low use of assisting personnel. 13% of the practices have a list size above and a productivity below the mean sample values. These practices have relatively less demanding patients, are located in areas with good access to specialized care, and have a high use of assisting personnel. List and practice characteristics have a substantial influence on both productivity and list size. Adjusting list size to external factors seems to be an effective tool to increase productivity in general practice.
Development of a Multiple-Stage Differential Mobility Analyzer (MDMA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Da-Ren; Cheng, Mengdawn
2007-01-01
A new DMA column has been designed with the capability of simultaneously extracting monodisperse particles of different sizes in multiple stages. We call this design a multistage DMA, or MDMA. A prototype MDMA has been constructed and experimentally evaluated in this study. The new column enables the fast measurement of particles over a wide size range, while preserving the powerful particle classification function of a DMA. The prototype MDMA has three sampling stages, capable of classifying monodisperse particles of three different sizes simultaneously. The scanning voltage operation of a DMA can be applied to this new column. Each stage of the MDMA column covers a fraction of the entire particle size range to be measured. The size fractions covered by two adjacent stages of the MDMA are designed to overlap somewhat. This arrangement reduces the scanning voltage range and thus the cycling time of the measurement. The modular sampling stage design of the MDMA allows flexible configuration of the desired particle classification lengths and a variable number of stages in the MDMA. The design of our MDMA also permits operation at high sheath flow, enabling high-resolution particle size measurement and/or reduction of the lower sizing limit. Using the tandem DMA technique, the performance of the MDMA, i.e., sizing accuracy, resolution, and transmission efficiency, was evaluated at different ratios of aerosol and sheath flowrates. Two aerosol sampling schemes were investigated. One was to extract aerosol flows at an evenly partitioned flowrate at each stage, and the other was to extract aerosol at a rate equal to the polydisperse aerosol flowrate at each stage. We detail the prototype design of the MDMA and the evaluation results on the transfer functions of the MDMA at different particle sizes and operational conditions.
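The classification principle behind each MDMA stage is the standard cylindrical-DMA relation between voltage and the selected centroid electrical mobility. A sketch of that calculation (Python; the geometry and flow values are illustrative placeholders, not the prototype's dimensions):

    import math

    # Centroid mobility of a cylindrical DMA stage (Knutson-Whitby relation,
    # balanced sheath/excess flows), and the corresponding particle diameter
    # via the Cunningham-corrected Stokes mobility.
    E = 1.602e-19       # elementary charge, C
    MU = 1.81e-5        # air viscosity, Pa s
    MFP = 68e-9         # mean free path of air, m

    def centroid_mobility(q_sh, v, length, r_in, r_out):
        """Z* = Q_sh * ln(r_out / r_in) / (2 * pi * L * V)."""
        return q_sh * math.log(r_out / r_in) / (2 * math.pi * length * v)

    def cunningham(d):
        kn = 2 * MFP / d
        return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

    def diameter_from_mobility(z, charges=1, lo=1e-9, hi=1e-5):
        """Invert Z(d) = n*e*Cc(d) / (3*pi*mu*d) by bisection
        (Z is monotonically decreasing in d)."""
        for _ in range(100):
            mid = math.sqrt(lo * hi)
            z_mid = charges * E * cunningham(mid) / (3 * math.pi * MU * mid)
            if z_mid > z:      # mobility too high -> particle larger than mid
                lo = mid
            else:
                hi = mid
        return mid

    q_sh = 10 / 60000.0               # 10 L/min sheath flow in m^3/s
    for v in (100, 1000, 10000):      # classifying voltages, V
        z = centroid_mobility(q_sh, v, length=0.1, r_in=0.01, r_out=0.02)
        print(f"V = {v:5d} V -> d* = {diameter_from_mobility(z) * 1e9:7.1f} nm")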
NASA Astrophysics Data System (ADS)
Ghosh, P.; Bhowmik, R. N.; Das, M. R.; Mitra, P.
2017-04-01
We have studied the grain size dependent electrical conductivity, dielectric relaxation and magnetic field dependent current-voltage (I-V) characteristics of nickel ferrite (NiFe2O4). The material was synthesized by a sol-gel self-combustion technique, followed by ball milling at room temperature in air to control the grain size. The material was characterized using X-ray diffraction (refined with MAUD software) and transmission electron microscopy. Impedance spectroscopy and I-V characteristics in the presence of variable magnetic fields have confirmed the increase of resistivity for the fine powdered samples (grain size 5.17±0.6 nm) resulting from ball milling of the chemically routed sample. The activation energy of the material for the electrical charge hopping process increased as the grain size was decreased by mechanical milling of the chemically routed sample. The I-V curves showed several highly non-linear and irreversible electrical features, e.g., I-V loops and bi-stable electronic states (low resistance state, LRS, and high resistance state, HRS) on cycling the direction of the electrical bias voltage during I-V measurement. The dc resistance in the HRS at 20 V in the presence of a 10 kOe magnetic field increased from ∼3.4876×10^4 Ω for the chemically routed (unmilled) sample to ∼3.4152×10^5 Ω for the 10 h milled sample. The samples exhibited an unusual negative differential resistance (NDR) effect that gradually decreased on decreasing the grain size of the material. The magneto-resistance of the samples at room temperature was found to be substantially large (∼25-65%). The control of electrical charge transport under a magnetic field, as observed in the present ferrimagnetic material, indicates magneto-electric coupling in the material, and the results could be useful in spintronics applications.
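As a point of reference for the magneto-resistance figures quoted above, a common definition is the relative change of resistance under field. A minimal sketch (the resistance values are illustrative, loosely matching the orders of magnitude in the abstract, not measured data):

    def magnetoresistance(r_zero_field, r_field):
        """MR (%) = (R(H) - R(0)) / R(0) * 100."""
        return (r_field - r_zero_field) / r_zero_field * 100.0

    # Illustrative values only.
    print(f"MR = {magnetoresistance(3.5e4, 5.4e4):.0f} %")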
NASA Astrophysics Data System (ADS)
Sañé, E.; Chiocci, F. L.; Basso, D.; Martorelli, E.
2016-10-01
The effects of different environmental factors controlling the distribution of different morphologies, sizes and growth forms of rhodoliths in the western Pontine Archipelago have been studied. The analysis of 231 grab samples has been integrated with 68 remotely operated vehicle (ROV) videos (22 h) and a high resolution (<1 m) side scan sonar mosaic of the seafloor surrounding the Archipelago, covering an area of approximately 460 km2. Living rhodoliths were collected in approximately 10% of the grab samples and observed in approximately 30% of the ROV dives. The combination of sediment sampling, video surveys and acoustic facies mapping suggested that the presence of rhodoliths can be associated with the inhomogeneous high backscatter sonar facies and the high backscatter facies. Pralines and unattached branches were the most abundant morphological groups (50% and 41% of samples, respectively), whereas boxwork rhodoliths were less common, accounting for less than 10% of the total number of samples. Pralines and boxwork rhodoliths were almost equally distributed among large (28%), medium (36%) and small (36%) sizes. Pralines generally presented a fruticose growth form (49% of pralines), although pralines with encrusting-warty (36%) or lumpy (15%) growth forms were also present. Morphologies, sizes and growth forms vary mainly along the depth gradient. Large rhodoliths with a boxwork morphology are abundant at depth, whereas unattached branches and, in general, rhodoliths with a high protuberance degree are abundant in shallow waters. The exposure to storm waves and bottom currents related to geostrophic circulation could explain the absence of rhodoliths off the eastern side of the three islands forming the Archipelago.
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatments needed for samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of successively adding samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. We hypothesized, however, that calibrations with fewer samples (smaller size) would more easily absorb the spectral characteristics of the target site. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1,500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes. We used partial least squares regression, with leave-one-out cross-validation, as the calibration method. Two methods were used to select the different quantities (model sizes) of samples: (1) based on the characteristics of the spectra (BCS), and (2) based on the NKj values of the samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples in the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, so that 20 samples in total were added. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content of those target-site samples not included in the repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between the results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused scarcely any change in the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (r2 > 0.90 using non-repopulated models).
As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4, and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. The r2 of predictions obtained with larger models was not higher than that obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples in the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size can be repopulated and "converted" into local calibrations. Accordingly, we can focus most of the effort on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here run counter to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".
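The calibration workflow described (PLS regression, leave-one-out cross-validation, sequential two-sample repopulation, RMSEP tracking) can be outlined compactly. A sketch using scikit-learn, with random arrays standing in for spectra and NKj values (shapes and the component count are assumptions, not the study's settings):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def rmsep(y_true, y_pred):
        """Root mean squared error of prediction."""
        diff = np.asarray(y_true) - np.asarray(y_pred)
        return float(np.sqrt(np.mean(diff ** 2)))

    # Hypothetical stand-ins: X holds NIR spectra (rows = samples), y the
    # Kjeldahl N contents; X_site/y_site are samples from the target site.
    rng = np.random.default_rng(1)
    X, y = rng.normal(size=(300, 700)), rng.normal(2.0, 0.5, size=300)
    X_site, y_site = rng.normal(size=(40, 700)), rng.normal(2.2, 0.4, size=40)

    model = PLSRegression(n_components=10)

    # Leave-one-out cross-validated calibration on the library alone.
    y_cv = cross_val_predict(model, X, y, cv=LeaveOneOut())
    print(f"library-only RMSECV: {rmsep(y, y_cv):.3f}")

    # Sequential repopulation: add target-site samples two at a time,
    # recalibrate, and predict the remaining target-site samples.
    for n_add in range(0, 21, 2):
        X_cal = np.vstack([X, X_site[:n_add]])
        y_cal = np.concatenate([y, y_site[:n_add]])
        model.fit(X_cal, y_cal)
        pred = model.predict(X_site[n_add:]).ravel()
        print(f"{n_add:2d} samples added -> RMSEP {rmsep(y_site[n_add:], pred):.3f}")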
NASA Astrophysics Data System (ADS)
Peterson, Joseph E.; Lenczewski, Melissa E.; Clawson, Steven R.; Warnock, Jonathan P.
2017-04-01
Microscopic soft tissues have been identified in fossil vertebrate remains collected from various lithologies. However, the diagenetic mechanisms that preserve such tissues have remained elusive. While previous studies have described the infiltration of biofilms in Haversian and Volkmann's canals, biostratinomic alteration (e.g., trampling), and iron derived from hemoglobin as playing roles in the preservation processes, the influence of sediment texture has not previously been investigated. This study uses a Kolmogorov-Smirnov goodness-of-fit test to explore the influence of biostratinomic variability and burial media on the infiltration of biofilms in bone samples. Controlled columns of sediment with bone samples were used to simulate burial and subsequent groundwater flow. Sediments used in this study include clay-, silt-, and sand-sized particles modeled after various fluvial facies commonly associated with fossil vertebrates. Extant limb bone samples obtained from Gallus gallus domesticus (domestic chicken) buried in clay-rich sediment exhibit heavy biofilm infiltration, while bones buried in sands and silts exhibit moderate levels. Crushed bones exhibit significantly lower biofilm infiltration than whole bone samples. Strong interactions between biostratinomic alteration and sediment size are also identified with respect to biofilm development. Sediments modeling crevasse splay deposits exhibit considerable variability; whole-bone crevasse splay samples exhibit higher frequencies of high-level biofilm infiltration, and crushed-bone samples in modeled crevasse splay deposits display relatively high frequencies of low-level biofilm infiltration. These results suggest that sediment size, depositional setting, and biostratinomic condition play key roles in biofilm infiltration in vertebrate remains, and may influence soft tissue preservation in fossil vertebrates.
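A two-sample Kolmogorov-Smirnov comparison, a close relative of the goodness-of-fit test used in the study, can be run in a few lines. A sketch with invented infiltration scores (not the experimental data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical biofilm-infiltration scores (fraction of canal area
    # occupied) for bones buried in clay versus sand; values are invented.
    clay = np.clip(rng.normal(0.60, 0.15, size=30), 0, 1)
    sand = np.clip(rng.normal(0.35, 0.15, size=30), 0, 1)

    # Test whether the two score distributions differ.
    stat, p = stats.ks_2samp(clay, sand)
    print(f"KS statistic = {stat:.3f}, p = {p:.4f}")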
NASA Astrophysics Data System (ADS)
Herbold, E. B.; Nesterenko, V. F.; Benson, D. J.; Cai, J.; Vecchio, K. S.; Jiang, F.; Addiss, J. W.; Walley, S. M.; Proud, W. G.
2008-11-01
The variation of metallic particle size and sample porosity significantly alters the dynamic mechanical properties of high density granular composite materials processed using a cold isostatically pressed mixture of polytetrafluoroethylene (PTFE), aluminum (Al), and tungsten (W) powders. Quasistatic and dynamic experiments are performed with identical constituent mass fractions with variations in the size of the W particles and pressing conditions. The relatively weak polymer matrix allows the strength and fracture modes of this material to be governed by the granular type behavior of agglomerated metal particles. A higher ultimate compressive strength was observed in relatively high porosity samples with small W particles compared to those with coarse W particles in all experiments. Mesoscale granular force chains of the metallic particles explain this unusual phenomenon as observed in hydrocode simulations of a drop-weight test. Macrocracks forming below the critical failure strain for the matrix and unusual behavior due to a competition between densification and fracture in dynamic tests of porous samples were also observed. Numerical modeling of shock loading of this granular composite material demonstrated that the internal energy, specifically thermal energy, of the soft PTFE matrix can be tailored by the W particle size distribution.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
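The method-of-moments (Matheron) estimator discussed in both versions of this study has a compact form: gamma(h) = 1/(2*N(h)) times the sum of squared value differences over the N(h) point pairs whose separation falls in the lag bin around h. A minimal sketch (Python; the sampling locations and throughfall values are simulated stand-ins, with heavy-tailed noise to mimic outlier-prone throughfall data):

    import numpy as np

    rng = np.random.default_rng(5)

    def mom_variogram(coords, values, bin_edges):
        """Matheron's method-of-moments empirical variogram."""
        d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
        sq = (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)      # count each pair once
        d, sq = d[iu], sq[iu]
        centers, gamma = [], []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            mask = (d >= lo) & (d < hi)
            if mask.any():
                centers.append((lo + hi) / 2.0)
                gamma.append(sq[mask].mean() / 2.0)
        return np.array(centers), np.array(gamma)

    # 150 sampling points on a 50 m x 50 m plot (illustrative data).
    coords = rng.uniform(0, 50, size=(150, 2))
    values = rng.gamma(2.0, 2.0, size=150) + rng.standard_t(3, size=150)
    h, g = mom_variogram(coords, values, np.arange(0, 30, 3))
    for hc, gc in zip(h, g):
        print(f"lag {hc:5.1f} m: gamma = {gc:6.2f}")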
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
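The point-density effect discussed above can be probed by randomly thinning a point cloud and recomputing a height metric above a ground threshold. A toy sketch (all densities, heights, and thresholds are illustrative, not the study's data or method):

    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical corn plot: 7.32 points/m^2 over a 400 m^2 window,
    # canopy top near 2.5 m, heights in metres (illustrative values).
    full_density, area = 7.32, 400.0
    n_full = int(full_density * area)
    heights = np.clip(rng.normal(2.0, 0.5, size=n_full), 0, 2.5)

    # Crop height is often taken as a high percentile of return heights
    # above a threshold; reduced densities are simulated by thinning.
    threshold = 0.3                               # m, removes ground returns
    for density in (7.32, 4.0, 2.0, 1.0, 0.5):
        keep = rng.random(n_full) < density / full_density
        h = heights[keep]
        h = h[h > threshold]
        print(f"{density:5.2f} pts/m^2 -> p99 height = {np.percentile(h, 99):.2f} m")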
Soft γ-ray selected radio galaxies: favouring giant size discovery
NASA Astrophysics Data System (ADS)
Bassani, L.; Venturi, T.; Molina, M.; Malizia, A.; Dallacasa, D.; Panessa, F.; Bazzano, A.; Ubertini, P.
2016-09-01
Using the recent INTEGRAL/IBIS and Swift/BAT surveys we have extracted a sample of 64 confirmed plus three candidate radio galaxies selected in the soft gamma-ray band. The sample covers all optical classes and is dominated by objects showing a Fanaroff-Riley type II radio morphology; a large fraction (70 per cent) of the sample is made of `radiative mode' or high-excitation radio galaxies. We measured the source size on images from the NRAO VLA Sky Survey, the Faint Images of the Radio Sky at twenty-cm and the Sydney University Molonglo Sky Survey images and have compared our findings with data in the literature obtaining a good match. We surprisingly found that the soft gamma-ray selection favours the detection of large size radio galaxies: 60 per cent of objects in the sample have size greater than 0.4 Mpc while around 22 per cent reach dimension above 0.7 Mpc at which point they are classified as giant radio galaxies (GRGs), the largest and most energetic single entities in the Universe. Their fraction among soft gamma-ray selected radio galaxies is significantly larger than typically found in radio surveys, where only a few per cent of objects (1-6 per cent) are GRGs. This may partly be due to observational biases affecting radio surveys more than soft gamma-ray surveys, thus disfavouring the detection of GRGs at lower frequencies. The main reasons and/or conditions leading to the formation of these large radio structures are still unclear with many parameters such as high jet power, long activity time and surrounding environment all playing a role; the first two may be linked to the type of active galactic nucleus discussed in this work and partly explain the high fraction of GRGs found in the present sample. Our result suggests that high energy surveys may be a more efficient way than radio surveys to find these peculiar objects.
[Experimental study on particle size distributions of an engine fueled with blends of biodiesel].
Lu, Xiao-Ming; Ge, Yun-Shan; Han, Xiu-Kun; Wu, Si-Jin; Zhu, Rong-Fu; He, Chao
2007-04-01
The purpose of this study is to obtain the particle size distributions of an engine fueled with biodiesel and its blends. A turbocharged DI diesel engine was tested on a dynamometer. A pump drawing 80 L/min and fiber glass filters 90 mm in diameter were used to sample engine particles in the exhaust pipe; the sampling duration was 10 minutes. Particle size distributions were measured by a laser diffraction particle size analyzer. Results indicated that higher engine speed resulted in smaller particle sizes and narrower distributions. The modes on the distribution curves and the mode variation were larger with dry samples than with wet samples (dry: around 10-12 microm vs. wet: around 4-10 microm). At low speed, the Sauter mean diameter d32 of the dry samples was the largest with B100, the smallest with diesel fuel, and intermediate with B20, while at high speed d32 was the largest with B20, the smallest with B100, and intermediate with diesel. The median diameter d(0.5) also reflected these results. Except at 2,000 r/min, d32 of the wet samples was the largest with B20, the smallest with diesel, and intermediate with B100. The large mode variation resulted in an increase of d32.
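For reference, the two summary diameters reported above have simple definitions: the Sauter mean diameter d32 is the ratio of total particle volume to total surface area, and d(0.5) is the diameter at which the cumulative (here volume-weighted) distribution reaches 50%. A sketch with invented size-class data:

    import numpy as np

    def sauter_mean_diameter(d, n):
        """d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2) for counts n_i in size
        classes with representative diameters d_i."""
        d, n = np.asarray(d, float), np.asarray(n, float)
        return (n * d**3).sum() / (n * d**2).sum()

    def median_diameter(d, volume_fraction):
        """d(0.5): diameter where the cumulative volume fraction hits 50%."""
        cum = np.cumsum(volume_fraction) / np.sum(volume_fraction)
        return np.interp(0.5, cum, d)

    # Illustrative size classes (micrometres) and counts, not measured data.
    d = np.array([2, 4, 6, 8, 10, 12, 16])
    n = np.array([50, 120, 200, 150, 90, 40, 10])
    vol = n * d**3
    print(f"d32    = {sauter_mean_diameter(d, n):.2f} um")
    print(f"d(0.5) = {median_diameter(d, vol):.2f} um")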
Sample size determination for disease prevalence studies with partially validated data.
Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai
2016-02-01
Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
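As a simplified point of comparison for the two approaches, the fully validated case, where every subject receives the gold-standard test, has textbook closed forms; the partially validated case requires the article's own formulas. A sketch of the simplified calculations (Python; the anticipated prevalence, width target and alternative are arbitrary examples):

    import math
    from scipy import stats

    def n_for_ci_width(p_anticipated, width, conf=0.95):
        """Smallest n so a Wald CI for prevalence has the target total
        width: n = 4 * z^2 * p * (1 - p) / width^2 (fully validated case)."""
        z = stats.norm.ppf(1 - (1 - conf) / 2)
        return math.ceil(4 * z**2 * p_anticipated * (1 - p_anticipated) / width**2)

    def n_for_power(p0, p1, alpha=0.05, power=0.80):
        """n for a one-sample two-sided test of H0: p = p0 versus p = p1."""
        za = stats.norm.ppf(1 - alpha / 2)
        zb = stats.norm.ppf(power)
        num = za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))
        return math.ceil((num / (p1 - p0)) ** 2)

    print(n_for_ci_width(0.15, 0.10))   # CI of total width 0.10 around 15%
    print(n_for_power(0.10, 0.15))      # detect a rise from 10% to 15%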
Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up.
Tajika, Aran; Ogawa, Yusuke; Takeshima, Nozomi; Hayasaka, Yu; Furukawa, Toshi A
2015-10-01
Contradictions and initial overestimates are not unusual among highly cited studies. However, this issue has not been researched in psychiatry. Our aim was to assess how highly cited studies in psychiatry are replicated by subsequent studies. We selected highly cited studies claiming effective psychiatric treatments in the years 2000 through 2002. For each of these studies we searched for subsequent studies with a better-controlled design, or with a similar design but a larger sample. Among 83 articles recommending effective interventions, 40 had not been subject to any attempt at replication, 16 were contradicted, 11 were found to have substantially smaller effects, and only 16 were replicated. The standardised mean differences of the initial studies were overestimated by 132%. Studies with a total sample size of 100 or more tended to produce replicable results. Caution is needed when a study with a small sample size reports a large effect. © The Royal College of Psychiatrists 2015.
Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying
2015-07-01
A novel magnetic solid-phase extraction method coupled with high-performance liquid chromatography was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and the extraction time significantly influenced the extraction performance. An increase in specific surface area, enlargement of pore size, and reduction of particle size could all enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m2/g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limits of detection for phenanthrene and benzo[g,h,i]perylene with the developed analysis method were 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recoveries of phenanthrene and benzo[g,h,i]perylene ranged from 89.5 to 98.6% and from 82.9 to 89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Verma, Narendra Kumar; Patel, Sandeep Kumar Singh; Kumar, Dinesh; Singh, Chandra Bhal; Singh, Akhilesh Kumar
2018-05-01
We have investigated the effect of sintering temperature on the densification behaviour, grain size, and structural and dielectric properties of BaTiO3 ceramics prepared by a high energy ball milling method. Powder X-ray diffraction reveals the tetragonal structure with space group P4mm for all the samples. The samples were sintered at five different temperatures (T = 900°C, 1000°C, 1100°C, 1200°C and 1300°C). Density increased with increasing sintering temperature, reaching up to 97% at 1300°C. Grain growth was observed with increasing sintering temperature. Impedance analyses of the sintered samples were performed at various temperatures. An increase in dielectric constant and Curie temperature is observed with increasing sintering temperature.
NASA Astrophysics Data System (ADS)
Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.
2016-11-01
High sediment loads passing through hydropower plants erode the hydraulic components, resulting in loss of efficiency, interruptions in power production and downtime for repair and maintenance, especially in Himalayan regions. The size and concentration of the sediment play a major role in silt erosion. The traditional process of collecting samples manually for laboratory analysis cannot meet the need for monitoring temporal variation in sediment properties. In this study, a multi-frequency acoustic instrument was applied at a desilting chamber to monitor the size and concentration of sediment entering the turbine. The sediment size and concentration entering the turbine were also measured with manual samples collected twice daily. The manually collected samples were analysed in the laboratory with a laser diffraction instrument for size and concentration, in addition to analysis by drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which was combined with the results from the drying method to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to the drying and filtering methods. However, in this first field application, the mean grain size from the acoustic method at its current stage of development did not match well with that from the laser diffraction method. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the results. As the instrument is able to capture the concentration and, in future versions, most likely a more accurate mean grain size of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.
Size-selective separation of submicron particles in suspensions with ultrasonic atomization.
Nii, Susumu; Oka, Naoyoshi
2014-11-01
Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. The performance of the separation was characterized by analyzing the size and concentration of the collected particles with a high resolution method. Irradiation of the sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to the SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing the sample suspension eliminated the separation performance; dissolved air in the suspension plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.
Chemical Composition and Source Apportionment of Size ...
The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ~2, ~7, and ~3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3- levels in winter and high SO42- levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coa
Estimation of the bottleneck size in Florida panthers
Culver, M.; Hedrick, P.W.; Murphy, K.; O'Brien, S.; Hornocker, M.G.
2008-01-01
We have estimated the extent of genetic variation in museum (1890s) and contemporary (1980s) samples of Florida panthers Puma concolor coryi for both nuclear loci and mtDNA. The microsatellite heterozygosity in the contemporary sample was only 0.325 that in the museum samples, although our sample size and number of loci are limited. Support for this estimate is provided by a sample of 84 microsatellite loci in contemporary Florida panthers and Idaho pumas Puma concolor hippolestes, in which the contemporary Florida panther sample had only 0.442 the heterozygosity of Idaho pumas. The estimated diversities in mtDNA in the museum and contemporary samples were 0.600 and 0.000, respectively. Using a population genetics approach, we have estimated that to reduce either the microsatellite heterozygosity or the mtDNA diversity this much (in a period of c. 80 years during the 20th century when the numbers were thought to be low), a very small bottleneck size of c. 2 for several generations and a small effective population size in other generations are necessary. Using demographic data from Yellowstone pumas, we estimated the ratio of effective to census population size to be 0.315. Using this ratio, the census population size in the Florida panthers necessary to explain the loss of microsatellite variation was c. 41 for the non-bottleneck generations and 6.2 for the two bottleneck generations. These low bottleneck population sizes and the concomitant reduced effectiveness of selection are probably responsible for the high frequency of several detrimental traits in Florida panthers, namely undescended testicles and poor sperm quality. The recent intensive monitoring both before and after the introduction of Texas pumas in 1995 will make the recovery and genetic restoration of Florida panthers a classic study of an endangered species. Our estimates of the bottleneck size responsible for the loss of genetic variation in the Florida panther complete an unknown aspect of this account. © 2008 The Authors. Journal compilation © 2008 The Zoological Society of London.
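The drift calculation underlying this estimate uses the classical expectation that heterozygosity decays by a factor of (1 - 1/(2Ne)) per generation. A sketch (Python; the generation counts and effective sizes below are chosen only to demonstrate the formula, approximating the paper's c. 13 effective non-bottleneck size from 0.315 x 41):

    def heterozygosity_retained(ne_per_gen):
        """Expected fraction of heterozygosity retained under drift:
        H_t / H_0 = product over generations of (1 - 1 / (2 * Ne_g))."""
        frac = 1.0
        for ne in ne_per_gen:
            frac *= 1.0 - 1.0 / (2.0 * ne)
        return frac

    # Illustrative history: eleven generations at Ne ~ 13 plus two severe
    # bottleneck generations at Ne ~ 2 (values for demonstration only).
    history = [13] * 11 + [2, 2]
    print(f"H_t/H_0 = {heterozygosity_retained(history):.3f}")
    # cf. the observed microsatellite ratio of ~0.325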
McKenzie, Erica R; Young, Thomas M
2013-01-01
Size exclusion chromatography (SEC), which separates molecules based on molecular volume, can be coupled with online inductively coupled plasma mass spectrometry (ICP-MS) to explore size-dependent metal-natural organic matter (NOM) complexation. To make effective use of this analytical dual detector system, the operator should be mindful of quality control measures. Al, Cr, Fe, Se, and Sn all exhibited columnless attenuation, which indicated unintended interactions with system components. Based on signal-to-noise ratio and peak reproducibility between duplicate analyses of environmental samples, consistent peak time and height were observed for Mg, Cl, Mn, Cu, Br, and Pb. Al, V, Fe, Co, Ni, Zn, Se, Cd, Sn, and Sb were less consistent overall, but produced consistent measurements in select samples. Ultrafiltering and centrifuging produced similar peak distributions, but glass fiber filtration produced more high molecular weight (MW) peaks. Storage in glass also produced more high MW peaks than did plastic bottles.
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
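The central scaling results can be stated compactly. A hedged summary in LaTeX (the constant of proportionality for the pedigree case is design-specific, and the numerical figure for human SNP data comes from related literature rather than this abstract):

    % Sampling variance of a maximum-likelihood heritability estimate,
    % with r denoting pairwise relatedness.
    \[
      \operatorname{Var}\!\bigl(\hat{h}^{2}\bigr)\;\propto\;
      \begin{cases}
        \dfrac{1}{N\,\operatorname{Var}(r)} & \text{pedigree designs},\\[1.5ex]
        \dfrac{1}{N^{2}} & \text{population samples of unrelated individuals}.
      \end{cases}
    \]
    % For marker-based estimates in nominally unrelated humans, the related
    % literature gives Var(h^2) ~ 2 / (N^2 Var(r)) with Var(r) ~ 2e-5,
    % i.e. s.e.(h^2) ~ 316 / N.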
Shiau, Yo-Jin; Chen, Jenn-Shing; Chung, Tay-Lung; Tian, Guanglong; Chiu, Chih-Yu
2017-12-01
Soil organic carbon (SOC) and carbon (C) functional groups in different particle-size fractions are important indicators of microbial activity and soil decomposition stages under wildfire disturbance. This research investigated a natural Tsuga forest and a nearby fire-induced grassland along a sampling transect in central Taiwan, with the aim of better understanding the effect of forest wildfires on the change of SOC at different soil particle scales. Soil samples were separated into six particle sizes, and the SOC in each fraction was characterized by solid-state 13C nuclear magnetic resonance spectroscopy. The SOC content was higher in forest than in grassland soil across the particle-size fractions. The O-alkyl-C content (carbohydrate-derived structures) was higher in the grassland than in the forest soils, but the alkyl-C content (recalcitrant substances) was higher in forest than in grassland soils, giving a higher humification degree (alkyl-C/O-alkyl-C ratio) in forest soils for all particle-size fractions. The aromaticity was similar between forest and grassland soils, which might be attributed to fire-induced aromatic-C in the grassland offsetting the original difference between the forest and grassland. The high alkyl-C content and humification degree and the low C/N ratios in the fine particle-size fractions implied that undecomposed recalcitrant substances tend to accumulate in the fine fractions of soils.
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka
2017-09-29
Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silicas of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in whole blood were 76-108% with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids the issues experienced with aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascent rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that an unused section of the film is always exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixels) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L⁻¹, sampling times of about 90 s down to 4 s were required to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper-tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of around 50 µm and about 80% of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude over the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, directly determined from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC varied only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and showed no clear temperature trend.
These measurements are part of an ongoing study.
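As a sketch of the distribution-fitting step described above, the following fits gamma and log-normal functions to a set of particle diameters and scores each fit with a correlation coefficient R against the empirical histogram (the synthetic diameters and binning choices are assumptions, not campaign data):

```python
# Hedged sketch: fitting gamma and log-normal functions to a particle size
# distribution and scoring each fit with a correlation coefficient R, as
# described above. The synthetic diameters and binning are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diameters_um = rng.lognormal(np.log(50.0), 0.5, 500)   # fake sample, microns

gamma_fit = stats.gamma.fit(diameters_um, floc=0)
lognorm_fit = stats.lognorm.fit(diameters_um, floc=0)

counts, edges = np.histogram(diameters_um, bins=25, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
for name, dist in (("gamma", stats.gamma(*gamma_fit)),
                   ("log-normal", stats.lognorm(*lognorm_fit))):
    r = np.corrcoef(counts, dist.pdf(mids))[0, 1]
    print(f"{name}: R = {r:.3f}")
```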
Huang, Haijian; Wang, Xing; Tervoort, Elena; Zeng, Guobo; Liu, Tian; Chen, Xi; Sologubenko, Alla; Niederberger, Markus
2018-03-27
A general method for preparing metal oxide nanoparticles with a highly disordered crystal structure and for processing them into stable aqueous dispersions is presented. With these nanoparticles as building blocks, a series of nanoparticles@reduced graphene oxide (rGO) composite aerogels are fabricated and directly used as high-power anodes for lithium-ion hybrid supercapacitors (Li-HSCs). To clarify the effect of the degree of disorder, control samples of crystalline nanoparticles with similar particle size are prepared. The results indicate that the structurally disordered samples show a significantly enhanced electrochemical performance compared to the crystalline counterparts. In particular, structurally disordered NixFeyOz@rGO delivers a capacity of 388 mAh g⁻¹ at 5 A g⁻¹, which is 6 times that of the crystalline sample. Disordered NixFeyOz@rGO is taken as an example to study the reasons for the enhanced performance. Compared with the crystalline sample, density functional theory calculations reveal a smaller volume expansion during Li⁺ insertion for the structurally disordered NixFeyOz nanoparticles, and they are found to exhibit larger pseudocapacitive effects. Combined with an activated carbon (AC) cathode, full-cell tests of the lithium-ion hybrid supercapacitors are performed, demonstrating that the structurally disordered metal oxide nanoparticles@rGO||AC hybrid systems deliver high energy and power densities within the voltage range of 1.0-4.0 V. These results indicate that structurally disordered nanomaterials might be interesting candidates for exploring high-power anodes for Li-HSCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.; Raymond, R. Jr.; Rawson, G.
1988-01-01
The MISTY PICTURE surface burst was detonated at the White Sands Missile Range in May of 1987. The Los Alamos National Laboratory dust characterization program was expanded to help correlate and interrelate aspects of the overall MISTY PICTURE dust and ejecta characterization program. Pre-shot sampling of the test bed included composite samples from 15 to 75 m distance from Surface Ground Zero (SGZ) representing depths down to 2.5 m, interval samples from 15 to 25 m from SGZ representing depths down to 3 m, and samples of surface material (top 0.5 cm) out to distances of 190 m from SGZ. Sweep-up samples were collected in GREG/SNOB gages located within the DPR. All samples were dry-sieved between 8.0 mm and 0.045 mm (16 size fractions); selected samples were analyzed for fines by a centrifugal settling technique. The size distributions were analyzed using spectral decomposition based upon a sequential fragmentation model. Results suggest that the same particle size subpopulations are present in the ejecta, fallout, and sweep-up samples as are present in the pre-shot test bed. The particle size distribution in post-shot environments apparently can be modelled taking into account heterogeneities in the pre-shot test bed and dominant wind direction during and following the shot. 13 refs., 12 figs., 2 tabs.
Digital image processing of nanometer-size metal particles on amorphous substrates
NASA Technical Reports Server (NTRS)
Soria, F.; Artal, P.; Bescos, J.; Heinemann, K.
1989-01-01
The task of differentiating very small metal aggregates supported on amorphous films from the phase-contrast image features inherently stemming from the support is extremely difficult in the nanometer particle size range. Digital image processing was employed to overcome some of the ambiguities in evaluating such micrographs. It was demonstrated that such processing allowed positive particle detection and a limited degree of statistical size analysis even for micrographs where, by naked-eye examination, the distinction between particles and spurious substrate features would seem highly ambiguous. The smallest size class detected for Pd/C samples peaks at 0.8 nm. This size class was found in various samples prepared under different evaporation conditions, and it is concluded that these particles consist of a 'magic number' of 13 atoms and have cuboctahedral or icosahedral crystal structure.
Sources of variability in collection and preparation of paint and lead-coating samples.
Harper, S L; Gutknecht, W F
2001-06-01
Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that the data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar and pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similarly homogeneous particle size distributions, as required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar and pestle grinding, 4% had 20% or greater loss, with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two both in Pb concentration (mg cm⁻² or %) and areal coating density (g cm⁻²). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg cm⁻² Pb. In conclusion, these sources of variability must be considered in the development and/or application of any sampling and analysis methodologies.
Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?
NASA Astrophysics Data System (ADS)
Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.
2017-12-01
Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible both for setting initial outer TC size and for determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius at which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin during future climates, as revealed by 1) a statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between distributions of outer TC size from current and future climate simulations as shown using two-sample Kolmogorov-Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not uniformly increase within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help identify the source of the spatial variability in outer TC size increases within each basin during future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
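The two significance tests named above can be sketched as follows (synthetic "current" and "future" radii and the 7% shift are illustrative stand-ins for the model output):

```python
# Hedged sketch of the two tests described: a 1,000-sample bootstrap (with
# replacement) of the median and a two-sample Kolmogorov-Smirnov test. The
# synthetic "current" and "future" radii and the 7% shift are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
current = rng.lognormal(np.log(200.0), 0.3, 800)   # fake outer-size radii, km
future = current * 1.07                            # assumed 7% median shift

boot_medians = np.median(
    rng.choice(future, size=(1000, future.size), replace=True), axis=1)
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"future median 95% CI: [{lo:.0f}, {hi:.0f}] km")

ks_stat, p_value = stats.ks_2samp(current, future)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.2e}")
```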
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the sample sizes required for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study also points to the need for improved approaches to cost-effective sampling.
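A minimal sketch of the sample size logic for a clustered prevalence survey, assuming the usual design-effect inflation DEFF = 1 + (m - 1) × ICC rather than the paper's Bayesian model; all numbers are illustrative:

```python
# Hedged sketch: required number of children for a prevalence estimate from a
# cluster survey, inflating the simple-random-sampling size by the design
# effect DEFF = 1 + (m - 1) * ICC. All numbers are illustrative, not the
# paper's Bayesian estimates.
import math

def n_required(p, precision, icc, cluster_size, z=1.96):
    n_srs = z**2 * p * (1 - p) / precision**2       # simple random sampling
    deff = 1 + (cluster_size - 1) * icc             # cluster-design inflation
    return math.ceil(n_srs * deff)

# Lower prevalence needs a larger sample for the same *relative* precision:
for prev in (0.30, 0.10, 0.02):
    print(prev, n_required(prev, precision=0.25 * prev, icc=0.05, cluster_size=25))
```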
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
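A minimal sketch of the recalculation step in such an internal pilot, assuming a standard normal-approximation formula with a design effect (not the authors' simulation code):

```python
# Hedged sketch: re-computing the per-arm sample size of a cluster randomised
# trial after an internal pilot updates the variance and ICC estimates. The
# normal-approximation formula is a standard one assumed here, not the
# authors' simulation code.
import math

def per_arm_n(sigma2, delta, icc, m, z_alpha=1.96, z_beta=0.84):
    """Subjects per arm to detect mean difference delta (80% power, 5% alpha)."""
    n_individual = 2 * (z_alpha + z_beta)**2 * sigma2 / delta**2
    return math.ceil(n_individual * (1 + (m - 1) * icc))

print(per_arm_n(sigma2=1.0, delta=0.3, icc=0.02, m=20))  # planning values: 241
print(per_arm_n(sigma2=1.3, delta=0.3, icc=0.05, m=20))  # pilot-updated: 442
```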
Integrated investigation of the mixed origin of lunar sample 72161,11
NASA Technical Reports Server (NTRS)
Basu, A.; Des Marais, D. J.; Hayes, J. M.; Meinschein, W. G.
1975-01-01
The comminution-agglutination model and the solar-wind implantation-retention model are used to postulate the origins of the particulate components of lunar sample 72161,11, a submillimeter fraction of a surface sample from the dark mantle regolith at LRV-3. Grain-size analysis was performed by wet sieving with liquid argon, and analyses for CO2, CO, CH4, and H2 were carried out by stepwise pyrolysis in a helium atmosphere. The results indicate that the present sample is from a mature regolith, but the agglutinate content is only 30% in the particle-size range between 90 and 177 microns, indicating an apparent departure from steady state. Analyses of the carbon, methane, and hydrogen concentrations in size fractions larger than 149 microns show that the volume-correlated component of these species increases with increased grain size. It is suggested that the observed increase can be explained in terms of mixing of a dominant local population of coarser agglutinates having high carbon and hydrogen concentrations with an imported population of finer agglutinates relatively poor in carbon and hydrogen.
Large exchange bias effect in NiFe2O4/CoO nanocomposites
NASA Astrophysics Data System (ADS)
Mohan, Rajendra; Prasad Ghosh, Mritunjoy; Mukherjee, Samrat
2018-03-01
In this work, we report the exchange bias effect in NiFe2O4/CoO nanocomposites, synthesized via a chemical co-precipitation method. Four samples with particle sizes ranging from 4 nm to 31 nm were prepared, with the annealing temperature varying from 200 °C to 800 °C. X-ray diffraction analysis of all the samples confirmed the presence of the cubic spinel phase of nickel ferrite along with the CoO phase, without trace of any impurity. Sizes of the particles were studied from transmission electron micrographs and were found to be in agreement with those estimated from X-ray diffraction. Field-cooled (FC) hysteresis loops at 5 K revealed an exchange bias (HE) of 2.2 kOe for the sample heated at 200 °C, which decreased with increasing particle size. Exchange bias expectedly vanished at 300 K due to high thermal energy (kBT) and low effective surface anisotropy. M-T curves revealed a blocking temperature of 135 K for the sample with the smallest particle size.
McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca
2016-01-01
Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
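A minimal sketch of the classification step with scikit-learn, using synthetic stand-ins for the per-sample median colour, shape and size traits (the feature values and grade labels are invented for illustration):

```python
# Hedged sketch of the classification step: linear discriminant analysis on
# per-sample median traits. The two synthetic "grades" and their feature
# values are invented stand-ins for the image-derived colour/shape/size data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 200
grade_a = rng.normal([0.80, 0.50, 6.0], 0.05, (n, 3))  # colour, shape, size medians
grade_b = rng.normal([0.60, 0.55, 5.0], 0.05, (n, 3))
X = np.vstack([grade_a, grade_b])
y = np.array(["A"] * n + ["B"] * n)

lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])  # calibrate on half
print("validation accuracy:", lda.score(X[1::2], y[1::2]))
```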
High-Throughput and Label-Free Single Nanoparticle Sizing Based on Time-Resolved On-Chip Microscopy
2015-02-17
... soot, ice crystals in clouds, and engineered nanomaterials, among others. While there exist various nanoparticle detection and sizing... the sample of interest is placed on an optoelectronic sensor array with typically less than 0.5 mm gap (z2) between the sample and sensor planes such... that, under unit magnification, the entire sensor active area serves as the imaging FOV, easily reaching >20-30 mm² with state-of-the-art CMOS
Lu, Y.; Rostam-Abadi, M.; Chang, R.; Richardson, C.; Paradis, J.
2007-01-01
Nine fly ash samples were collected from the particulate collection devices (baghouse or electrostatic precipitator) of four full-scale pulverized coal (PC) utility boilers burning eastern bituminous coals (EB-PC ashes) and three cyclone utility boilers burning either Powder River Basin (PRB) coals or PRB blends (PRB-CYC ashes). As-received fly ash samples were mechanically sieved to obtain six size fractions. Unburned carbon (UBC) content, mercury content, and Brunauer-Emmett-Teller (BET) N2 surface areas of as-received fly ashes and their size fractions were measured. In addition, UBC particles were examined by scanning electron microscopy, high-resolution transmission electron microscopy, and thermogravimetry to obtain information on their surface morphology, structure, and oxidation reactivity. It was found that the UBC particles contained amorphous carbon, ribbon-shaped graphitic carbon, and highly ordered graphite structures. The mercury contents of the UBCs (Hg/UBC, in ppm) in raw ash samples were comparable to those of the UBC-enriched samples, indicating that mercury was mainly adsorbed on the UBC in fly ash. The UBC content decreased with decreasing particle size range for all nine ashes. There was no correlation between the mercury and UBC contents of different size fractions of as-received ashes. The mercury content of the UBCs in each size fraction, however, generally increased with decreasing particle size for the nine ashes. The mercury contents and surface areas of the UBCs in the PRB-CYC ashes were about 8 and 3 times higher, respectively, than those of the UBCs in the EB-PC ashes. It appears that both the particle size and surface area of UBC could contribute to mercury capture. The particle size of the UBC in PRB-CYC ash, and thus the external mass transfer, was found to be the major factor impacting mercury adsorption. Both the particle size and surface reactivity of the UBC in EB-PC ash, which generally had a lower carbon oxidation reactivity than the PRB-CYC ashes, appeared to be important for mercury adsorption. © 2007 American Chemical Society.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
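The two strategies can be sketched as follows, assuming the simple rescaling chi2_adjusted = chi2_full × (n_target / n_full) and an illustrative binned normal-fit chi-square; neither is necessarily the exact function studied:

```python
# Hedged sketch: compare (i) rescaling a chi-square fit statistic to a smaller
# sample size with (ii) recomputing it on an actual random subsample. The
# rescaling chi2 * (n_target / n_full) and the normal-fit test are assumptions
# for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
full = rng.normal(size=21_000)

def chi2_norm_fit(x, bins=20):
    """Pearson chi-square of x against a normal fitted to x (binned)."""
    counts, edges = np.histogram(x, bins=bins)
    probs = np.diff(stats.norm.cdf(edges, x.mean(), x.std()))
    expected = probs / probs.sum() * x.size
    return ((counts - expected) ** 2 / expected).sum()

chi2_full = chi2_norm_fit(full)
for n in (5_000, 1_000):
    adjusted = chi2_full * n / full.size
    resampled = chi2_norm_fit(rng.choice(full, n, replace=False))
    print(n, round(adjusted, 1), round(resampled, 1))
```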
Xu, Huacheng; Guo, Laodong
2017-06-15
Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to its chemical composition and size distribution. To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size-fractionated through a series of micro- and ultra-filtrations with membranes of different pore sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. The abundance of dissolved organic carbon, total carbohydrates, and chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size-dependent DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices, also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacturer-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both sample types and the membranes used. Our results provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling misidentification errors that result from use of genetic tags in capture-recapture studies
Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.
2011-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
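A minimal simulation of the overestimation effect, using a two-occasion Lincoln-Petersen estimator as a stand-in for the "traditional estimator" and an assumed error mechanism in which each capture is misread as a new identity with probability alpha:

```python
# Hedged sketch: overestimation of population size when ghost identities are
# ignored. A two-occasion Lincoln-Petersen estimator stands in for the
# "traditional estimator"; the error mechanism (each capture is misread as a
# brand-new identity with probability alpha) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(11)
N, p, alpha, reps = 500, 0.3, 0.1, 2_000
estimates = []
for _ in range(reps):
    occasions = []
    for _ in range(2):
        caught = np.flatnonzero(rng.random(N) < p)
        ghost = rng.random(caught.size) < alpha
        # true IDs are 0..N-1; ghosts get unique negative labels that can
        # never match across occasions
        labels = np.where(ghost, -rng.integers(1, 10**9, caught.size), caught)
        occasions.append(set(labels.tolist()))
    n1, n2 = len(occasions[0]), len(occasions[1])
    m = len(occasions[0] & occasions[1])
    if m > 0:
        estimates.append(n1 * n2 / m)
print(np.mean(estimates))   # well above the true N = 500
```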
Naidoo, V; du Preez, M; Rakgotho, T; Odhav, B; Buckley, C A
2002-01-01
Industrial effluents and leachates from hazardous landfill sites were tested for toxicity using the anaerobic toxicity assay. This test was done on several industrial effluents (brewery spent grain effluent, a chemical industry effluent, size effluent), and several hazardous landfill leachates giving vastly different toxicity results. The brewery effluent, spent grain effluent and size effluent were found to be less toxic than the chemical effluent and hazardous landfill leachate samples. The chemical industry effluent was found to be most toxic. Leachate samples from the H:h classified hazardous landfill site were found to be less toxic at high concentrations (40% (v/v)) while the H:H hazardous landfill leachate samples were found to be more toxic even at low concentrations of 4% (v/v). The 30 d biochemical methane potential tests revealed that the brewery effluent, organic spent grain effluent and size effluent were 89%, 63%, and 68% biodegradable, respectively. The leachate from Holfontein hazardous landfill site was least biodegradable (19%) while the chemical effluent and Aloes leachate were 29% and 32% biodegradable under anaerobic conditions.
Effects of lint cleaning on lint trash particle size distribution
USDA-ARS's Scientific Manuscript database
Cotton quality trash measurements used today typically yield a single value for trash parameters for a lint sample (i.e. High Volume Instrument – percent area; Advanced Fiber Information System – total count, trash size, dust count, trash count, and visible foreign matter). A Cotton Trash Identifica...
USDA-ARS's Scientific Manuscript database
A two-dimensional chromatography method for analyzing anionic targets (specifically phytate) in complex matrices is described. Prior to quantification by anion exchange chromatography, the sample matrix was prepared by size exclusion chromatography, which removed the majority of matrix complexities....
Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello
2012-01-01
Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by averages of as much as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
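A sketch of how operating characteristics for a three-class LQAS design can be computed from the binomial distribution; the cut-offs below (n = 15, d1 = 2, d2 = 7 positives) are illustrative assumptions, not the designs derived in the paper:

```python
# Hedged sketch: operating characteristics of a three-class LQAS design from
# the binomial distribution. n = 15 with cut-offs d1 = 2 and d2 = 7 positives
# are illustrative assumptions, not the paper's derived designs.
import numpy as np
from scipy import stats

n, d1, d2 = 15, 2, 7
for p in (0.05, 0.10, 0.20, 0.30, 0.50, 0.70):
    pmf = stats.binom.pmf(np.arange(n + 1), n, p)
    p_low = pmf[:d1 + 1].sum()      # classified "<=10%"
    p_mid = pmf[d1 + 1:d2].sum()    # classified ">10% and <50%"
    p_high = pmf[d2:].sum()         # classified ">=50%"
    print(f"p={p:.2f}  low={p_low:.2f}  mid={p_mid:.2f}  high={p_high:.2f}")
```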
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
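The core simulation logic can be sketched as follows, with a negative-binomial field as an assumed stand-in for patchy surfclam abundance and an "availability event" defined as above (survey mean outside 75-125% of truth):

```python
# Hedged sketch of the simulation logic: draw k random samples from a patchy
# abundance field and count how often the survey mean lands outside 75-125% of
# the true mean ("availability event"). The negative-binomial field is an
# assumed stand-in for real surfclam patchiness.
import numpy as np

rng = np.random.default_rng(5)
field = rng.negative_binomial(0.3, 0.01, size=100_000)  # highly aggregated counts
true_mean = field.mean()

for k in (4, 8, 15, 30):
    means = rng.choice(field, size=(5_000, k)).mean(axis=1)
    low = np.mean(means < 0.75 * true_mean)
    high = np.mean(means > 1.25 * true_mean)
    print(f"k={k:2d}  P(event)={low + high:.2f}  biased low={low:.2f}  high={high:.2f}")
```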
Bed-sediment grain-size and morphologic data from Suisun, Grizzly, and Honker Bays, CA, 1998-2002
Hampton, Margaret A.; Snyder, Noah P.; Chin, John L.; Allison, Dan W.; Rubin, David M.
2003-01-01
The USGS Place Based Studies Program for San Francisco Bay investigates this sensitive estuarine system to aid in resource management. As part of the inter-disciplinary research program, the USGS collected side-scan sonar data and bed-sediment samples from north San Francisco Bay to characterize bed-sediment texture and investigate temporal trends in sedimentation. The study area is located in central California and consists of Suisun Bay, and Grizzly and Honker Bays, sub-embayments of Suisun Bay. During the study (1998-2002), the USGS collected three side-scan sonar data sets and approximately 300 sediment samples. The side-scan data revealed predominantly fine-grained material on the bayfloor. We also mapped five different bottom types from the data set, categorized as featureless, furrows, sand waves, machine-made, and miscellaneous. We performed detailed grain-size and statistical analyses on the sediment samples. Overall, we found that grain size ranged from clay to fine sand, with the coarsest material in the channels and finer material located in the shallow bays. Grain-size analyses revealed high spatial variability in size distributions in the channel areas. In contrast, the shallow regions exhibited low spatial variability and consistent sediment size over time.
Temporal change in the size distribution of airborne Radiocesium derived from the Fukushima accident
NASA Astrophysics Data System (ADS)
Kaneyasu, Naoki; Ohashi, Hideo; Suzuki, Fumie; Okuda, Tomoaki; Ikemori, Fumikazu; Akata, Naofumi
2013-04-01
The accident at the Fukushima Dai-ichi nuclear power plant discharged a large amount of radioactive materials into the environment. Forty days after the accident, we started to collect size-segregated aerosol at Tsukuba City, Japan, located 170 km south of the plant, using a low-pressure cascade impactor. The sampling continued from April 28 through October 26, 2011. Eight sample sets were collected in total. The radioactivities of 134Cs and 137Cs in the aerosols collected at each stage were determined by gamma-ray spectrometry with a high-sensitivity germanium detector. After the gamma-ray spectrometry analysis, the chemical species in the aerosols were analyzed. The analyses of the first (April 28-May 12) and second (May 12-26) samples showed that the activity size distributions of 134Cs and 137Cs in aerosols reside mostly in the accumulation-mode size range. These activity size distributions almost overlapped with the mass size distribution of non-sea-salt sulfate aerosol. From these results, we regard sulfate as the main transport medium of these radionuclides, and re-suspended soil particles with attached radionuclides were not the major airborne radioactive substances by the end of May 2011 (Kaneyasu et al., 2012). We further conducted a successive extraction experiment of radiocesium from the aerosol deposits on the aluminum sheet substrate (8th stage of the first aerosol sample, 0.5-0.7 μm in aerodynamic diameter) with water and 0.1 M HCl. In contrast to the relatively insoluble property of Chernobyl radionuclides, those in the fine-mode aerosols collected at Tsukuba are completely water-soluble (100%). From the third aerosol sample onward, the activity size distributions started to change, i.e., the major peak in the accumulation-mode size range seen in the first and second aerosol samples became smaller and an additional peak appeared in the coarse-mode size range. The comparison of the activity size distributions of radiocesium and the mass size distributions of major aerosol components collected by the end of August 2011 (i.e., sample No. 5) and its implications will be discussed in the presentation. Reference: Kaneyasu et al., Environ. Sci. Technol. 46, 5720-5726 (2012).
Sample size and power considerations in network meta-analysis
2012-01-01
Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
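The effective-sample-size idea for a single indirect comparison can be sketched with the standard heuristic that the variance of the indirect estimate is the sum of the two direct variances (treated here as an assumption of the sketch, not a formula quoted from the paper):

```python
# Hedged sketch: the effective sample size of an indirect comparison A vs B
# through common comparator C, using the standard heuristic that the indirect
# variance is the sum of the two direct variances (an assumption here).
def effective_sample_size(n_ac, n_bc):
    return n_ac * n_bc / (n_ac + n_bc)

print(effective_sample_size(400, 400))    # 200.0 -- half of either direct trial
print(effective_sample_size(1000, 100))   # ~90.9 -- the weaker link dominates
```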
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.
2017-06-01
In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
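A minimal two-parameter sketch of the copula resampling idea (Gaussian copula with empirical marginals; an illustration of the general technique, not the paper's implementation):

```python
# Hedged sketch of the resampling idea in CopPF: fit a Gaussian copula to a
# weighted two-parameter particle cloud and draw fresh particles from it,
# preserving the correlation structure. Minimal illustration, not the paper's
# implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
particles = rng.multivariate_normal([1.0, 0.5],
                                    [[0.040, 0.018], [0.018, 0.010]], 100)
weights = rng.random(100)
weights /= weights.sum()

# Weight-aware bootstrap, then map to normal scores via empirical ranks.
idx = rng.choice(100, size=100, p=weights)
cloud = particles[idx]
z = stats.norm.ppf((stats.rankdata(cloud, axis=0) - 0.5) / 100)
copula_corr = np.corrcoef(z.T)

# Sample new normal scores and push back through the empirical marginals.
z_new = rng.multivariate_normal(np.zeros(2), copula_corr, 100)
u_new = stats.norm.cdf(z_new)
new_particles = np.column_stack(
    [np.quantile(cloud[:, j], u_new[:, j]) for j in range(2)])
print(new_particles.mean(axis=0), np.corrcoef(new_particles.T)[0, 1])
```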
Sample injector for high pressure liquid chromatography
Paul, Phillip H.; Arnold, Don W.; Neyer, David W.
2001-01-01
Apparatus and method for driving a sample, having a well-defined volume, under pressure into a chromatography column. A conventional high pressure sampling valve is replaced by a sample injector composed of a pair of injector components connected in series to a common junction. The injector components are containers of porous dielectric material constructed so as to provide for electroosmotic flow of a sample into the junction. At an appropriate time, a pressure pulse from a high pressure source, that can be an electrokinetic pump, connected to the common junction, drives a portion of the sample, whose size is determined by the dead volume of the common junction, into the chromatographic column for subsequent separation and analysis. The apparatus can be fabricated on a substrate for microanalytical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, P., E-mail: pchow@carnegiescience.edu; Xiao, Y. M.; Rod, E.
2015-07-15
The double-differential scattering cross-section for the inelastic scattering of x-ray photons from electrons is typically orders of magnitude smaller than that of elastic scattering. With samples 10-100 μm in size in a diamond anvil cell at high pressure, the inelastic x-ray scattering signals from samples are obscured by scattering from the cell gasket and diamonds. One major experimental challenge is to measure a clean inelastic signal from the sample in a diamond anvil cell. Among the many strategies for doing this, we have used a focusing polycapillary as a post-sample optic, which allows essentially only scattered photons within its input field of view to be refocused and transmitted to the backscattering energy analyzer of the spectrometer. We describe the modified inelastic x-ray spectrometer and its alignment. With a focused incident beam that matches the sample size and the field of view of the polycapillary, at relatively large scattering angles, the polycapillary effectively reduces parasitic scattering from the diamond anvil cell gasket and diamonds. Raw data collected from the helium exciton measured by x-ray inelastic scattering at high pressure using the polycapillary method are compared with those using conventional post-sample slit collimation.
NASA Astrophysics Data System (ADS)
Røising, Henrik Schou; Simon, Steven H.
2018-03-01
Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the center of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.
Sulfuric acid intercalated-mechanical exfoliation of reduced graphene oxide from old coconut shell
NASA Astrophysics Data System (ADS)
Islamiyah, Wildatun; Nashirudin, Luthfi; Baqiya, Malik A.; Cahyono, Yoyok; Darminto
2018-04-01
We report a facile preparation of reduced graphene oxide (rGO) from an old coconut shell by rapid reduction through heating at 400 °C, chemical exfoliation using H2SO4 and HCl intercalation, and mechanical exfoliation using ultrasonication. The produced samples consist of random stacks of nanometer-sized sheets. The dispersions prepared with H2SO4 had broader size distributions and larger particle sizes than that prepared with HCl. The average rGO sizes in H2SO4 and HCl are 23.62 nm and 570.4 nm, respectively. Furthermore, the sample prepared in H2SO4 exhibited a high electrical conductivity of 1.1 × 10⁻³ S/m with a low energy gap of 0.11 eV.
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
NASA Astrophysics Data System (ADS)
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
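At its core, an eddy-covariance flux is the covariance of vertical wind and concentration fluctuations; a minimal sketch with synthetic 10 Hz data (the correlation imposed between w and c is an arbitrary illustration, not WAGES data):

```python
# Hedged sketch: the eddy-covariance flux is the mean product of vertical-wind
# and concentration fluctuations, F = <w'c'>. Synthetic 10 Hz data; the imposed
# w-c coupling (slope 0.2) is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 36_000                                   # one hour at 10 Hz
w = rng.normal(0.0, 0.4, n)                  # vertical wind, m/s
c = 5.0 + 0.2 * w + rng.normal(0.0, 1.0, n)  # particle concentration, cm^-3

flux = np.mean((w - w.mean()) * (c - c.mean()))
print(flux)   # ~0.2 * var(w) = 0.032 (concentration units x m/s)
```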
Pelvic dimorphism in relation to body size and body size dimorphism in humans.
Kurki, Helen K
2011-12-01
Many mammalian species display sexual dimorphism in the pelvis, where females possess larger dimensions of the obstetric (pelvic) canal than males. This is contrary to the general pattern of body size dimorphism, where males are larger than females. Pelvic dimorphism is often attributed to selection relating to parturition, or as a developmental consequence of secondary sexual differentiation (different allometric growth trajectories of each sex). Among anthropoid primates, species with higher body size dimorphism have higher pelvic dimorphism (in converse directions), which is consistent with an explanation of differential growth trajectories for pelvic dimorphism. This study investigates whether the pattern holds intraspecifically in humans by asking: Do human populations with high body size dimorphism also display high pelvic dimorphism? Previous research demonstrated that in some small-bodied populations, relative pelvic canal size can be larger than in large-bodied populations, while others have suggested that larger-bodied human populations display greater body size dimorphism. Eleven human skeletal samples (total N: male = 229, female = 208) were utilized, representing a range of body sizes and geographical regions. Skeletal measurements of the pelvis and femur were collected and indices of sexual dimorphism for the pelvis and femur were calculated for each sample [ln(M/F)]. Linear regression was used to examine the relationships between indices of pelvic and femoral size dimorphism, and between pelvic dimorphism and female femoral size. Contrary to expectations, the results suggest that pelvic dimorphism in humans is generally not correlated with body size dimorphism or female body size. These results indicate that divergent patterns of dimorphism exist for the pelvis and body size in humans. Implications for the evaluation of the evolution of pelvic dimorphism and rotational childbirth in Homo are considered. Copyright © 2011 Elsevier Ltd. All rights reserved.
Dilution effects on ultrafine particle emissions from Euro 5 and Euro 6 diesel and gasoline vehicles
NASA Astrophysics Data System (ADS)
Louis, Cédric; Liu, Yao; Martinet, Simon; D'Anna, Barbara; Valiente, Alvaro Martinez; Boreave, Antoinette; R'Mili, Badr; Tassel, Patrick; Perret, Pascal; André, Michel
2017-11-01
Dilution and temperature used during sampling of vehicle exhaust can modify particle number concentration and size distribution. Two experiments were performed on a chassis dynamometer to assess the effects of exhaust dilution and temperature on particle number and particle size distribution for Euro 5 and Euro 6 vehicles. In the first experiment, the effects of dilution (ratio from 8 to 4,000) and temperature (ranging from 50 °C to 150 °C) on particle quantification were investigated directly at the tailpipe for diesel and gasoline Euro 5 vehicles. In the second experiment, particle emissions from Euro 6 diesel and gasoline vehicles sampled directly from the tailpipe were compared to constant volume sampling (CVS) measurements under similar sampling conditions. Low primary dilutions (3-5) induced an increase in particle number concentration by a factor of 2 compared to high primary dilutions (12-20). Low dilution temperatures (50 °C) induced 1.4-3 times higher particle number concentrations than high dilution temperatures (150 °C). For the Euro 6 gasoline vehicle with direct injection, CVS particle number concentrations were higher than those at the tailpipe by factors of 6, 80 and 22 for the Artemis urban, road and motorway cycles, respectively. For the same vehicle, the particle size distribution measured at the tailpipe was centred on 10 nm, with particles smaller than those measured after the CVS, whose distribution was centred between 50 nm and 70 nm. The high particle concentration (≈10⁶ #/cm³) and the growth of diameter measured in the CVS highlighted aerosol transformations, such as nucleation, condensation and coagulation, occurring in the sampling system, which might have biased the particle measurements.
Eichmann, Cordula; Parson, Walther
2008-09-01
The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have successfully been applied to ancient and forensic samples such as bones and teeth that show a high degree of degradation.
The Tissint Martian meteorite as evidence for the largest impact excavation.
Baziotis, Ioannis P; Liu, Yang; DeCarli, Paul S; Melosh, H Jay; McSween, Harry Y; Bodnar, Robert J; Taylor, Lawrence A
2013-01-01
High-pressure minerals in meteorites provide clues for the impact processes that excavated, launched and delivered these samples to Earth. Most Martian meteorites are suggested to have been excavated from 3 to 7 km diameter impact craters. Here we show that the Tissint meteorite, a 2011 meteorite fall, contains virtually all the high-pressure phases (seven minerals and two mineral glasses) that have been reported in isolated occurrences in other Martian meteorites. Particularly, one ringwoodite (75 × 140 μm²) represents the largest grain observed in all Martian samples. Collectively, the ubiquitous high-pressure minerals of unusually large sizes in Tissint indicate that shock metamorphism was widely dispersed in this sample (~25 GPa and ~2,000 °C). Using the size and growth kinetics of the ringwoodite grains, we infer an initial impact crater with ~90 km diameter, with a factor of 2 uncertainty. These energetic conditions imply alteration of any possible low-T minerals in Tissint.
Everett, C.R.; Chin, Y.-P.; Aiken, G.R.
1999-01-01
A 1,000-Dalton tangential-flow ultrafiltration (TFUF) membrane was used to isolate dissolved organic matter (DOM) from several freshwater environments. The TFUF unit used in this study was able to completely retain a polystyrene sulfonate 1,800-Dalton standard. Unaltered and TFUF-fractionated DOM molecular weights were assayed by high-pressure size exclusion chromatography (HPSEC). The weight-averaged molecular weights of the retentates were larger than those of the raw water samples, whereas the filtrates were all significantly smaller and approximately the same size or smaller than the manufacturer-specified pore size of the membrane. Moreover, at 280 nm the molar absorptivity of the DOM retained by the ultrafilter is significantly larger than the material in the filtrate. This observation suggests that most of the chromophoric components are associated with the higher molecular weight fraction of the DOM pool. Multivalent metals in the aqueous matrix also affected the molecular weights of the DOM molecules. Typically, proton-exchanged DOM retentates were smaller than untreated samples. This TFUF system appears to be an effective means of isolating aquatic DOM by size, but the ultimate size of the retentates may be affected by the presence of metals and by configurational properties unique to the DOM phase.
He, Guoai; Tan, Liming; Liu, Feng; Huang, Lan; Huang, Zaiwang; Jiang, Liang
2017-01-01
Controlling grain size in polycrystalline nickel-base superalloys is vital for obtaining the required mechanical properties. Typically, a uniform and fine grain size is required throughout the forging process to realize superplastic deformation. The amount of strain plays a dominant role in controlling the dynamic recrystallization (DRX) process and regulating the grain size of the alloy during hot forging. In this article, a high-throughput double-cone specimen was introduced to yield a wide range of strain in a single sample. Continuous variation of effective strain, ranging from 0.23 to 1.65 across the whole sample, was achieved after reaching a height reduction of 70%. The measured grain size decreased from the edge to the center of the specimen with increasing effective strain. Small misorientations tended to develop near the grain boundaries, manifested as dislocation pile-up at the microscale. After the dislocation density reached a critical value, DRX was initiated in the more highly deformed regions, leading to grain refinement. During this process, transformations from low-angle grain boundaries (LAGBs) to high-angle grain boundaries (HAGBs) and from subgrains to DRX grains were found to occur. After completion of DRX, the newly formed grains exhibited similar orientations inside the grain boundaries. PMID:28772514
Nanoparticle formation of deposited Agn-clusters on free-standing graphene
NASA Astrophysics Data System (ADS)
Al-Hada, M.; Peters, S.; Gregoratti, L.; Amati, M.; Sezen, H.; Parisse, P.; Selve, S.; Niermann, T.; Berger, D.; Neeb, M.; Eberhardt, W.
2017-11-01
Size-selected Agn-clusters on unsupported graphene of a commercial Quantifoil sample have been investigated by surface- and element-specific techniques such as transmission electron microscopy (TEM), spatially-resolved inner-shell X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES). An agglomeration of the highly mobile clusters into Ag-nanodots of 2-3 nm is observed. Moreover, crystalline as well as non-periodic fivefold-symmetric structures of the Ag-nanoparticles are evidenced by high-resolution TEM. Using a lognormal size distribution as revealed by TEM, the measured positive binding energy shift of the air-exposed Ag-nanodots can be explained by the size-dependent dynamical liquid-drop model.
Olives, Casey; Valadez, Joseph J; Pagano, Marcello
2014-03-01
To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation from curtailed designs is not only possible but also comes with a reduced sample size. © 2014 John Wiley & Sons Ltd.
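The bias described here is easy to reproduce. A minimal simulation sketch, assuming a stop-at-decision (semicurtailed) rule with n = 60 and decision rule d = 9 as in the abstract (the paper's bias-corrected estimators are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

def semicurtailed_draw(p, n=60, d=9):
    """Sample Bernoulli(p) until the lot decision is forced:
    accept once successes reach d, reject once failures make
    reaching d successes within n impossible."""
    s = f = 0
    while s < d and f <= n - d:
        if rng.random() < p:
            s += 1
        else:
            f += 1
    return s, s + f  # successes, total observed

p_true = 0.30
draws = [semicurtailed_draw(p_true) for _ in range(20000)]
naive = np.mean([s / m for s, m in draws])
print(f"true coverage {p_true:.2f}, naive estimate {naive:.3f}")
# The naive proportion is biased because the stopping rule
# makes the final observation informative.
```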
Study design requirements for RNA sequencing-based breast cancer diagnostics.
Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias
2016-02-01
Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
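The subsampling design described above amounts to tracing a learning curve. The following toy example (synthetic data standing in for an expression matrix, and a plain logistic classifier rather than the authors' models) illustrates the approach:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for an expression matrix: 2,000 "genes", 5 subtype-like classes
X, y = make_classification(n_samples=1000, n_features=2000,
                           n_informative=60, n_classes=5,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=250, stratify=y, random_state=0)

# Accuracy as a function of training sample size, as in the subsampling study
for n in (100, 200, 350, 750):
    idx = rng.choice(len(X_train), size=n, replace=False)
    clf = LogisticRegression(max_iter=2000).fit(X_train[idx], y_train[idx])
    print(n, round(clf.score(X_test, y_test), 3))
```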
Thermophoretic separation of aerosol particles from a sampled gas stream
Postma, A.K.
1984-09-07
This disclosure relates to separation of aerosol particles from gas samples withdrawn from within a contained atmosphere, such as containment vessels for nuclear reactors or other process equipment where remote gaseous sampling is required. It is specifically directed to separation of dense aerosols including particles of any size and at high mass loadings and high corrosivity. The United States Government has rights in this invention pursuant to Contract DE-AC06-76FF02170 between the US Department of Energy and Westinghouse Electric Corporation.
NASA Astrophysics Data System (ADS)
Tulej, Marek; Wiesendanger, Reto; Neuland, Maike; Meyer, Stefan; Wurz, Peter; Neubeck, Anna; Ivarsson, Magnus; Riedo, Valentine; Moreno-Garcia, Pavel; Riedo, Andreas; Knopp, Gregor
2017-04-01
Investigation of elemental and isotope compositions of planetary solids with high spatial resolution is of considerable interest to current space research. Planetary materials are typically highly heterogeneous, and such studies can deliver detailed chemical information on individual sample components with sizes down to a few micrometres. The results of such investigations can yield mineralogical surface context, including the mineralogy of individual grains or the elemental composition of other objects embedded in the sample surface, such as micro-sized fossils. The identification of bio-relevant material can follow from the detection of bio-relevant elements and their isotope fractionation effects [1, 2]. For chemical analysis of heterogeneous solid surfaces we have combined a miniature laser ablation mass spectrometer (LMS) (mass resolution (m/Δm) 400-600; dynamic range 10⁵-10⁸) with an in situ microscope-camera system (spatial resolution ∼2 μm, depth 10 μm). The microscope helps to locate micrometre-sized solids across the sample surface for direct mass spectrometric analysis by the LMS instrument. The LMS instrument combines an fs-laser ion source and a miniature reflectron-type time-of-flight mass spectrometer. Mass spectrometric analysis of the objects selected on the sample surface follows the ablation, atomisation and ionisation of the sample by focussed laser radiation (775 nm, 180 fs, 1 kHz; spot size ∼20 μm) [4, 5, 6]. Mass spectra of almost all elements (isotopes) present in the investigated location are measured instantaneously. A number of heterogeneous rock samples containing micrometre-sized fossils and mineralogical grains were investigated with high selectivity and sensitivity. Filamentous structures observed in carbonate veins (in harzburgite) and amygdales in pillow basalt lava were well characterised chemically, yielding the elemental and isotope composition of these objects [7, 8]. The investigation can be performed with high selectivity, since the host composition typically differs markedly from that of the analysed objects. In-depth chemical analysis (chemical profiling) is found to be particularly helpful, allowing relatively easy separation of the chemical composition of the host from that of the investigated objects [6]. Hence, the chemical analysis of both the environment and the microstructures can be derived. Isotope compositions can be measured with a high level of confidence; nevertheless, the presence of clusters of similar masses can sometimes make this analysis difficult. Based on this work, we are confident that similar studies can be conducted in situ on planetary surfaces, delivering important chemical context and evidence of bio-relevant processes. [1] Summons et al., Astrobiology, 11, 157, 2011. [2] Wurz et al., Sol. Sys. Res. 46, 408, 2012. [3] Riedo et al., J. Anal. Atom. Spectrom. 28, 1256, 2013. [4] Riedo et al., J. Mass Spectrom. 48, 1, 2013. [5] Tulej et al., Geostand. Geoanal. Res., 38, 423, 2014. [6] Grimaudo et al., Anal. Chem. 87, 2041, 2015. [7] Tulej et al., Astrobiology, 15, 1, 2015. [8] Neubeck et al., Int. J. Astrobiology, 15, 133, 2016.
NASA Astrophysics Data System (ADS)
Elsabawy, Khaled M.; Fallatah, Ahmed M.; Alharthi, Salman S.
2018-07-01
For the first time, a high energy helium-silver (He-Ag) laser, which belongs to the category of metal-vapor lasers, was applied as a microstructure promoter for an optimally Ir-doped MgB2 sample. The optimally Ir-doped Mg0.94Ir0.06B2 superconducting sample was selected from an article previously published by one of the authors. The samples were irradiated with three different doses (1, 2 and 3 h) from an ultrahigh energy He-Ag laser with an average power of 103 W/cm2 at a distance of 3 cm. Superconducting measurements and microstructural features were investigated as a function of He-Ag laser irradiation dose. The results indicated that irradiation with the ultrahigh energy He-Ag laser refined the grains to smaller sizes and consequently enhanced the measured Jc values. Furthermore, the Tc offsets of all irradiated samples are better than that of the non-irradiated Mg0.94Ir0.06B2.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
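The comparison described can be illustrated with a short simulation. The sketch below uses a simple mean cost difference on lognormal (skewed) data rather than a full incremental-net-benefit analysis, and contrasts the CLT-based and bootstrap standard errors:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
costs_a = rng.lognormal(mean=8.0, sigma=1.2, size=n)  # skewed treatment costs
costs_b = rng.lognormal(mean=7.8, sigma=1.2, size=n)
diff = costs_a.mean() - costs_b.mean()

# CLT-based standard error of the mean difference
se_clt = np.sqrt(costs_a.var(ddof=1) / n + costs_b.var(ddof=1) / n)

# Non-parametric bootstrap standard error
boot = np.empty(5000)
for b in range(5000):
    boot[b] = rng.choice(costs_a, n).mean() - rng.choice(costs_b, n).mean()
se_boot = boot.std(ddof=1)

print(f"diff {diff:,.0f}; SE (CLT) {se_clt:,.0f}; SE (bootstrap) {se_boot:,.0f}")
```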
Variation of radiation level and radionuclide enrichment in high background area.
Shetty, P K; Narayana, Y
2010-12-01
Significantly high radiation levels and radionuclide concentrations along the Quilon beach area of coastal Kerala have been reported by several investigators. A detailed gamma radiation level survey was carried out using a portable scintillometer. Detailed studies of radionuclide concentrations in different environmental matrices of the high background areas were undertaken in the coastal areas of Karunagapalli, Kayankulam, Chavara, Neendakara and Kollam to study the distribution and enrichment of radionuclides in the region. The absorbed gamma dose rates in air in the high background area are in the range 43-17,400 nGy h⁻¹. The gamma radiation level is found to be maximum at a distance of 20 m from the sea waterline on all beaches. The soil samples collected from different locations were analysed for primordial radionuclides by gamma spectrometry. The activity of primordial radionuclides was determined for different size fractions of soil to study the enrichment pattern. The activities of ²³²Th and ²²⁶Ra were most enriched in the 125-63 μm size fraction. Preferential accumulation of ⁴⁰K was found in the <63 μm fraction. The minimum ²³²Th activity was 30.2 Bq kg⁻¹, found in the 1000-500 μm particle size fraction at Kollam, and the maximum activity of 3250.4 Bq kg⁻¹ was observed in grains of size 125-63 μm at Neendakara. The lowest ²²⁶Ra activity observed was 33.9 Bq kg⁻¹ at Neendakara in grains of size 1000-500 μm, and the highest activity observed was 482.6 Bq kg⁻¹ in grains of size 125-63 μm at Neendakara. The highest ⁴⁰K activity found was 1923 Bq kg⁻¹ in grains of size <63 μm for a sample collected from Neendakara. A good correlation was observed between the computed dose and the measured dose in air. The correlation between ²³²Th and ²²⁶Ra was also moderately high. The results of these investigations are presented and discussed in this paper. Copyright © 2010 Elsevier Ltd. All rights reserved.
Chen, Hua-xing; Tang, Hong-ming; Duan, Ming; Liu, Yi-gang; Liu, Min; Zhao, Feng
2015-01-01
In this study, the effects of gravitational settling time, temperature, speed and time of centrifugation, flocculant type and dosage, bubble size and gas amount were investigated. The results show that simply increasing settling time and temperature is of no use for oil-water separation of the three wastewater samples. As far as oil-water separation efficiency is concerned, increasing centrifugal speed and centrifugal time is highly effective for the L sample, has some effect on the J sample, but is ineffective for the S sample. The flocculants are highly effective for the S and L samples, and the oil-water separation efficiency increases with increasing concentration of inorganic cationic flocculants. There exist critical reagent concentrations for the organic cationic and the nonionic flocculants; a higher or lower flocculant concentration would decrease the treatment efficiency. Flotation is an effective approach for oil-water separation of polymer-containing wastewater from the three oilfields. The oil-water separation efficiency can be enhanced by increasing flotation agent concentration, flotation time and gas amount, and by decreasing bubble size.
L2 Reading Comprehension and Its Correlates: A Meta-Analysis
ERIC Educational Resources Information Center
Jeon, Eun Hee; Yamashita, Junko
2014-01-01
The present meta-analysis examined the overall average correlation (weighted for sample size and corrected for measurement error) between passage-level second language (L2) reading comprehension and 10 key reading component variables investigated in the research domain. Four high-evidence correlates (with 18 or more accumulated effect sizes: L2…
A Comparison of Learning Cultures in Different Sizes and Types
ERIC Educational Resources Information Center
Brown, Paula D.; Finch, Kim S.; MacGregor, Cynthia
2012-01-01
This study compared relevant data and information about leadership and learning cultures in different sizes and types of high schools. Research was conducted using a quantitative design with a qualitative element. Quantitative data were gathered using a researcher-created survey. Independent sample t-tests were conducted to analyze the means of…
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
Carta, D; Marras, C; Loche, D; Mountjoy, G; Ahmed, S I; Corrias, A
2013-02-07
The structural properties of zinc ferrite nanoparticles with spinel structure dispersed in a highly porous SiO₂ aerogel matrix were compared with a bulk zinc ferrite sample. In particular, the details of the cation distribution between the octahedral (B) and tetrahedral (A) sites of the spinel structure were determined using X-ray absorption spectroscopy. The analysis of both the X-ray absorption near edge structure and the extended X-ray absorption fine structure indicates that the degree of inversion of the zinc ferrite spinel structures varies with particle size. In particular, in the bulk microcrystalline sample, Zn²⁺ ions are at the tetrahedral sites and trivalent Fe³⁺ ions occupy octahedral sites (normal spinel). When particle size decreases, Zn²⁺ ions are transferred to octahedral sites and the degree of inversion is found to increase as the nanoparticle size decreases. This is the first time that a variation of the degree of inversion with particle size is observed in ferrite nanoparticles grown within an aerogel matrix.
Visual accumulation tube for size analysis of sands
Colby, B.C.; Christensen, R.P.
1956-01-01
The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.
Methodological quality of behavioural weight loss studies: a systematic review
Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.
2018-01-01
Summary This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean per study = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or had minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature on behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775
Quantifying the size-resolved dynamics of indoor bioaerosol transport and control.
Kunkel, S A; Azimi, P; Zhao, H; Stark, B C; Stephens, B
2017-09-01
Understanding the bioaerosol dynamics of droplets and droplet nuclei emitted during respiratory activities is important for understanding how infectious diseases are transmitted and potentially controlled. To this end, we conducted experiments to quantify the size-resolved dynamics of indoor bioaerosol transport and control in an unoccupied apartment unit operating under four different HVAC particle filtration conditions. Two model organisms (Escherichia coli K12 and bacteriophage T4) were aerosolized under alternating low and high flow rates to roughly represent constant breathing and periodic coughing. Size-resolved aerosol sampling and settle plate swabbing were conducted in multiple locations. Samples were analyzed by DNA extraction and quantitative polymerase chain reaction (qPCR). DNA from both organisms was detected during all test conditions in all air samples up to 7 m away from the source, but decreased in magnitude with the distance from the source. A greater fraction of T4 DNA was recovered from the aerosol size fractions smaller than 1 μm than E. coli K12 at all air sampling locations. Higher efficiency HVAC filtration also reduced the amount of DNA recovered in air samples and on settle plates located 3-7 m from the source. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.
2015-01-01
Whilst next generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
Autonomous bed-sediment imaging-systems for revealing temporal variability of grain size
Buscombe, Daniel; Rubin, David M.; Lacy, Jessica R.; Storlazzi, Curt D.; Hatcher, Gerald; Chezar, Henry; Wyland, Robert; Sherwood, Christopher R.
2014-01-01
We describe a remotely operated video microscope system, designed to provide high-resolution images of seabed sediments. Two versions were developed, which differ in how they raise the camera from the seabed. The first used hydraulics and the second used the energy associated with wave orbital motion. Images were analyzed using automated frequency-domain methods, which, following a rigorous partially supervised quality-control procedure, yielded estimates to within 20% of the true size as determined by on-screen manual measurements of grains. Long-term grain-size variability at a sandy inner shelf site offshore of Santa Cruz, California, USA, was investigated using the hydraulic system. Eighteen months of high-frequency (min to h), high-resolution (μm) images were collected, and grain size distributions compiled. The data constitute the longest known high-frequency record of seabed grain size at this sample frequency, at any location. Short-term grain-size variability of sand in an energetic surf zone at Praa Sands, Cornwall, UK was investigated using the 'wave-powered' system. The data are the first high-frequency record of grain size at a single location of a highly mobile and evolving bed in a natural surf zone. Using this technology, it is now possible to measure bed-sediment grain size at a time-scale comparable with flow conditions. Results suggest models of sediment transport at sandy, wave-dominated, nearshore locations should allow for substantial changes in grain-size distribution over time-scales as short as a few hours.
Kim, Yong Ho; Krantz, Q Todd; McGee, John; Kovalcik, Kasey D; Duvall, Rachelle M; Willis, Robert D; Kamal, Ali S; Landis, Matthew S; Norris, Gary A; Gilmour, M Ian
2016-11-01
The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ∼2, ∼7, and ∼3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO₃⁻ levels in winter and high SO₄²⁻ levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., a steel plant and a coal-fired power plant), but their contributions were influenced by meteorological conditions and the emission sources' operating conditions. Taken together, the year-long PM collection and data analysis provide valuable insights into the characteristics and sources of PM impacting the Cleveland airshed in both the urban center and the rural upwind background location. These data will be used to classify the PM samples for toxicology studies to determine which PM sources, species, and size fractions are of greatest health concern. Copyright © 2016 Elsevier Ltd. All rights reserved.
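Positive matrix factorization is closely related to non-negative matrix factorization with uncertainty weighting. As a rough illustration only (scikit-learn's unweighted NMF stands in for EPA PMF, and all data are synthetic), the decomposition of a speciated PM matrix into source profiles and contributions might look like:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic speciated PM data: 120 samples x 15 chemical species,
# mixed from 3 hypothetical source profiles
profiles = rng.gamma(2.0, 1.0, size=(3, 15))   # source fingerprints
contribs = rng.gamma(1.5, 1.0, size=(120, 3))  # time-varying source strengths
X = contribs @ profiles + rng.normal(0, 0.05, (120, 15)).clip(0)

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)  # estimated source contributions per sample
H = model.components_       # estimated source profiles
print("reconstruction error:", round(model.reconstruction_err_, 3))
```

EPA PMF additionally weights each entry by its measurement uncertainty and enforces rotational constraints, which this sketch omits.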
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
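The two-sample reduction described for logistic regression can be turned into a short power calculation. The sketch below places the two groups symmetrically around the overall logit, which only approximately preserves the expected number of events (the paper's exact calibration is not reproduced); all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def power_logistic(beta, sd_x, p_overall, n, alpha=0.05):
    """Approximate power for testing slope beta in logistic regression
    via the equivalent two-sample problem: two equal groups whose
    log-odds differ by beta * 2 * sd_x."""
    delta = beta * 2 * sd_x
    lp = np.log(p_overall / (1 - p_overall))      # overall logit
    p1 = 1 / (1 + np.exp(-(lp - delta / 2)))      # group probabilities placed
    p2 = 1 / (1 + np.exp(-(lp + delta / 2)))      # symmetrically on logit scale
    se = np.sqrt(p1 * (1 - p1) / (n / 2) + p2 * (1 - p2) / (n / 2))
    z = abs(p2 - p1) / se - norm.ppf(1 - alpha / 2)
    return norm.cdf(z)

print(f"power ≈ {power_logistic(beta=0.35, sd_x=1.0, p_overall=0.3, n=200):.2f}")
```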
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
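Power for Yuen's test under heteroscedastic, skewed data is straightforward to validate by simulation, much as the authors do. A minimal sketch using SciPy's trimmed t-test (available in scipy >= 1.7), with assumed gamma-distributed errors and illustrative allocation ratios:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n1, n2, delta, sd1=1.0, sd2=2.0,
                    trim=0.2, alpha=0.05, reps=4000):
    """Empirical power of Yuen's trimmed-mean test for unequal
    variances and a location shift delta, using skewed errors."""
    hits = 0
    for _ in range(reps):
        a = rng.gamma(2.0, sd1 / np.sqrt(2), n1)          # skewed group 1
        b = rng.gamma(2.0, sd2 / np.sqrt(2), n2) + delta  # shifted group 2
        p = stats.ttest_ind(a, b, trim=trim, equal_var=False).pvalue
        hits += p < alpha
    return hits / reps

# Compare two allocation ratios under the same total sample size
print(simulated_power(40, 40, delta=0.8))
print(simulated_power(30, 50, delta=0.8))
```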
Laurin, Nancy; DeMoors, Anick; Frégeau, Chantal
2012-09-01
Direct amplification of STR loci from biological samples collected on FTA cards without prior DNA purification was evaluated using Identifiler Direct and PowerPlex 16 HS in conjunction with the use of a high throughput Applied Biosystems 3730 DNA Analyzer. In order to reduce the overall sample processing cost, reduced PCR volumes combined with various FTA disk sizes were tested. Optimized STR profiles were obtained using a 0.53 mm disk size in 10 μL PCR volume for both STR systems. These protocols proved effective in generating high quality profiles on the 3730 DNA Analyzer from both blood and buccal FTA samples. Reproducibility, concordance, robustness, sample stability and profile quality were assessed using a collection of blood and buccal samples on FTA cards from volunteer donors as well as from convicted offenders. The new developed protocols offer enhanced throughput capability and cost effectiveness without compromising the robustness and quality of the STR profiles obtained. These results support the use of these protocols for processing convicted offender samples submitted to the National DNA Data Bank of Canada. Similar protocols could be applied to the processing of casework reference samples or in paternity or family relationship testing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Feichtmeier, Nadine S; Leopold, Kerstin
2014-06-01
In this work, we present a fast and simple approach for the detection of silver nanoparticles (AgNPs) in biological material (parsley) by solid sampling high-resolution continuum source atomic absorption spectrometry (HR-CS AAS). A novel evaluation strategy was developed in order to distinguish AgNPs from ionic silver and for sizing of AgNPs. For this purpose, atomisation delay was introduced as a significant indicator of AgNPs, whereas atomisation rates allow distinction of 20-, 60-, and 80-nm AgNPs. Atomisation delays were found to be higher for samples containing silver ions than for samples containing silver nanoparticles. A maximum difference in atomisation delay, normalised by the sample weight, of 6.27 ± 0.96 s mg⁻¹ was obtained after optimisation of the furnace program of the AAS. For this purpose, a multivariate experimental design was used, varying atomisation temperature, atomisation heating rate and pyrolysis temperature. Atomisation rates were calculated as the slope at the first inflection point of the absorbance signals and correlated with the size of the AgNPs in the biological sample. Hence, solid sampling HR-CS AAS proved to be a promising tool for identifying and distinguishing silver nanoparticles from ionic silver directly in solid biological samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Correa, E. L., E-mail: eduardo.correa@usp.br; Bosch-Santos, B.; Cavalcante, F. H. M.
2016-05-15
The magnetic behavior of Gd₂O₃ nanoparticles, produced by the thermal decomposition method and subsequently annealed at different temperatures, was investigated by magnetization measurements and, at an atomic level, by perturbed γ-γ angular correlation (PAC) spectroscopy measuring hyperfine interactions at ¹¹¹In(¹¹¹Cd) probe nuclei. Nanoparticle structure, size and shape were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Magnetization measurements were carried out to characterize the paramagnetic behavior of the samples. XRD results show that all samples crystallize in the cubic-C form of the bixbyite structure with space group Ia3. TEM images showed that samples annealed at 873 K contain particles with highly homogeneous sizes in the range from 5 nm to 10 nm, while those annealed at 1273 K show particles with quite different sizes, from 5 nm to 100 nm, with a wide size distribution. PAC and magnetization results show that samples annealed at 873 and 1273 K are paramagnetic. Magnetization measurements show no indication of blocking temperatures for any sample down to 2 K, and indicate the presence of antiferromagnetic correlations.
Heavy metals in the gold mine soil of the upstream area of a metropolitan drinking water source.
Ding, Huaijian; Ji, Hongbing; Tang, Lei; Zhang, Aixing; Guo, Xinyue; Li, Cai; Gao, Yang; Briki, Mergem
2016-02-01
Pinggu District is adjacent to the county of Miyun, which contains the largest drinking water source of Beijing (Miyun Reservoir). The Wanzhuang gold field and tailing deposits are located in Pinggu, threatening Beijing's drinking water security. In this study, soil samples were collected from the surface of the mining area and the tailings piles and analyzed for physical and chemical properties, as well as heavy metal contents and particle size fraction, to study the relationship between the degree of pollution and particle size. Most metal concentrations in the gold mine soil samples exceeded the background levels in Beijing. The spatial distribution of As, Cd, Cu, Pb, and Zn was the same, while that of Cr and Ni was relatively similar. Trace element concentrations increased in larger particles, decreased in the 50-74 μm size fraction, and were lowest in the <2 μm size fraction. Multivariate analysis showed that Cu, Cd, Zn, and Pb originated from anthropogenic sources, while Cr, Ni, and Sc were of natural origin. The geo-accumulation index indicated serious Pb, As, and Cd pollution, but moderate to no Ni, Cr, and Hg pollution. The Tucker 3 model revealed three factors for particle fractions, metals, and samples. There were two factors in model A and three factors for both the metals and samples (models B and C, respectively). The potential ecological risk index shows that most of the study areas have very high potential ecological risk, a small portion has high potential ecological risk, and only a few sampling points on the perimeter have moderate ecological risk, with higher risk closer to the mining area.
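The geo-accumulation index referred to here is Müller's Igeo = log2(Cn / (1.5 Bn)), where Cn is the measured concentration and Bn the geochemical background. A one-function sketch with hypothetical concentrations (not the study's measurements):

```python
import math

def igeo(c_sample, c_background):
    """Mueller geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn)).
    The factor 1.5 absorbs natural background variability."""
    return math.log2(c_sample / (1.5 * c_background))

# Hypothetical concentrations (mg/kg): measured vs. regional background
for metal, c, b in [("Pb", 480.0, 24.0), ("Cr", 65.0, 60.0)]:
    print(f"{metal}: Igeo = {igeo(c, b):.2f}")
```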
NASA Astrophysics Data System (ADS)
Nelson, Robert M.; Boryta, Mark D.; Hapke, Bruce W.; Manatt, Kenneth S.; Shkuratov, Yuriy; Psarev, V.; Vandervoort, Kurt; Kroner, Desire; Nebedum, Adaze; Vides, Christina L.; Quiñones, John
2018-03-01
We present reflectance and polarization phase curve measurements of highly reflective planetary regolith analogues having physical characteristics expected on atmosphereless solar system bodies (ASSBs) such as eucritic asteroids or icy satellites. We used a goniometric photopolarimeter (GPP) of novel design to study thirteen well-sorted particle size fractions of aluminum oxide (Al2O3). The sample suite included particle sizes larger than, approximately equal to, and smaller than the wavelength of the incident monochromatic radiation (λ = 635 nm). The observed phase angle, α, was 0.056° < α < 15°. These Al2O3 particulate samples have very high normal reflectance (> ∼95%). The incident radiation has a very high probability of being multiply scattered before being backscattered toward the incident direction or ultimately absorbed. The five smallest particle sizes exhibited extremely high void space (> ∼95%). The reflectance phase curves for all particle size fractions show a pronounced non-linear reflectance increase with decreasing phase angle at α < ∼3°. Our earlier studies suggest that the cause of this non-linear reflectance increase is constructive interference of counter-propagating waves in the medium by coherent backscattering (CB), a photonic analog of Anderson localization of electrons in solid state media. The polarization phase curves for particle size fractions with size parameter (particle radius/wavelength) r/λ < ∼1 show that the linear polarization rapidly decreases as α increases from 0°; it reaches a minimum near α = ∼2°. Longward of ∼2°, the negative polarization decreases as phase angle increases, becoming positive between 12° and at least 15° (probably ∼20°), depending on particle size. For size parameters r/λ > ∼1 we detect no polarization. This polarization behavior is distinct from that observed in low albedo solar system objects such as the Moon and asteroids, and for absorbing materials in the laboratory. We suggest this behavior arises because backscattered photons have a high probability of having interacted with two or more particles, thus giving rise to the CB process. These results may explain the unusual negative polarization behavior observed near small phase angles, reported for several decades, on highly reflective ASSBs such as the asteroids 44 Nysa and 64 Angelina and the Galilean satellites Io, Europa and Ganymede. Our results suggest these ASSB regoliths scatter electromagnetic radiation as if they were extremely fine grained, with void space > ∼95% and grain sizes of the order of λ or smaller. This has consequences for efforts to deploy landers on highly reflective ASSBs such as Europa. These results are also germane to the field of terrestrial geo-engineering, particularly to suggestions that Earth's radiation balance can be modified by injecting Al2O3 particulates into the stratosphere, thereby offsetting the effect of anthropogenic greenhouse gas emissions. The GPP used in this study was modified from our previous design so that the sample is presented with light that is alternately polarized perpendicular and parallel to the scattering plane. There are no analyzers before the detector. This optical arrangement, following the Helmholtz Reciprocity Principle (HRP), produces a physically identical result to traditional laboratory reflectance polarization measurements in which the incident light is unpolarized and the analyzers are placed before the detector. The results are identical for samples measured by both methods.
We believe that ours is the first experimental demonstration of the HRP for polarized light, first proposed by Helmholtz in 1856.
Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test
NASA Astrophysics Data System (ADS)
Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke
Polymer insulators have been used worldwide because of superior properties compared with porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz A/D board or a 100 MHz digital oscilloscope and separated into three components, conductive current, corona discharge current and dry-band arc discharge current, using FFT and the newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. The results show that when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and the arc length of the dry-band arc discharge increased remarkably with increasing gap length.
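The FFT-based separation of leakage current components can be illustrated on synthetic data. The sketch below is not the authors' newly proposed current differential method; it merely shows how the conductive fundamental and a harmonic-rich discharge burst separate in the spectrum:

```python
import numpy as np

fs, T = 10_000, 1.0                 # 10 kHz sampling, 1 s record (assumed)
t = np.arange(0, T, 1 / fs)
# Synthetic leakage current: 60 Hz conductive component plus a
# clipped, odd-harmonic-rich burst standing in for dry-band arcing
i_leak = 2e-3 * np.sin(2 * np.pi * 60 * t)
burst = (t > 0.4) & (t < 0.45)
i_leak[burst] += 1e-3 * np.sign(np.sin(2 * np.pi * 60 * t[burst]))

spec = np.fft.rfft(i_leak)
freqs = np.fft.rfftfreq(len(i_leak), 1 / fs)
power = np.abs(spec) ** 2
fundamental = power[(freqs > 55) & (freqs < 65)].sum()  # conductive band
harmonics = power[freqs >= 120].sum()                   # discharge-related band
print(f"fundamental share: {fundamental / power.sum():.2%}, "
      f"harmonic share: {harmonics / power.sum():.2%}")
```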
NASA Astrophysics Data System (ADS)
Park, Ki-Chan; Madavali, Babu; Kim, Eun-Bin; Koo, Kyung-Wan; Hong, Soon-Jik
2017-05-01
p-Type Bi2Te3 + 75% Sb2Te3-based thermoelectric materials were fabricated via gas atomization and the hot extrusion process. The gas-atomized powder showed a clean surface with a spherical shape and a wide particle size distribution (average particle size 50 μm). The phases of the extruded and re-extruded (R-extruded) bars were identified using X-ray diffraction. The relative densities of both the extruded and R-extruded samples, measured by the Archimedes principle, were ∼98%. The R-extruded bar exhibited a finer grain microstructure than that from the single extrusion process, which was attributed to a recrystallization mechanism during fabrication. The R-extruded sample showed improved Vickers hardness compared to the extruded sample due to its fine grain microstructure. The electrical conductivity improved for the extruded sample, whereas the Seebeck coefficient decreased due to its high carrier concentration. The peak power factor, ∼4.26 × 10⁻³ W/mK², was obtained for the single-extrusion sample, which is higher than for the R-extrusion sample owing to its high electrical properties.
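The reported power factor follows from PF = S²σ. A short check with illustrative (assumed) values typical of p-type (Bi,Sb)2Te3, not the measured values of this study:

```python
def power_factor(seebeck_uV_per_K, sigma_S_per_m):
    """Thermoelectric power factor PF = S^2 * sigma, in W/(m*K^2)."""
    S = seebeck_uV_per_K * 1e-6  # convert uV/K -> V/K
    return S ** 2 * sigma_S_per_m

# Assumed values: S ~ 206 uV/K, sigma ~ 1.0e5 S/m
print(f"PF ≈ {power_factor(206, 1.0e5):.2e} W/mK^2")  # ~4.2e-3
```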
NASA Astrophysics Data System (ADS)
Lari, L.; Wright, I.; Boyes, E. D.
2015-10-01
A very simple tomography sample holder was developed in-house at minimal cost. The holder is based on a JEOL single tilt fast exchange sample holder whose exchangeable tip was modified to allow high-angle tilt. The shape of the tip was designed to retain mechanical stability while minimising its lateral size. The sample can be mounted on standard 3 mm Cu grids as well as on semi-circular grids from FIB sample preparation. Applications of the holder to different sample systems are shown.
Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel
2016-10-01
Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets, and sampling error in performance measures estimated from single samples, are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbours (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation error as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and outperforms LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN, in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA. Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
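One cell of such a factorial simulation can be sketched with scikit-learn; the data generation, factor levels and model settings below are illustrative, not the study's design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM-RBF": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

# Vary training sample size at a fixed feature count; a large held-out
# test set approximates the generalisation error
for n_train in (20, 50, 200):
    X, y = make_classification(n_samples=n_train + 2000, n_features=50,
                               n_informative=10, class_sep=0.8, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=n_train, stratify=y, random_state=1)
    errs = {name: 1 - m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
    print(n_train, {k: round(v, 3) for k, v in errs.items()})
```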
Towards well-defined gold nanomaterials via diafiltration and aptamer mediated synthesis
NASA Astrophysics Data System (ADS)
Sweeney, Scott Francis
Gold nanoparticles have garnered recent attention due to their intriguing size- and shape-dependent properties. Routine access to well-defined gold nanoparticle samples in terms of core diameter, shape, peripheral functionality and purity is required in order to carry out fundamental studies of their properties and to utilize these properties in future applications. For this reason, the development of methods for preparing well-defined gold nanoparticle samples remains an area of active research in materials science. In this dissertation, two methods, diafiltration and aptamer-mediated synthesis, are explored as possible routes towards well-defined gold nanoparticle samples. It is shown that diafiltration has considerable potential for the efficient and convenient purification and size separation of water-soluble nanoparticles. The suitability of diafiltration for (i) the purification of water-soluble gold nanoparticles, (ii) the separation of a bimodal distribution of nanoparticles into fractions, (iii) the fractionation of a polydisperse sample and (iv) the isolation of trimers from monomers and aggregates is studied. NMR, thermogravimetric analysis (TGA), and X-ray photoelectron spectroscopy (XPS) measurements demonstrate that diafiltration produces highly pure nanoparticles. UV-visible spectroscopic and transmission electron microscopic analyses show that diafiltration offers the ability to separate nanoparticles of disparate core size, including linked nanoparticles. These results demonstrate the applicability of diafiltration for the rapid and green preparation of high-purity gold nanoparticle samples and the size separation of heterogeneous nanoparticle samples. In the second half of the dissertation, the identification of materials-specific aptamers and their use to synthesize shaped gold nanoparticles is explored. The use of in vitro selection for identifying materials-specific peptide and oligonucleotide aptamers is reviewed, outlining the specific requirements of in vitro selection for materials and the ways in which the field can be advanced. A promising new technique, in vitro selection on surfaces (ISOS), is developed, and the discovery using ISOS of RNA aptamers that bind to evaporated gold is discussed. Analysis of the isolated gold-binding RNA aptamers indicates that they are highly structured, with single-stranded polyadenosine binding motifs. These aptamers, and similarly isolated peptide aptamers, are briefly explored for their ability to synthesize gold nanoparticles. This dissertation contains both previously published and unpublished co-authored material.
Temporal dynamics of linkage disequilibrium in two populations of bighorn sheep
Miller, Joshua M; Poissant, Jocelyn; Malenfant, René M; Hogg, John T; Coltman, David W
2015-01-01
Linkage disequilibrium (LD) is the nonrandom association of alleles at two markers. Patterns of LD have biological implications as well as practical ones when designing association studies or conservation programs aimed at identifying the genetic basis of fitness differences within and among populations. However, the temporal dynamics of LD in wild populations have received little empirical attention. In this study, we examined the overall extent of LD, the effect of sample size on the accuracy and precision of LD estimates, and the temporal dynamics of LD in two populations of bighorn sheep (Ovis canadensis) with different demographic histories. Using over 200 microsatellite loci, we assessed two metrics of multi-allelic LD, D′ and χ′². We found that both populations exhibited high levels of LD, although the extent was much shorter in a native population than in one that was founded via translocation, experienced a prolonged bottleneck post-founding, and then underwent recent admixture. In addition, we observed significant variation in LD in relation to the sample size used, with small sample sizes leading to depressed estimates of the extent of LD but inflated estimates of background levels of LD. In contrast, there was not much variation in LD among yearly cross-sections within either population once sample size was accounted for. The lack of pronounced interannual variability suggests that researchers may not have to worry about interannual variation when estimating LD in a population and can instead focus on obtaining the largest sample size possible. PMID:26380673
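For the biallelic case (the study used multi-allelic analogues of these metrics for microsatellites), D′ can be computed directly from haplotype and allele frequencies. A minimal sketch with hypothetical frequencies:

```python
def d_prime(p_ab, p_a, p_b):
    """Normalized LD (D') for two biallelic loci from the haplotype
    frequency p_ab and allele frequencies p_a, p_b."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Hypothetical data: A-B haplotype frequency 0.30, allele freqs 0.40 and 0.50
print(f"D' = {d_prime(0.30, 0.40, 0.50):.2f}")  # D = 0.10, Dmax = 0.20 -> 0.50
```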
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depth per sample for accurate estimation of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies, across a range of sampling efforts, sequencing depths, and taxonomic resolutions, to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depth on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities, and deeper sequencing did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889
NASA Astrophysics Data System (ADS)
Austin, N. J.; Evans, B.; Dresen, G. H.; Rybacki, E.
2009-12-01
Deformed rocks commonly consist of several mineral phases, each with dramatically different mechanical properties. In both naturally and experimentally deformed rocks, deformation mechanisms and, in turn, strength are commonly investigated by analyzing microstructural elements such as crystallographic preferred orientation (CPO) and recrystallized grain size. Here, we investigated the effect of variations in the volume fraction and the geometry of rigid second phases on the strength and evolution of CPO and grain size of synthetic calcite rocks. Experiments using triaxial compression and torsional loading were conducted at 1023 K and equivalent strain rates between ∼2 × 10⁻⁶ and 1 × 10⁻³ s⁻¹. The second phases in these synthetic assemblages are rigid carbon spheres or splinters with known particle size distributions and geometries, which are chemically inert at our experimental conditions. Under hydrostatic conditions, the addition of as little as 1 vol.% carbon spheres poisons normal grain growth. Shape is also important: for an equivalent volume fraction and grain dimension, carbon splinters result in a finer calcite grain size than carbon spheres. In samples deformed at "high" strain rates, or which have "large" mean free spacing of the pinning phase, the final recrystallized grain size is well explained by competing grain growth and grain size reduction processes, where the grain-size reduction rate is determined by the rate at which mechanical work is done during deformation. In these samples, the final grain size is finer than in samples heat-treated hydrostatically for equivalent durations. The addition of 1 vol.% spheres to calcite has little effect on either the strength or CPO development. Adding 10 vol.% splinters increases the strength at low strains and low strain rates, but has little effect on the strength at high strains and/or high strain rates, compared to pure samples. A CPO similar to that in pure samples is observed, although the intensity is reduced in samples containing 10 vol.% splinters. When 10 vol.% spheres are added to calcite, the strength of the aggregate is reduced, and a distinct and strong CPO develops. Viscoplastic self-consistent calculations were used to model the evolution of CPO in these materials, and these suggest a variation in the activity of the various slip systems between pure samples and those containing 10 vol.% spheres. The applicability of these laboratory observations has been tested with field-based observations made in the Morcles Nappe (Swiss Helvetic Alps). In the Morcles Nappe, calcite grain size becomes progressively finer as the thrust contact is approached, and there is a concomitant increase in CPO intensity, with the strongest CPOs in the finest-grained, quartz-rich limestones nearest the thrust contact, which are interpreted to have been deformed to the highest strains. Thus, our laboratory results may be used to provide insight into the distribution of strain observed in natural shear zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellari, Michele
2013-11-20
The distribution of galaxies on the mass-size plane as a function of redshift or environment is a powerful test for galaxy formation models. Here we use integral-field stellar kinematics to interpret the variation of the mass-size distribution in two galaxy samples spanning extreme environmental densities. The samples are both identically and nearly mass-selected (stellar mass M* ≳ 6 × 10⁹ M☉) and volume-limited. The first consists of nearby field galaxies from the ATLAS3D parent sample. The second consists of galaxies in the Coma Cluster (Abell 1656), one of the densest environments for which good, resolved spectroscopy can be obtained. The mass-size distribution in the dense environment differs from the field one in two ways: (1) spiral galaxies are replaced by bulge-dominated disk-like fast-rotator early-type galaxies (ETGs), which follow the same mass-size relation and have the same mass distribution as in the field sample; (2) the slow-rotator ETGs are segregated in mass from the fast rotators, with their size increasing proportionally to their mass. A transition between the two processes appears around the stellar mass M_crit ≈ 2 × 10¹¹ M☉. We interpret this as evidence for bulge growth (outside-in evolution) and bulge-related environmental quenching dominating at low masses, with little influence from merging. In contrast, significant dry mergers (inside-out evolution) and halo-related quenching drive the mass and size growth at the high-mass end. The existence of these two processes naturally explains the diverse size evolution of galaxies of different masses and the separability of mass and environmental quenching.
Liu, Shuxin; Wang, Haibin; Yin, Hengbo; Wang, Hong; He, Jichuan
2014-03-01
Carbon-coated LiFePO4 (LiFePO4/C) nanocomposite materials were successfully synthesized by a sol-gel method. The microstructure and morphology of the LiFePO4/C nanocomposites were characterized by X-ray diffraction, Raman spectroscopy, and scanning electron microscopy. The results showed that carbon layers derived from different dispersants and carbon sources had different degrees of graphitization, and that sugar decomposed to form carbon with a more graphite-like structure. The carbon source and heat-treatment temperature had some effect on particle size and morphology: the sample LFP-S700, synthesized with sugar as the carbon source at 700 °C, had a smaller particle size, a uniform size distribution, and a spherical shape. The electrochemical behavior of the LiFePO4/C nanocomposites was analyzed using galvanostatic measurements and cyclic voltammetry (CV). The results showed that the sample LFP-S700 had higher discharge specific capacities, a higher apparent lithium-ion diffusion coefficient, and a lower charge-transfer resistance. The excellent electrochemical performance of sample LFP-S700 could be attributed to its high degree of carbon graphitization, smaller particle size, and uniform size distribution.
SIZE, STRUCTURE AND FUNCTIONALITY IN SHALLOW COVE COMMUNITIES IN RI
We are using an ecosystem approach to examine the ecological integrity and important habitats in small estuarine coves. We sampled the small undeveloped Coggeshall Cove during the summer of 1999. The cove was sampled at high tide at every 15 cm of substrate elevation along trans...
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty into survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations, approximating sampling of endangered freshwater mussels, to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV than for SRS, and efficiency was as high as 2 (i.e., the variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.
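The two performance measures used in this evaluation are straightforward to compute once replicate estimates from simulated surveys are in hand. The harness below is a design-agnostic sketch of my own; the replicate estimates are random stand-ins, not output of an actual TSSAV simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency(est_design, est_srs):
    """Relative efficiency of a design vs. SRS: Var(SRS) / Var(design).
    Values above 1 mean the design is more precise than SRS."""
    return np.var(est_srs, ddof=1) / np.var(est_design, ddof=1)

def encounter_odds_ratio(hit_design, hit_srs):
    """Odds ratio of including at least one unit occupied by the rare
    species, given indicator arrays from replicate simulated surveys."""
    p1, p2 = np.mean(hit_design), np.mean(hit_srs)
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Stand-in replicate outputs from a survey simulation:
est_tssav = rng.normal(100, 7, 1000)   # hypothetical TSSAV estimates
est_srs = rng.normal(100, 10, 1000)    # hypothetical SRS estimates
print("efficiency ≈", round(efficiency(est_tssav, est_srs), 2))
print("odds ratio ≈", round(encounter_odds_ratio(
    rng.random(1000) < 0.6, rng.random(1000) < 0.5), 2))
```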
A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water
Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo
2013-01-01
Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed at the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on the pressure of CD4 allows an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high-reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned among the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.
Physicochemical properties of respirable-size lunar dust
NASA Astrophysics Data System (ADS)
McKay, D. S.; Cooper, B. L.; Taylor, L. A.; James, J. T.; Thomas-Keprta, K.; Pieters, C. M.; Wentworth, S. J.; Wallace, W. T.; Lee, T. S.
2015-02-01
We separated the respirable dust and other size fractions from Apollo 14 bulk sample 14003,96 in a dry nitrogen environment. While our toxicology team performed in vivo and in vitro experiments with the respirable fraction, we studied the size distribution and shape, chemistry, mineralogy, spectroscopy, iron content and magnetic resonance of various size fractions. These represent the finest-grained lunar samples ever measured for either FMR np-Fe⁰ index or precise bulk chemistry, and are the first instance we know of in which SEM/TEM samples have been obtained without using liquids. The concentration of single-domain, nanophase metallic iron (np-Fe⁰) increases as particle size diminishes to 2 μm, confirming previous extrapolations. Size-distribution studies disclosed that the most frequent particle size was in the 0.1-0.2 μm range, suggesting a relatively high surface area and therefore higher potential toxicity. Lunar dust particles are insoluble in isopropanol but slightly soluble in distilled water (~0.2 wt%/3 days). The interaction between water and lunar fines, which results in both agglomeration and partial dissolution, is observable on a macro scale over time periods of less than an hour. Most of the respirable grains were smooth amorphous glass. This suggests less toxicity than if the grains were irregular, porous, or jagged, and may account for the fact that lunar dust is less toxic than ground quartz.
Hydrochemical responses among nested catchments of the Sleepers River Research Watershed.
NASA Astrophysics Data System (ADS)
Sebestyen, S. D.; Boyer, E. W.; Shanley, J. B.; Kendall, C.
2005-12-01
We are probing chemical and isotopic tracers of dissolved organic carbon (DOC) and nitrate over both space and time to determine how stream nutrient dynamics change with increasing basin size and differ with flow conditions. At the Sleepers River Research Watershed in northeastern Vermont, USA, 20 to 30 nested sub-basins ranging in size from 3 to 11,000 ha were sampled repeatedly under baseflow conditions. These synoptic surveys showed a pattern of heterogeneity in headwaters that converged to a consistent response at larger basin sizes, which is consistent with the findings of other studies. In addition to characterizing spatial patterns under baseflow, we sampled rainfall and snowmelt events over a gradient of basin sizes to investigate scaling responses under different flow conditions. During high-flow events, DOC and nitrate flushing responses varied among the basins where high-frequency event samples were collected. While the DOC and nitrate concentration patterns were similar at four headwater basins, the concentration responses of larger basins were markedly different in that the concentration patterns, flushing duration, and maximum concentrations were attenuated from the headwaters to the largest basin. We are using these data to explore how flow paths and solute mixing aggregate. Overall, these results highlight the complexities of understanding spatial scaling issues in catchments and underscore the need to consider event responses of hydrology and chemistry among catchments.
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions
NASA Astrophysics Data System (ADS)
Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir
2013-08-01
A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, and thereby the average scatterer particle size using standard Mie scattering calculations is extracted. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.
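The core of such a method is fitting a reflectance-versus-angle curve with only the real and imaginary parts of the refractive index as free parameters. The sketch below fits plain unpolarized Fresnel reflectance with scipy; the prism index, angular range, and noise level are illustrative assumptions of mine, and the authors' full model of a divergent beam is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

N0 = 1.515  # index of the incidence medium (assumed: glass prism)

def reflectance(theta, n_real, kappa):
    """Unpolarized Fresnel reflectance vs. incidence angle (rad) at an
    interface with a sample of complex index n = n_real + i*kappa."""
    n = n_real + 1j * kappa
    cos_i = np.cos(theta)
    cos_t = np.sqrt(1 - (N0 * np.sin(theta) / n) ** 2 + 0j)
    r_s = (N0 * cos_i - n * cos_t) / (N0 * cos_i + n * cos_t)
    r_p = (n * cos_i - N0 * cos_t) / (n * cos_i + N0 * cos_t)
    return (np.abs(r_s) ** 2 + np.abs(r_p) ** 2) / 2

# Synthetic "measurement" for an intralipid-like sample (n ≈ 1.36):
theta = np.radians(np.linspace(55, 75, 200))
rng = np.random.default_rng(2)
data = reflectance(theta, 1.36, 0.004) + rng.normal(0, 1e-3, theta.size)

# Only two fitting parameters, as in the paper's approach:
(n_fit, k_fit), _ = curve_fit(reflectance, theta, data, p0=(1.35, 0.01))
print(f"fitted n = {n_fit:.4f}, k = {k_fit:.5f}")
```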
Magnetic Force Microscopy Investigation of Magnetic Domains in Nd2Fe14B
NASA Astrophysics Data System (ADS)
Talari, Mahesh Kumar; Markandeyulu, G.; Rao, K. Prasad
2010-07-01
Remanence and coercivity in Nd2Fe14B materials are strongly dependent on microstructural aspects such as phase morphology and grain size. The coercivity (Hc) of a magnetic material varies inversely with the grain size (D), and there is a critical size below which Hc ∝ D⁶. Domain-wall pinning by grain boundaries and foreign phases is the important mechanism in explaining the improvement in coercivity and remanence. The Nd2Fe14B intermetallic compound with stoichiometric composition was prepared from pure elements (Nd 99.5%, Fe 99.95%, B 99.99%) by arc melting in an argon atmosphere. Magnetic force microscopy (MFM) gives high-resolution magnetic domain structural information on ferromagnetic samples. A DI-3100 Scanning Probe Microscope with MESP probes was used for MFM characterization of the samples. Magnetic domains observed in the cast ingots were very long (up to 40 μm) and approximately 1-5 μm wide, owing to the high anisotropy of the compound. The magnetic domains displayed different image contrast and morphologies at different locations in the samples. The domain morphologies and image contrast obtained in this analysis are explained in this paper.
Lucas-González, Raquel; Viuda-Martos, Manuel; Pérez-Álvarez, José Ángel; Fernández-López, Juana
2017-03-01
The aim of the work was to study the influence of particle size on the composition and the physicochemical, techno-functional and physio-functional properties of two flours obtained from persimmon (Diospyros kaki Thunb. cvs. 'Rojo Brillante' (RBF) and 'Triumph' (THF)) coproducts. The cultivar (RBF and THF) and the particle size significantly affected all parameters under study, although depending on the evaluated property only one of these effects predominated. Carbohydrates (38.07-46.98 g/100 g) and total dietary fiber (32.07-43.57 g/100 g) were the main components in both flours (RBF and THF). Furthermore, insoluble dietary fiber represented more than 68% of the total dietary fiber content. All color properties studied were influenced by cultivar and particle size. For both cultivars, the lower the particle size, the higher the lightness and hue values. RBF flours showed high values for emulsifying activity (69.33-74.00 mL/mL), while THF presented high values for water holding capacity (WHC: 9.47-12.19 g water/g sample). The bile holding capacity (BHC) and fat/oil binding values were, in general, higher in RBF (19.61-12.19 g bile/g sample and 11.98-9.07, respectively) than in THF (16.12-12.40 g bile/g sample and 9.78-7.96, respectively). The effect of particle size was especially evident in both WHC and BHC. Due to their dietary fiber content and techno-functional and physio-functional properties, persimmon flours appear to have a good profile for use as a potential functional ingredient.
Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon
2016-01-01
Introduction Crowdsourcing has become an increasingly important tool to address many problems, from government elections in democracies and stock market prices to modern online tools such as TripAdvisor or the Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as its major component, which it uses to generate, assess and prioritize between many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among the 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at a sample size of 45 experts (median rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In the exercise, we found that a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874
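The resampling scheme is easy to reproduce in outline: draw scorer subsets with replacement, re-rank the ideas, and track top-20 concordance and rank correlation. The sketch below uses random stand-in scores (not CHNRI data), so only the mechanics, not the published numbers, are reproduced.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_scorers, n_ideas = 91, 205
scores = rng.random((n_scorers, n_ideas))   # stand-in for CHNRI scores

full_mean = scores.mean(axis=0)
top20_full = set(np.argsort(-full_mean)[:20])

for n in (15, 45, 55, 90):
    overlaps, rhos = [], []
    for _ in range(500):  # resample scorers with replacement
        sub_mean = scores[rng.integers(0, n_scorers, n)].mean(axis=0)
        overlaps.append(len(top20_full & set(np.argsort(-sub_mean)[:20])))
        rhos.append(spearmanr(full_mean, sub_mean)[0])
    print(f"n = {n:2d}: median top-20 overlap = {np.median(overlaps):.0f}, "
          f"median rank correlation = {np.median(rhos):.2f}")
```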
Environmental DNA particle size distribution from Brook Trout (Salvelinus fontinalis)
Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Winsor H. Lowe; Michael K. Schwartz
2015-01-01
Environmental DNA (eDNA) sampling has become a widespread approach for detecting aquatic animals with high potential for improving conservation biology. However, little research has been done to determine the size of particles targeted by eDNA surveys. In this study, we conduct particle distribution analysis of eDNA from a captive Brook Trout (Salvelinus fontinalis) in...
Effects of plot size on forest-type algorithm accuracy
James A. Westfall
2009-01-01
The Forest Inventory and Analysis (FIA) program utilizes an algorithm to consistently determine the forest type for forested conditions on sample plots. Forest type is determined from tree size and species information. Thus, the accuracy of results is often dependent on the number of trees present, which is highly correlated with plot area. This research examines the...
ERIC Educational Resources Information Center
Ahmad Salfi, Naseer; Saeed, Muhammad
2007-01-01
Purpose: This paper seeks to determine the relationship among school size, school culture and students' achievement at secondary level in Pakistan. Design/methodology/approach: The study was descriptive (survey type). It was conducted on a sample of 90 secondary school head teachers and 540 primary, elementary and high school teachers working in…
Graphite Black shale of Vendas de Ceira, Coimbra, Portugal
NASA Astrophysics Data System (ADS)
Quinta-Ferreira, Mário; Silva, Daniela; Coelho, Nuno; Gomes, Ruben; Santos, Ana; Piedade, Aldina
2017-04-01
The graphite black shale of Vendas de Ceira, located south of Coimbra (Portugal), caused serious instability problems in recent road excavation slopes. The problems increased with rain, transforming the shales into a dark mud that acquires a metallic hue when dried. The black shales are attributed to the Devonian or, possibly, the Silurian. Graphite black shale is observed at the base of the slope and brown schist at the top. Samples were collected during the slope excavation works, selecting undisturbed and less altered materials. Sampling was made difficult because the graphite shale was covered by a thick layer of reinforced concrete used to stabilize the excavated surfaces. The mineralogy consists mainly of quartz, muscovite, illite, ilmenite and feldspar, without the presence of expansive minerals. The organic matter content is 0.3 to 0.4%. The durability, evaluated by the slake durability test, varies from very low (Id2 of 6% for sample A) to high (98% for sample C). The grain size distribution of the shale particles, determined after disaggregation with water, showed that sample A has 37% fines (5% clay and 32% silt) and 63% sand, while sample C has only 14% fines (2% clay and 12% silt) and 86% sand, indicating that a decrease in particle size contributes to reduced durability. The unconfined linear expansion confirms the higher expandability of sample A (13.4%), reducing to 12.1% for sample B and 10.5% for sample C. Because the shale material degrades in water, mercury porosimetry was used. While the dry unit weight of the three samples does not change significantly, around 26 kN/m3, the porosity is much higher in sample A, with 7.9% of pores, reducing to 1.4% in sample C. The pore sizes vary between 0.06 and 0.26 microns and do not seem to have any significant influence on the shale behaviour. For comparison, a porosity test was carried out on the low-weatherability brown shale, which is quite abundant at the site. The main differences from the graphite shale are the high porosity of the brown shale, 14.7%, and its low unit weight of 23 kN/m3, evidencing the distinct characteristics of the graphite schists. The maximum strength was estimated with the Schmidt hammer, as the point load test could not be performed because the rock was very soft. The maximum estimated values on dry samples were 32 MPa for sample A and 85 MPa for sample C. The results show a singular material characterized by significant heterogeneity. It can be concluded that, for the graphite schists, the smaller particle size and higher porosity make the soft rock extremely weatherable when decompressed and exposed to water, as a result of high capillary tension and reduced cohesion. They also exhibit high expansion and enormous degradation, presenting a behaviour close to that of a soil. The graphite black schist is thus a highly weatherable soft rock, without expansive minerals and with small pores, in which the porosity, low strength and low cohesion allow its rapid degradation when decompressed and exposed to the action of water.
Optical and size characterization of dissolved organic matter from the lower Yukon River
NASA Astrophysics Data System (ADS)
Guo, L.; Lin, H.
2017-12-01
The Arctic rivers have experienced significant climate and environmental changes over the last several decades, and their export fluxes and the environmental fate of dissolved organic matter (DOM) have received considerable attention. Monthly or bimonthly water samples were collected from the Yukon River, one of the Arctic rivers, between July 2004 and September 2005 for size fractionation to isolate low-molecular-weight (LMW, <1 kDa) and high-molecular-weight (HMW, >1 kDa) DOM. The freeze-dried HMW-DOM was then characterized for optical properties using fluorescence spectroscopy and for colloidal size spectra using asymmetrical flow field-flow fractionation techniques. Ratios of the biological index (BIX) to the humification index (HIX) show a seasonal change, with lower values in river open seasons and higher values under the ice, and the influence of river discharge. Three major fluorescent DOM components were identified, including two humic-like components (Ex/Em at 260/480 nm and 250/420 nm, respectively) and one protein-like component (Ex/Em = 250/330 nm). The ratio of protein-like to humic-like components was broadly correlated with discharge, with low values during the spring freshet and high values under the ice. The relatively high protein-like/humic-like ratio during the ice-covered season suggests sources from macro-organisms and/or ice algae. Both protein-like and humic-like colloidal fluorophores were partitioned mostly into the 1-5 kDa size fraction, although the protein-like fluorophores in some samples also occurred at larger colloidal sizes. The relationship between the chemical/biological reactivity and the size/optical characteristics of DOM needs to be further investigated.
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu
In this paper, an adaptive contact-mode imaging approach is proposed to replace traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rates of 75 Hz and 130 Hz) and large sizes (e.g., scan sizes of 30 μm and 80 μm). The experimental results show that compared to traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty in the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of design, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
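The effect-size uncertainty motivating this work is visible even in the simplest fixed design: a sample size chosen for one effect size loses power quickly at smaller effects. Below is a minimal normal-approximation sketch (two-sample z-test, illustrative effect sizes; not the authors' optimality criterion).

```python
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

def n_per_arm(delta):
    """Per-arm sample size for a two-sample z-test at standardized
    effect size delta (normal approximation)."""
    return int(np.ceil(2 * ((z_a + z_b) / delta) ** 2))

def achieved_power(n, delta):
    """Power of a fixed design with n per arm at true effect delta."""
    return norm.cdf(delta * np.sqrt(n / 2) - z_a)

n = n_per_arm(0.4)  # design sized for delta = 0.4
for d in (0.3, 0.4, 0.5):
    print(f"n/arm = {n}, delta = {d}: power = {achieved_power(n, d):.2f}")
```

At the planned effect the power is 0.80 by construction, but it drops to roughly 0.56 if the true effect is 0.3, which is the gap that adaptive and optimal designs try to close.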
Body Size Correlates with Fertilization Success but not Gonad Size in Grass Goby Territorial Males
Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta
2012-01-01
In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003–2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995–1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males. PMID:23056415
NASA Astrophysics Data System (ADS)
Weinkauf, Manuel F. G.; Milker, Yvonne
2018-05-01
Benthic Foraminifera assemblages are employed for past environmental reconstructions, as well as for biomonitoring studies in recent environments. Despite their established status for such applications, and existing protocols for sample treatment, not all studies using benthic Foraminifera employ the same methodology. For instance, there is no broad practical consensus on whether to use the >125 µm or the >150 µm size fraction for benthic foraminiferal assemblage analyses. Here, we use early Pleistocene material from the Pefka E section on the Island of Rhodes (Greece), which has been counted in both size fractions, to investigate whether a 25 µm difference in the counted fraction is sufficient to have an impact on ecological studies. We analysed the influence of the difference in size fraction on studies of biodiversity as well as on multivariate assemblage analyses of the sample material. We found that for both types of studies the general trends remain the same regardless of the chosen size fraction, but in detail significant differences emerge that are not consistently distributed between samples. Studies that require a high degree of precision can thus not compare results from analyses that used different size fractions, and the inconsistent distribution of differences makes it impossible to develop corrections for this issue. We therefore advocate the consistent use of the >125 µm size fraction for benthic foraminiferal studies in the future.
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
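The failure mode is easy to demonstrate by simulation. Since the absolute-value Cauchy used in the paper has no finite mean, the sketch below substitutes a heavily skewed lognormal population (my choice, so a true mean exists) and checks the empirical coverage of the nominal 95% t-interval, which falls short of 95% even at moderate n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def t_interval_coverage(n, n_rep=5000, sigma=1.5):
    """Empirical coverage of the nominal 95% t-interval for the mean of
    a heavily right-skewed lognormal(0, sigma) population."""
    true_mean = np.exp(sigma ** 2 / 2)
    crit = stats.t.ppf(0.975, n - 1)
    hits = 0
    for _ in range(n_rep):
        x = rng.lognormal(0.0, sigma, n)
        half = crit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / n_rep

for n in (20, 100, 500):
    print(f"n = {n:3d}: coverage = {t_interval_coverage(n):.3f}")
```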
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who cannot obtain large enough effect sizes tend to use larger samples to obtain significant results.
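A sample statistical power of the kind computed in this study can be approximated by treating the sample effect size as the population value. A minimal sketch for a two-sample comparison follows (normal approximation; the example values are illustrative, not taken from the journal data).

```python
from math import sqrt
from scipy.stats import norm

def posthoc_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison, treating the
    sample effect size d (Cohen's d) as the population value."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * sqrt(n_per_group / 2) - z_a)

# Large effect / small sample vs. small effect / large sample:
print(f"{posthoc_power(0.8, 20):.2f}")   # ≈ 0.72
print(f"{posthoc_power(0.2, 200):.2f}")  # ≈ 0.52
```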
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain chemical information about the samples as well as physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical information. One common approach is to include the physical information variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we add the particle size directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee-type determination. The results show that, using the explicit method, the quality of the developed calibration model for coffee-type determination is slightly superior, with a coefficient of determination (R2) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee-type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
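Structurally, the explicit method amounts to a PLS2 model whose Y block carries both the class label and the particle size. Below is a minimal sklearn sketch with random stand-in spectra, so the printed accuracy is meaningless; only the block structure mirrors the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n, p = 220, 256
X = rng.normal(size=(n, p))              # stand-in for NIR spectra
coffee = rng.integers(0, 2, n)           # 0 = non-civet, 1 = civet
size_um = rng.choice([212.0, 500.0], n)  # particle size in µm
Y = np.column_stack([coffee, size_um])   # explicit inclusion in Y

pls = PLSRegression(n_components=10)
Y_hat = cross_val_predict(pls, X, Y, cv=10)
pred = (Y_hat[:, 0] > 0.5).astype(int)   # threshold the class column
print("coffee-type accuracy:", (pred == coffee).mean())
```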
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors of less than 0.6% in the detectable accuracy difference. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
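The flavor of such a power calculation can be illustrated with a generic paired-difference formula plus a Monte Carlo check; the numbers below (2% accuracy difference, 5% SD of per-subject differences) are placeholders of mine, and the authors' derivation additionally accounts for reference standard quality.

```python
import numpy as np
from scipy.stats import norm

def n_subjects(diff, sigma, alpha=0.05, power=0.80):
    """Subjects needed to detect a mean per-subject accuracy difference
    `diff` when paired differences have standard deviation `sigma`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil((z * sigma / diff) ** 2))

n = n_subjects(0.02, 0.05)   # 2% accuracy difference, 5% SD
print("n =", n)              # ≈ 50 subjects

# Monte Carlo check of the achieved power (normal approximation):
rng = np.random.default_rng(6)
reject = 0
for _ in range(5000):
    d = rng.normal(0.02, 0.05, n)
    reject += abs(d.mean() / (d.std(ddof=1) / np.sqrt(n))) > norm.ppf(0.975)
print("power ≈", reject / 5000)
```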
Characterization of the enhancement effect of Na2CO3 on the sulfur capture capacity of limestones.
Laursen, Karin; Kern, Arnt A; Grace, John R; Lim, C Jim
2003-08-15
It has been known for a long time that certain additives (e.g., NaCl, CaCl2, Na2CO3, Fe2O3) can increase the sulfur dioxide capture capacity of limestones. In a recent study we demonstrated that very small amounts of Na2CO3 can be very beneficial for producing sorbents of very high sorption capacities. This paper explores what contributes to these significant increases. Mercury porosimetry measurements of calcined limestone samples reveal a change in the pore size from 0.04-0.2 μm in untreated samples to 2-10 μm in samples treated with Na2CO3, a pore size more favorable for penetration of sulfur into the particles. The change in pore size facilitates reaction with lime grains throughout the whole particle without rapid plugging of pores, avoiding a premature change from a fast chemical reaction to a slow solid-state diffusion-controlled process, as seen for untreated samples. Calcination in a thermogravimetric reactor showed that Na2CO3 increased the rate of calcination of CaCO3 to CaO, an effect which was slightly larger at 825 °C than at 900 °C. Peak-broadening analysis of powder X-ray diffraction data of the raw, calcined, and sulfated samples revealed an unaffected calcite crystallite size (approximately 125-170 nm) but a significant increase in the crystallite size for lime (approximately 60-90 nm to approximately 250-300 nm) and a smaller increase for anhydrite (approximately 125-150 nm to approximately 225-250 nm). The increase in the crystallite and pore size of the treated limestones is attributed to an increase in ionic mobility in the crystal lattice due to the formation of vacancies in the crystals when Ca is partly replaced by Na.
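Crystallite sizes of the order quoted here are typically extracted from peak broadening with the Scherrer relation D = Kλ/(β cos θ) or a refinement of it; the abstract does not state the exact method, so the snippet below is a generic Scherrer evaluation with an illustrative Cu Kα reflection, not the authors' analysis.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), beta = FWHM in radians."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative values: a reflection at 2-theta = 37.4 deg (Cu K-alpha)
# with 0.05 deg FWHM after instrumental correction:
print(f"D ≈ {scherrer_size(0.05, 37.4):.0f} nm")
```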
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM), to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased, it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area, between the different species. This may reflect the relatively close evolutionary relationships of the non-human great apes, a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
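One common formulation of a regression effective sample size, which I believe matches the variance of the GLS-estimated mean, is n_e = 1'R⁻¹1 for a tip correlation matrix R; this is offered as a sketch of the concept, and the four-taxon matrix below is a hand-built Brownian-motion example, not code from the paper.

```python
import numpy as np

def regression_ess(R):
    """Effective sample size n_e = 1' R^{-1} 1 for observations with
    correlation matrix R; equals n when R is the identity."""
    ones = np.ones(len(R))
    return ones @ np.linalg.solve(R, ones)

# Brownian motion on a balanced four-taxon tree: the correlation of two
# tips is the fraction of total depth they share (0.5 within cherries).
R = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
print(regression_ess(R))  # ≈ 2.67: four correlated tips carry less
                          # information than four independent ones
```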
NASA Astrophysics Data System (ADS)
de Andrade, Jailson B.; Tanner, Roger L.
A method is described for the specific collection of formaldehyde as hydroxymethanesulfonate on bisulfite-coated cellulose filters. Following extraction in aqueous acid and removal of unreacted bisulfite, the hydroxymethanesulfonate is decomposed by base, and HCHO is determined by DNPH (2,4-dinitrophenylhydrazine) derivatization and HPLC. Since the collection efficiency for formaldehyde is moderately high even when sampling ambient air at high-volume flow rates, a limit of detection of 0.2 ppbv is achieved with 30 min sampling times. Interference from acetaldehyde co-collected as 1-hydroxyethanesulfonate is <5% using this procedure. The technique shows promise both for short-term airborne sampling and as a means of collecting mg-sized samples of HCHO on an inorganic matrix for carbon isotopic analyses.
Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy
2011-02-02
In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
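The combination-trial sample size quoted above can be reproduced approximately from the Schoenfeld events formula for a log-rank test; the overall event probability used below (about one third) is my assumption, chosen to recover roughly 750 patients, not a figure from the trial protocol.

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Deaths required for a two-arm log-rank comparison (Schoenfeld)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z ** 2 / (allocation * (1 - allocation)
                                 * np.log(hr) ** 2)))

d = schoenfeld_events(0.7)            # 30% hazard reduction (HR = 0.7)
print("events =", d)                  # ≈ 247 deaths
print("patients ≈", round(d / 0.33))  # ≈ 750 if about a third of patients die
```

Scaling the factorial design is then a matter of multiplying this base size by the factors discussed in the abstract, which is where the eight-fold requirement arises.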
Effect of bait and gear type on channel catfish catch and turtle bycatch in a reservoir
Cartabiano, Evan C.; Stewart, David R.; Long, James M.
2014-01-01
Hoop nets have become the preferred gear choice to sample channel catfish Ictalurus punctatus, but the degree of bycatch can be high, especially due to the incidental capture of aquatic turtles. While exclusion and escapement devices have been developed and evaluated, few studies have examined bait choice as a method to reduce turtle bycatch. The use of Zote™ soap has shown considerable promise for reducing bycatch of aquatic turtles when used with trotlines, but its effectiveness in hoop nets has not been evaluated. We sought to determine the effectiveness of hoop nets baited with cheese bait or Zote™ soap and trotlines baited with shad or Zote™ soap as a way to sample channel catfish and prevent capture of aquatic turtles. We used a repeated-measures experimental design, and treatment combinations were randomly assigned using a Latin-square arrangement. Eight sampling locations were systematically selected and then sampled with either hoop nets or trotlines using Zote™ soap (both gears), waste cheese (hoop nets), or cut shad (trotlines). Catch rates did not statistically differ among the gear and bait-type combinations. Size bias was evident, with trotlines consistently capturing larger channel catfish than hoop nets. Results from a Monte Carlo bootstrapping procedure estimated the number of samples needed to reach predetermined levels of sampling precision to be lowest for trotlines baited with soap. Moreover, trotlines baited with soap caught no aquatic turtles, while hoop nets captured many turtles and had high mortality rates. We suggest that Zote™ soap used in combination with multiple hook sizes on trotlines may be a viable alternative for sampling channel catfish while reducing bycatch of aquatic turtles.
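A Monte Carlo bootstrapping procedure of this kind can be sketched as: resample the observed catches at increasing n until the bootstrap CV of mean CPUE meets the precision target. The catch data below are synthetic overdispersed counts, not the reservoir data, and the 25% precision target is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

def n_for_precision(catch, target_cv=0.25, n_boot=2000):
    """Smallest number of gear deployments whose bootstrapped mean CPUE
    has a relative standard error at or below the target."""
    for n in range(5, 500):
        means = [rng.choice(catch, n).mean() for _ in range(n_boot)]
        if np.std(means) / np.mean(means) <= target_cv:
            return n
    return None

# Hypothetical catch-per-set counts for one gear/bait combination:
catch = rng.negative_binomial(2, 0.3, size=60)  # overdispersed counts
print(n_for_precision(catch))
```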
Metric variation and sexual dimorphism in the dentition of Ouranopithecus macedoniensis.
Schrein, Caitlin M
2006-04-01
The fossil sample attributed to the late Miocene hominoid taxon Ouranopithecus macedoniensis is characterized by a high degree of dental metric variation. As a result, some researchers support a multiple-species taxonomy for this sample. Other researchers do not think that the sample variation is too great to be accommodated within one species. This study examines variation and sexual dimorphism in mandibular canine and postcanine dental metrics of an Ouranopithecus sample. Bootstrapping (resampling with replacement) of extant hominoid dental metric data is performed to test the hypothesis that the coefficients of variation (CV) and the indices of sexual dimorphism (ISD) of the fossil sample are not significantly different from those of modern great apes. Variation and sexual dimorphism in Ouranopithecus M₁ dimensions were statistically different from those of all extant ape samples; however, most of the dental metrics of Ouranopithecus were neither more variable nor more sexually dimorphic than those of Gorilla and Pongo. Similarly high levels of mandibular molar variation are known to characterize other fossil hominoid species. The Ouranopithecus specimens are morphologically homogeneous and it is probable that all but one specimen included in this study are from a single population. It is unlikely that the sample includes specimens of two sympatric large-bodied hominoid species. For these reasons, a single-species hypothesis is not rejected for the Ouranopithecus macedoniensis material. Correlations between mandibular first molar tooth size dimorphism and body size dimorphism indicate that O. macedoniensis and other extinct hominoids were more sexually size dimorphic than any living great apes, which suggests that social behaviors and life history profiles of these species may have been different from those of living species.
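The bootstrapping test has a simple skeleton: repeatedly draw fossil-sized samples from an extant comparator and ask how often the resampled CV reaches the fossil value. The measurements below are invented for illustration; the study's actual data and metrics are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def cv(x):
    """Coefficient of variation of a sample."""
    return x.std(ddof=1) / x.mean()

def cv_exceedance_p(extant, fossil_cv, n_fossil, n_boot=10000):
    """Fraction of fossil-sized bootstrap samples from an extant species
    whose CV is at least as large as the fossil sample's CV."""
    boots = [cv(rng.choice(extant, n_fossil)) for _ in range(n_boot)]
    return float(np.mean(np.array(boots) >= fossil_cv))

# Invented extant molar breadths (mm), pooled across sexes:
extant = rng.normal(13.0, 1.1, 80)
print(cv_exceedance_p(extant, fossil_cv=0.12, n_fossil=12))
```

A small exceedance probability would indicate that the fossil sample is more variable than single-species samples of the comparator typically allow.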
Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arciprete, F.; Fanfoni, M.; Patella, F.
2010-04-15
We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 °C with an InAs coverage slightly greater than critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm³) while annealing at temperatures greater than 420 °C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limited by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 °C the island size distribution is strongly affected by In desorption.
NASA Astrophysics Data System (ADS)
Heinze, Karsta; Frank, Xavier; Lullien-Pellerin, Valérie; George, Matthieu; Radjai, Farhang; Delenne, Jean-Yves
2017-06-01
Wheat grains can be considered as a natural cemented granular material. They are milled under high forces to produce food products such as flour. The major part of the grain is the so-called starchy endosperm. It contains stiff starch granules, which show a multi-modal size distribution, and a softer protein matrix that surrounds the granules. Experimental milling studies and numerical simulations are going hand in hand to better understand the fragmentation behavior of this biological material and to improve milling performance. We present a numerical study of the effect of granule size distribution on the strength of such a cemented granular material. Samples of bi-modal starch granule size distribution were created and submitted to uniaxial tension, using a peridynamics method. We show that, when compared to the effects of starch-protein interface adhesion and voids, the granule size distribution has a limited effect on the samples' yield stress.
Structure of Nano-sized CeO2 Materials: Combined Scattering and Spectroscopic Investigations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.
Here, the nature of nano-sized ceria (CeO2) systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancies of both Ce4+ and O2- are very close to the ideal stoichiometry, analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis revealed that the main differences observed in the pair correlations from the various X-ray and neutron diffraction techniques were attributable to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data supports this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.
Structure of Nano-sized CeO2 Materials: Combined Scattering and Spectroscopic Investigations
Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.; ...
2016-08-29
Here, the nature of nano-sized ceria (CeO2) systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancies of both Ce4+ and O2- are very close to the ideal stoichiometry, analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis revealed that the main differences observed in the pair correlations from the various X-ray and neutron diffraction techniques were attributable to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data supports this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.
Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.
Kiermeier, Andreas; Sumner, John; Jenson, Ian
2015-07-01
Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
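The qualitative conclusion, that even large increases over N-60 buy little additional protection, can be illustrated with a simple worked calculation. Assuming, for illustration only, that each sampled piece is independently contaminated with probability p, the chance a lot tests positive is 1 − (1 − p)^n:

```python
# Probability that at least one of n sampled pieces from a lot tests positive,
# assuming each piece is independently contaminated with probability p.
def p_detect(p, n):
    return 1 - (1 - p) ** n

for p in (0.001, 0.01, 0.05):
    print(f"p={p:.3f}  N-60: {p_detect(p, 60):.3f}  "
          f"N-90: {p_detect(p, 90):.3f}  N-120: {p_detect(p, 120):.3f}")
```

Lots with low-level contamination routinely escape detection at any practical n, which is consistent with the small predicted differences in illnesses between plans.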
Considerations for successful cosmogenic 3He dating in accessory phases
NASA Astrophysics Data System (ADS)
Amidon, W. H.; Farley, K. A.; Rood, D. H.
2008-12-01
We have been working to develop cosmogenic 3He dating of phases other than the commonly dated olivine and pyroxene, especially apatite and zircon. Recent work by Dunai et al. underscores that cosmogenic 3He dating is complicated by 3He production via the 6Li(n,α)3H → 3He reaction. The reacting thermal neutrons can be produced from three distinct sources: nucleogenic processes (3Henuc), muon interactions (3Hemu), and high-energy "cosmogenic" neutrons (3Hecn). Accurate cosmogenic 3He dating requires determination of the relative fractions of Li-derived and spallation-derived 3He. An important complication for the fine-grained phases we are investigating is that both spallation and the 6Li reaction eject high-energy particles, with consequences for redistribution of 3He among phases in a rock. Although shielded samples can be used to estimate 3Henuc, they do not contain the 3Hecn component produced in the near surface. To calculate this component, we propose a procedure in which the bulk rock chemistry, helium closure age, 3He concentration, grain size and Li content of the target mineral are measured in a shielded sample. The average Li content of the adjacent minerals can then be calculated, which in turn allows calculation of the 3Hecn component in surface-exposed samples of the same lithology. If identical grain sizes are used in the shielded and surface-exposed samples, then "effective" Li can be calculated directly from the shielded sample, and it may not be necessary to measure Li at all. To help validate our theoretical understanding of Li-3He production, and to constrain the geologic contexts in which cosmogenic 3He dating with zircon and apatite is likely to be successful, results are presented from four different field locations. For example, results from ~18 kyr-old moraines in the Sierra Nevada show that the combination of low Li contents and high closure ages (>50 My) creates a small 3Hecn component (2%) but a large 3Henuc component (40-70%) for zircon and apatite. In contrast, the combination of high Li contents and a young closure age (0.6 My) in rhyolite from the Coso volcanic field leads to a large 3Hecn component (30%) and a small 3Henuc component (5%) in zircon. Analysis of samples from a variety of lithologies shows that zircon and apatite tend to be low in Li (1-10 ppm), but are vulnerable to implantation of 3He from adjacent minerals due to their small grain size, especially from minerals like biotite and hornblende. This point is well illustrated by data from both the Sierra Nevada and Coso examples, in which there is a strong correlation between grain size and 3He concentration for zircons due to implantation. In contrast, very large zircons (125-150 μm width) obtained from shielded samples of the Shoshone Falls rhyolite (SW Idaho) do not contain a significant implanted component. Thus, successful 3He dating of accessory phases requires low Li content (<10 ppm) in the target mineral and either 1) low Li in adjacent minerals, or 2) the use of large grain sizes (>100 μm). In high-Li cases, the fraction of 3Henuc is minimized in samples with young helium closure ages or longer durations of exposure. However, because the 3Hecn/3Hespall ratio is fixed for a given Li content, longer exposure will not reduce the fraction of 3Hecn.
NASA Astrophysics Data System (ADS)
Steinbach, Florian; Kuiper, Ernst-Jan N.; Eichler, Jan; Bons, Paul D.; Drury, Martyn R.; Griera, Albert; Pennock, Gill M.; Weikusat, Ilka
2017-09-01
The flow of ice depends on the properties of the aggregate of individual ice crystals, such as grain size or lattice orientation distributions. Therefore, an understanding of the processes controlling ice micro-dynamics is needed to ultimately develop a physically based macroscopic ice flow law. We investigated the relevance of the process of grain dissection as a grain-size-modifying process in natural ice. For that purpose, we performed numerical multi-process microstructure modelling and analysed microstructure and crystallographic orientation maps from natural deep ice-core samples from the North Greenland Eemian Ice Drilling (NEEM) project. Full crystallographic orientations measured by electron backscatter diffraction (EBSD) have been used together with c-axis orientations from an optical technique (Fabric Analyser). Grain dissection is a feature of strain-induced grain boundary migration. During grain dissection, grain boundaries bulge into a neighbouring grain in an area of high dislocation energy and merge with the opposite grain boundary. This splits the high-dislocation-energy grain into two parts, effectively decreasing the local grain size. Currently, grain size reduction in ice is thought to be achieved by either the progressive transformation of dislocation walls into new high-angle grain boundaries, called subgrain rotation or polygonisation, or by bulging nucleation that is assisted by subgrain rotation. Both our time-resolved numerical modelling and the NEEM ice-core samples show that grain dissection is a common mechanism during ice deformation and can provide an efficient process to reduce grain sizes and counteract dynamic grain growth, in addition to polygonisation or bulging nucleation. Thus, our results show that strain-induced boundary migration alone, in the absence of subgrain rotation, can reduce grain sizes in polar ice, in particular if strain energy gradients are high. We describe the microstructural characteristics that can be used to identify grain dissection in natural microstructures.
Preliminary findings of chemistry and bioaccessibility in base metal smelter slags.
Morrison, Anthony L; Gulson, Brian L
2007-08-15
Leaching of toxic metals from slag waste produced during smelting of Pb-Zn ores is generally considered to be negligible. A 1.4 million tonne stockpile of slag containing up to 2.5% Pb and other contaminants has accumulated on a smelter site at North Lake Macquarie, New South Wales, Australia, and it has also been freely used within the community for landscaping and drainage projects. It had been suggested that Pb in fine particles derived from the slags may be a potential contributor to the blood Pb of some children in this community, although there is conflicting evidence in the literature for such a hypothesis. Bioaccessibility of lead and selected metals derived from nine slag samples collected from areas of public open space was examined using a relatively simple in vitro gastric dissolution technique. Size analyses of the slag samples demonstrate that finely-sized material was present in the slags which could be ingested, especially by children. The finer-sized particles contain high levels of Pb (6,490-41,400 ppm), along with Cd and As. Pb bioaccessibility of the slags was high, averaging 45% for −250 μm material and 75% for particles in the size range −53+32 μm. Increasing bioaccessibility and Pb concentration showed an inverse relationship to particle size. Almost 100% of Pb would be bioaccessible in the smallest slag particles (<20 μm), which also contained very high Pb levels ranging from 50,000 to 80,000 ppm and thus constitute a potential health hazard for children.
Rosenberger, Amanda E.; Dunham, Jason B.
2005-01-01
Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
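For readers unfamiliar with the two models being compared, a minimal sketch of both estimators follows; the catch counts are hypothetical, and the Chapman form of the Lincoln-Petersen estimator is used here for its small-sample bias correction.

```python
def chapman(M, C, R):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    M fish marked on pass 1, C caught on pass 2, R recaptures."""
    return (M + 1) * (C + 1) / (R + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal estimator; valid only when catches decline (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("removal model requires declining catches")
    p = (c1 - c2) / c1              # estimated per-pass capture efficiency
    return c1 ** 2 / (c1 - c2), p

print(chapman(M=45, C=50, R=18))        # hypothetical counts
print(two_pass_removal(c1=60, c2=25))
```

The removal estimator's sensitivity to declining capture efficiency is visible here: if true efficiency drops between passes, c2 stays high, the denominator shrinks less than it should, and abundance is underestimated.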
NASA Astrophysics Data System (ADS)
Chan, Y. C.; Vowles, P. D.; McTainsh, G. H.; Simpson, R. W.; Cohen, D. D.; Bailey, G. M.; McOrist, G. D.
This paper describes a method for the simultaneous collection of size-fractionated aerosol samples on several collection substrates, including glass-fibre filter, carbon tape and silver tape, with a commercially available high-volume cascade impactor. This permitted various chemical analysis procedures, including ion beam analysis (IBA), instrumental neutron activation analysis (INAA), carbon analysis and scanning electron microscopy (SEM), to be carried out on the samples.
Characterizing acoustic shocks in high-performance jet aircraft flyover noise.
Reichman, Brent O; Gee, Kent L; Neilsen, Tracianne B; Downing, J Micah; James, Michael M; Wall, Alan T; McInerny, Sally Anne
2018-03-01
Acoustic shocks have been previously documented in high-amplitude jet noise, including both the near and far fields of military jet aircraft. However, previous investigations into the nature and formation of shocks have historically concentrated on stationary, ground run-up measurements, and previous attempts to connect full-scale ground run-up and flyover measurements have omitted the effect of nonlinear propagation. This paper shows evidence for nonlinear propagation and the presence of acoustic shocks in acoustical measurements of F-35 flyover operations. Pressure waveforms, derivatives, and statistics indicate nonlinear propagation, and the resulting shock formation is significant at high engine powers. Variations due to microphone size, microphone height, and sampling rate are considered, and recommendations for future measurements are made. Metrics indicating nonlinear propagation are shown to be influenced by changes in sampling rate and microphone size, and exhibit less variation due to microphone height.
Bubble evolution in Kr-irradiated UO2 during annealing
NASA Astrophysics Data System (ADS)
He, L.; Bai, X. M.; Pakarinen, J.; Jaques, B. J.; Gan, J.; Nelson, A. T.; El-Azab, A.; Allen, T. R.
2017-12-01
Transmission electron microscopy observation of Kr bubble evolution in polycrystalline UO2 annealed at high temperature was conducted in order to understand the inert gas behavior in oxide nuclear fuel. The average diameter of intragranular bubbles increased gradually from 0.8 nm in as-irradiated sample at room temperature to 2.6 nm at 1600 °C and the bubble size distribution changed from a uniform distribution to a bimodal distribution above 1300 °C. The size of intergranular bubbles increased more rapidly than intragranular ones and bubble denuded zones near grain boundaries formed in all the annealed samples. It was found that high-angle grain boundaries held bigger bubbles than low-angle grain boundaries. Complementary atomistic modeling was conducted to interpret the effects of grain boundary character on the Kr segregation. The area density of strong segregation sites in the high-angle grain boundaries is much higher than that in the low angle grain boundaries.
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
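A simplified version of the calculation the paper describes can be sketched as follows: inflate an individually randomised sample size by a design effect reflecting the ICC and the proportion of infants who are twins, assuming twin pairs are randomised to the same arm (the published tool handles more general cases; the inputs below are illustrative).

```python
import math

def adjusted_n(n_independent, icc, prop_twins):
    """Inflate an individually-randomised sample size for clustering due to
    twins. With a fraction prop_twins of infants in clusters of size 2 and
    the rest singletons, the design effect is 1 + prop_twins * icc."""
    deff = 1 + icc * prop_twins
    return math.ceil(n_independent * deff)

print(adjusted_n(n_independent=300, icc=0.5, prop_twins=0.2))  # -> 330
```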
Donegan, Thomas M.
2018-01-01
Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs but, unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
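A minimal sketch of the proposed scoring pipeline, significance as a gateway, effect sizes from unpooled SDs, and Euclidean summation across variables, might look like the following; the Welch t-test is a stand-in for the paper's exact t-distribution-based correction, and the unpooled-SD choice (mean of the two sample SDs) is an assumption for illustration.

```python
import numpy as np
from scipy import stats

def gated_effect_size(a, b, alpha=0.05):
    """Welch t-test as a significance gateway; non-significant
    differences score zero, as the proposed framework requires."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_ind(a, b, equal_var=False)
    if p >= alpha:
        return 0.0
    sd = (a.std(ddof=1) + b.std(ddof=1)) / 2   # one unpooled-SD convention
    return abs(a.mean() - b.mean()) / sd

def population_distance(variable_pairs):
    """Euclidean summation of gated effect sizes across variables;
    variable_pairs is a list of (population_A_data, population_B_data)."""
    return np.sqrt(sum(gated_effect_size(a, b) ** 2 for a, b in variable_pairs))
```

Under this scheme, an allopatric pair would be ranked at species level only if its population_distance exceeds that of a related sympatric pair measured the same way.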
Imaging natural materials with a quasi-microscope. [spectrophotometry of granular materials
NASA Technical Reports Server (NTRS)
Bragg, S.; Arvidson, R.
1977-01-01
A Viking lander camera with auxiliary optics mounted inside the dust post was evaluated to determine its capability for imaging the inorganic properties of granular materials. During mission operations, prepared samples would be delivered to a plate positioned within the camera's field of view and depth of focus. The auxiliary optics would then allow soil samples to be imaged with an 11 μm pixel size in the broad-band (high resolution, black and white) mode, and a 33 μm pixel size in the multispectral mode. The equipment will be used to characterize: (1) the size distribution of grains produced by igneous (intrusive and extrusive) processes or by shock metamorphism; (2) the size distribution resulting from crushing, chemical alteration, or hydraulic or aerodynamic sorting; (3) the shape, degree of grain roundness, and surface texture induced by mechanical and chemical alteration; and (4) the mineralogy and chemistry of grains.
Free flux flow in two single crystals of V3Si with slightly different pinning strengths
NASA Astrophysics Data System (ADS)
Gafarov, O.; Gapud, A. A.; Moraes, S.; Thompson, J. R.; Christen, D. K.; Reyes, A. P.
2010-10-01
Results of recent measurements on two very clean, single-crystal samples of the A15 superconductor V3Si are presented. Magnetization and transport data already confirmed the "clean" quality of both samples, as manifested by: (i) a high residual resistivity ratio, (ii) very low critical current densities, and (iii) a "peak" effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted in the slightly "dirtier" sample, which consequently also has a higher critical current density Jc(H). High-current Lorentz forces are applied to mixed-state vortices in order to induce the highly ordered free flux flow (FFF) phase, using the same methods as in previous work. A traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of the flux flow resistivity ρf(H), presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes core size into account and predicts a clear deviation from BS. In this study, ρf(H) is confirmed to be consistent with the predictions of KZ, as will be discussed.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
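The simulation-based approach can be illustrated with a stripped-down, single-level version (the paper's model is multilevel and longitudinal; this sketch only shows the Sobel-test power loop, with illustrative path coefficients and no direct X-to-Y effect).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def sobel_power(n, a=0.3, b=0.3, n_sim=2000, alpha=0.05):
    """Empirical power of Sobel's test for the mediated effect a*b
    in a simple X -> M -> Y model with standard-normal errors."""
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)       # no direct effect of x on y
        ra = stats.linregress(x, m)          # a-path slope and SE
        rb = stats.linregress(m, y)          # b-path slope and SE
        z = (ra.slope * rb.slope) / np.sqrt(
            ra.slope**2 * rb.stderr**2 + rb.slope**2 * ra.stderr**2)
        hits += abs(z) > stats.norm.ppf(1 - alpha / 2)
    return hits / n_sim

print(sobel_power(100))  # scan n upward until the estimate reaches 0.80
```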
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
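The point is easily verified numerically: with the finite-population correction, the margin of error for a fixed n = 1,000 barely moves as the population grows. The population sizes below are arbitrary.

```python
import math

def moe(n, N=None, p=0.5, z=1.96):
    """95% margin of error for a proportion; N applies the
    finite-population correction, None treats N as infinite."""
    m = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        m *= math.sqrt((N - n) / (N - 1))
    return m

# The same n = 1000 poll for a town, a country, and an "infinite" population
for N in (10_000, 300_000_000, None):
    print(N, round(moe(1000, N), 4))
```

Only when the sample is a sizeable fraction of the population (the small pot) does the correction matter; otherwise the absolute n alone sets the precision.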
Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello
2012-01-01
Background Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
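The binomial operating characteristics underlying such three-class designs can be computed directly. The decision rules d1 and d2 below are illustrative placeholders, not the cut-points used in the study:

```python
from scipy.stats import binom

def mc_lqas_probs(n, d1, d2, p):
    """Classification probabilities for a three-class LQAS design:
    'low' if x <= d1 positives out of n, 'high' if x >= d2, else 'medium'."""
    p_low = binom.cdf(d1, n, p)
    p_high = binom.sf(d2 - 1, n, p)   # P(X >= d2)
    return p_low, 1 - p_low - p_high, p_high

# Operating characteristics across true prevalences for n = 15
for p in (0.05, 0.10, 0.30, 0.50, 0.70):
    lo, mid, hi = mc_lqas_probs(15, d1=1, d2=8, p=p)
    print(f"p={p:.2f}  P(low)={lo:.2f}  P(med)={mid:.2f}  P(high)={hi:.2f}")
```

Plotting these probabilities against p traces the operating characteristic curves the paper derives; the "regions where potential error is known to be high" sit near the class boundaries, where adjacent classification probabilities overlap.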
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
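For context, a standard (Buderer-style) calculation of the sample size needed to estimate sensitivity with a given precision runs as follows; the inputs are illustrative, with prevalence set near the 50.5% median reported above.

```python
import math

def n_for_sensitivity(sens, d, prevalence, z=1.96):
    """Total subjects needed so the 95% CI half-width on sensitivity is d;
    inflated by prevalence because only diseased subjects inform sensitivity."""
    n_diseased = z**2 * sens * (1 - sens) / d**2
    return math.ceil(n_diseased / prevalence)

# Expected sensitivity 0.90, +/-0.05 precision, 50.5% prevalence -> 274 subjects
print(n_for_sensitivity(0.90, 0.05, 0.505))
```

Against this benchmark, the reported mean study size of 172.6 with a large SD suggests many of the surveyed studies were too small for precise accuracy estimates.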
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
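The UCL-based recommendation can be sketched as follows: compute a one-sided upper confidence limit for the SD from the pilot sample and plug it into a standard two-group sample size formula. The pilot values below are hypothetical.

```python
import math
from scipy import stats

def sd_ucl(s, n, level=0.80):
    """One-sided upper confidence limit for the population SD,
    from a sample SD s based on n observations."""
    df = n - 1
    return s * math.sqrt(df / stats.chi2.ppf(1 - level, df))

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Standard normal-approximation sample size per group for a
    two-sample comparison of means with difference delta."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return math.ceil(2 * (sd * (za + zb) / delta) ** 2)

s_pilot, n_pilot = 40.0, 20                       # hypothetical pilot data
print(n_per_group(s_pilot, delta=22))             # plug-in sample SD
print(n_per_group(sd_ucl(s_pilot, n_pilot), 22))  # 80% UCL of SD: larger n
```

The second call returns a noticeably larger n, which is exactly the buffer against underestimated SDs that the paper argues is needed to protect the planned power.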
Disease-Concordant Twins Empower Genetic Association Studies.
Tan, Qihua; Li, Weilong; Vandin, Fabio
2017-01-01
Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to high frequency of false-positive results and lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks for different mode of genetic inheritance. In general, for achieving a power estimate of 80%, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth of the sample size of an ordinary case-control design, with variations depending on genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed a high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for highly efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
Estimating the breeding population of long-billed curlew in the United States
Stanley, T.R.; Skagen, S.K.
2007-01-01
Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.
Royer, Danielle F; Lockwood, Charles A; Scott, Jeremiah E; Grine, Frederick E
2009-10-01
Previous studies of the Middle Stone Age human remains from Klasies River have concluded that they exhibited more sexual dimorphism than extant populations, but these claims have not been assessed statistically. We evaluate these claims by comparing size variation in the best-represented elements at the site, namely the mandibular corpora and M(2)s, to that in samples from three recent human populations using resampling methods. We also examine size variation in these same elements from seven additional middle and late Pleistocene sites: Skhūl, Dolní Vestonice, Sima de los Huesos, Arago, Krapina, Shanidar, and Vindija. Our results demonstrate that size variation in the Klasies assemblage was greater than in recent humans, consistent with arguments that the Klasies people were more dimorphic than living humans. Variation in the Skhūl, Dolní Vestonice, and Sima de los Huesos mandibular samples is also higher than in the recent human samples, indicating that the Klasies sample was not unusual among middle and late Pleistocene hominins. In contrast, the Neandertal samples (Krapina, Shanidar, and Vindija) do not evince relatively high mandibular and molar variation, which may indicate that the level of dimorphism in Neandertals was similar to that observed in extant humans. These results suggest that the reduced levels of dimorphism in Neandertals and living humans may have developed independently, though larger fossil samples are needed to test this hypothesis.
The grain size(s) of Black Hills Quartzite deformed in the dislocation creep regime
NASA Astrophysics Data System (ADS)
Heilbronner, Renée; Kilian, Rüdiger
2017-10-01
General shear experiments on Black Hills Quartzite (BHQ) deformed in the dislocation creep regimes 1 to 3 have been previously analyzed using the CIP method (Heilbronner and Tullis, 2002, 2006). They are reexamined using the higher spatial and orientational resolution of EBSD. Criteria for coherent segmentations based on c-axis orientation and on full crystallographic orientations are determined. Texture domains of preferred c-axis orientation (Y and B domains) are extracted and analyzed separately. Subdomains are recognized, and their shape and size are related to the kinematic framework and the original grains in the BHQ. Grain size analysis is carried out for all samples, high- and low-strain samples, and separately for a number of texture domains. When comparing the results to the recrystallized quartz piezometer of Stipp and Tullis (2003), it is found that grain sizes are consistently larger for a given flow stress. It is therefore suggested that the recrystallized grain size also depends on texture, grain-scale deformation intensity, and the kinematic framework (of axial vs. general shear experiments).
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
Bhaskar, Anand; Song, Yun S.
2016-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011
NASA Technical Reports Server (NTRS)
Banerjee, S. K.
1974-01-01
The direction and magnitude of the natural remanent magnetization of five approximately 3-g subsamples of 72275 and 72255, and the high-field saturation magnetization, coercive force, and isothermal remanent magnetization of a 100-mg chip from each of these samples, were studied. Given an understanding of the magnetization processes, the group 1 experiments provide information about the absolute direction of the ancient magnetizing field and a qualitative estimate of its size (paleointensity). The group 2 experiments yield a quantitative estimate of the iron content and a qualitative idea of the grain sizes.
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
NASA Astrophysics Data System (ADS)
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited by the large variance of the measurements and the consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy that we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
40 CFR 86.1845-01 - Manufacturer in-use verification testing requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of test vehicles in the sample comply with the sample size requirements of this section. Any post... vehicles, light-duty trucks, and complete heavy-duty vehicles shall test, or cause to have tested a...) Low mileage testing. [Reserved] (c) High-mileage testing—(1) Test groups. Testing must be conducted...
40 CFR 86.1845-01 - Manufacturer in-use verification testing requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of test vehicles in the sample comply with the sample size requirements of this section. Any post... vehicles, light-duty trucks, and complete heavy-duty vehicles shall test, or cause to have tested a...) Low mileage testing. [Reserved] (c) High-mileage testing—(1) Test groups. Testing must be conducted...
40 CFR 86.1845-01 - Manufacturer in-use verification testing requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of test vehicles in the sample comply with the sample size requirements of this section. Any post... vehicles, light-duty trucks, and complete heavy-duty vehicles shall test, or cause to have tested a...) Low mileage testing. [Reserved] (c) High-mileage testing—(1) Test groups. Testing must be conducted...
40 CFR 86.1845-01 - Manufacturer in-use verification testing requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of test vehicles in the sample comply with the sample size requirements of this section. Any post... vehicles, light-duty trucks, and complete heavy-duty vehicles shall test, or cause to have tested a...) Low mileage testing. [Reserved] (c) High-mileage testing—(1) Test groups. Testing must be conducted...
Construction of the Examination Stress Scale for Adolescent Students
ERIC Educational Resources Information Center
Sung, Yao-Ting; Chao, Tzu-Yang
2015-01-01
The tools used for measuring examination stress have three main limitations: the samples selected, the sample sizes, and the measurement contents. In this study, we constructed the Examination Stress Scale (ExamSS), and 4,717 high school students participated in this research. The results indicate that ExamSS has satisfactory reliability, construct validity,…
Effect of size on structural, optical and magnetic properties of SnO2 nanoparticles
NASA Astrophysics Data System (ADS)
Thamarai Selvi, E.; Meenakshi Sundar, S.
2017-07-01
Tin oxide (SnO2) nanostructures were synthesized by a microwave-oven-assisted solvothermal method, with and without a cetyl trimethyl ammonium bromide (CTAB) capping agent. XRD confirmed the pure rutile-type tetragonal phase of SnO2 for both uncapped and capped samples. The presence of functional groups was analyzed by Fourier transform infrared spectroscopy. Scanning electron microscopy shows the morphology of the samples, and transmission electron microscopy images revealed the size of the SnO2 nanostructures. The surface defect-related g factor of the SnO2 nanoparticles was determined using fluorescence spectroscopy. For both uncapped and capped samples, the UV-visible spectrum shows a blue shift in the absorption edge due to the quantum confinement effect. Defect-related bands were identified by electron paramagnetic resonance (EPR) spectroscopy. The magnetic properties were studied using a vibrating sample magnetometer (VSM). A high magnetic moment of 0.023 emu g-1 at room temperature was observed for the uncapped SnO2 nanoparticles, and capping with CTAB enhanced the saturation magnetic moment to 0.081 emu g-1 by altering the electronic configuration at the surface.
NASA Technical Reports Server (NTRS)
Wilcox, Mike
1993-01-01
The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.
ERIC Educational Resources Information Center
Perfetto, John Charles; Holland, Glenda; Davis, Rebecca; Fedynich, La Vonne
2013-01-01
This study was conducted to determine the themes present in the context of high schools, to determine any significant differences in themes for high and low performing high schools, and to determine if significant differences were present for the same sample of high schools based on school size. An analysis of the content of mission statements…
Analysis of variability in additive manufactured open cell porous structures.
Evans, Sam; Jones, Eric; Fox, Pete; Sutcliffe, Chris
2017-06-01
In this article, a novel method of analysing build consistency of additively manufactured open cell porous structures is presented. Conventionally, methods such as micro computed tomography or scanning electron microscopy imaging have been applied to the measurement of geometric properties of porous material; however, high costs and low speeds make them unsuitable for analysing high volumes of components. Recent advances in the image-based analysis of open cell structures have opened up the possibility of qualifying variation in manufacturing of porous material. Here, a photogrammetric method of measurement, employing image analysis to extract values for geometric properties, is used to investigate the variation between identically designed porous samples measuring changes in material thickness and pore size, both intra- and inter-build. Following the measurement of 125 samples, intra-build material thickness showed variation of ±12%, and pore size ±4% of the mean measured values across five builds. Inter-build material thickness and pore size showed mean ranges higher than those of intra-build, ±16% and ±6% of the mean material thickness and pore size, respectively. Acquired measurements created baseline variation values and demonstrated techniques suitable for tracking build deviation and inspecting additively manufactured porous structures to indicate unwanted process fluctuations.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate them. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
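Under a negative binomial model with common k, the fixed-precision sample size has a simple closed form, n = (1/m + 1/k)/D², where m is the mean density per quadrat and D is the target ratio of standard error to mean. A quick sketch using the paper's common k, with an assumed 25% precision target:

```python
import math

def quadrats_needed(mean, k=0.3742, D=0.25):
    """Quadrats required for relative precision D = SE/mean,
    assuming negative binomial counts with common k."""
    return math.ceil((1 / mean + 1 / k) / D**2)

for m in (0.02, 0.05, 0.10, 0.50):
    print(f"mean={m:.2f} ticks/quadrat -> n={quadrats_needed(m)}")
```

The 1/m term dominates at the low densities typical of off-host ticks, which is why sparse populations demand very large numbers of quadrats regardless of the aggregation parameter.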
High-Field Liquid-State Dynamic Nuclear Polarization in Microliter Samples.
Yoon, Dongyoung; Dimitriadis, Alexandros I; Soundararajan, Murari; Caspers, Christian; Genoud, Jeremy; Alberti, Stefano; de Rijk, Emile; Ansermet, Jean-Philippe
2018-05-01
Nuclear hyperpolarization in the liquid state by dynamic nuclear polarization (DNP) has been of great interest because of its potential use in NMR spectroscopy of small samples of biological and chemical compounds in aqueous media. Liquid-state DNP generally requires microwave resonators in order to generate an alternating magnetic field strong enough to saturate electron spins in the solution. As a consequence, the sample size is limited to dimensions of the order of the wavelength, which restricts the sample volume to less than 100 nL for DNP at 9 T (∼260 GHz). We show here a new approach that overcomes this sample size limitation. Large saturation of electron spins was obtained with a high-power (∼150 W) gyrotron without microwave resonators. Since high-power microwaves can cause serious dielectric heating in polar solutions, we designed a planar probe which effectively alleviates dielectric heating. A thin liquid sample 100 μm thick is placed on a block of high-thermal-conductivity aluminum nitride, with a gold coating that serves both as a ground plane and as a heat sink. A meander or a coil was used for NMR. We performed 1H DNP at 9.2 T (∼260 GHz) and at room temperature with 10 μL of water, a volume more than 100× larger than reported so far. The 1H NMR signal is enhanced by a factor of about -10 with 70 W of microwave power. We also demonstrated liquid-state 31P DNP in fluorobenzene containing triphenylphosphine and obtained an enhancement of ∼200.
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
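Both rules are easy to evaluate numerically once a total cost function is specified. The cost function below, with a fixed start-up cost and rising marginal cost per subject, is purely an assumption for illustration (with strictly linear costs the average cost per subject keeps falling, so rule 1 needs increasing marginal costs to have an interior minimum).

```python
import numpy as np

def best_n(total_cost, n_max=5000):
    """The two cost-efficiency choices, evaluated numerically:
    (1) n minimising average cost per subject;
    (2) n minimising total cost / sqrt(n)."""
    n = np.arange(1, n_max + 1)
    c = np.array([total_cost(i) for i in n], dtype=float)
    return n[np.argmin(c / n)], n[np.argmin(c / np.sqrt(n))]

# Assumed cost: fixed start-up plus per-subject cost with a quadratic term
cost = lambda n: 50_000 + 300 * n + 0.2 * n**2
print(best_n(cost))  # approximately (500, 132) under this cost function
```

Note how the sqrt(n) rule selects a much smaller study than the average-cost rule, reflecting its stronger discounting of the diminishing returns from additional subjects.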
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
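For reference, the asymptotic McNemar sample size can be computed from the hypothesized discordant proportions of the 2 × 2 table. A minimal sketch using the standard normal-approximation formula (often attributed to Connor), with illustrative inputs:

```python
from math import sqrt, ceil
from scipy.stats import norm

def mcnemar_sample_size(p12, p21, alpha=0.05, power=0.80):
    """Pairs needed for the asymptotic McNemar test.

    p12, p21: hypothesized discordant cell proportions of the 2x2 table.
    """
    pd = p12 + p21            # total discordant proportion
    delta = p12 - p21         # difference in marginal proportions
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    return ceil((za * sqrt(pd) + zb * sqrt(pd - delta ** 2)) ** 2 / delta ** 2)

print(mcnemar_sample_size(p12=0.15, p21=0.05))  # -> 155 pairs for these inputs
```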
Mesoporous Akaganeite of Adjustable Pore Size Synthesized using Mixed Templates
NASA Astrophysics Data System (ADS)
Zhang, Y.; Ge, D. L.; Ren, H. P.; Fan, Y. J.; Wu, L. M.; Sun, Z. X.
2017-12-01
Mesoporous akaganeite with large and adjustable pore size was synthesized through a co-template method, achieved by the combined interaction between PEG2000 and alkyl amines with different straight-carbon-chain lengths. Characterization results indicate that the synthesized samples show comparatively narrow BJH pore size distributions, centered at 14.3 nm when PEG and HEPA were used; the pore size could be enlarged to 16.8 and 19.4 nm by changing the alkyl amine to DDA and HDA, respectively. Meanwhile, all the synthesized akaganeite samples possess relatively high specific surface areas ranging from 183 to 281 m2/g and high total pore volumes of 0.98 to 1.5 cm3/g. A possible mechanism for the change in pore size is also proposed.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables, and erroneously suggest a high level of support for the incorrectly ranked best model. These problems worsened with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size (a distance matrix contains far more pairwise entries than there are independent sampled locations) and the different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
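The sample size inflation at the heart of this failure is easy to see: m sampled locations yield m(m-1)/2 pairwise distances, and it is this pair count that enters the information criteria penalties. A minimal sketch with the usual Gaussian-regression forms of AIC, AICc, and BIC (one common convention; additive constants are dropped):

```python
import numpy as np

def ic_from_rss(rss, n, k):
    """AIC, AICc, BIC for a Gaussian regression with k fitted parameters."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic

# With m sampled locations, MRM regresses on m*(m-1)/2 pairwise distances,
# so the "sample size" n entering the penalties is inflated:
m = 50
n_pairs = m * (m - 1) // 2   # 1225 rows from only 50 independent locations
print(ic_from_rss(rss=100.0, n=n_pairs, k=4))
```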
Experimental study on microsphere assisted nanoscope in non-contact mode
NASA Astrophysics Data System (ADS)
Ling, Jinzhong; Li, Dancui; Liu, Xin; Wang, Xiaorui
2018-07-01
The microsphere-assisted nanoscope has been proposed in the existing literature to capture super-resolution images of nano-structures beneath a microsphere attached to the sample surface. In this paper, a microsphere-assisted nanoscope working in non-contact mode is designed and demonstrated, in which the microsphere is held at a controlled gap above the sample surface. With this gap, the microsphere can be moved parallel to the sample surface non-invasively, so as to observe all areas of interest. Furthermore, the influence of gap size on image resolution is studied experimentally. Only when the microsphere is close enough to the sample surface can a super-resolution image be obtained. Generally, the resolution decreases as the gap increases, because the contribution of the evanescent wave disappears. To keep an appropriate gap size, a quantitative method is implemented to estimate the gap variation by observing Newton's rings around the microsphere, serving as real-time feedback for tuning the gap size. With a constant gap, a large-area image with high resolution can be obtained during microsphere scanning. Our study of the non-contact mode makes the microsphere-assisted nanoscope more practicable and easier to implement.
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra
2015-01-01
The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...
2015-08-19
We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.
Tomyn, Ronald L; Sleeth, Darrah K; Thiese, Matthew S; Larson, Rodney R
2016-01-01
In addition to chemical composition, the site of deposition of inhaled particles is important for determining the potential health effects from an exposure. As a result, the International Organization for Standardization adopted a particle deposition sampling convention. This includes extrathoracic particle deposition sampling conventions for the anterior nasal passages (ET1) and the posterior nasal and oral passages (ET2). This study assessed how well a polyurethane foam insert placed in an Institute of Occupational Medicine (IOM) sampler can match an extrathoracic deposition sampling convention, while accounting for possible static buildup in the test particles. In this way, the study aimed to assess whether neutralized particles affected the performance of this sampler for estimating extrathoracic particle deposition. A total of three different particle sizes (4.9, 9.5, and 12.8 µm) were used. For each trial, one particle size was introduced into a low-speed wind tunnel with the wind speed set at 0.2 m/s (∼40 ft/min). This wind speed was chosen to closely match the conditions of most indoor working environments. Each particle size was tested twice: once neutralized using a high-voltage neutralizer, and once left in its normal (non-neutralized) state as standard particles. IOM samplers were fitted with a polyurethane foam insert and placed on a rotating mannequin inside the wind tunnel. Foam sampling efficiencies were calculated for all trials for comparison against the normalized ET1 sampling deposition convention. The foam sampling efficiencies matched the ET1 deposition convention well for the larger particle sizes, but showed a general trend of underestimation for all three particle sizes. The results of a Wilcoxon rank sum test also showed that only at 4.9 µm was there a statistically significant difference (p-value = 0.03) between the foam sampling efficiency using the standard particles and the neutralized particles. This is interpreted to mean that static buildup may be occurring, and that neutralizing the 4.9 µm particles did affect the performance of the foam sampler when estimating extrathoracic particle deposition.
Effect of the microstructure on electrical properties of high-purity germanium
NASA Astrophysics Data System (ADS)
Podkopaev, O. I.; Shimanskii, A. F.; Molotkovskaya, N. O.; Kulakovskaya, T. V.
2013-05-01
The interrelation between the electrical properties and the microstructure of high-purity germanium crystals has been revealed. The electrical conductivity of polycrystalline samples increases, and the lifetime of nonequilibrium charge carriers in them decreases, with decreasing crystallite size.
Relationship between Spiritual Intelligence and Job Satisfaction among Female High School Teachers
ERIC Educational Resources Information Center
Zamani, Mahmmood Reza; Karimi, Fariba
2015-01-01
The present paper aims to study the relationship between spiritual intelligence and job satisfaction among female high school teachers in Isfahan. It was a descriptive-correlational study. The population included all female high school teachers of Isfahan in the academic year 2013-2014. The calculated sample size was 320 teachers, using Krejcie and…
1996-06-10
The dart and associated launching system was developed by engineers at MSFC to collect a sample of the aluminum oxide particles during static fire testing of the Shuttle's solid rocket motor. The dart is launched through the exhaust and recovered after the test. The particles are collected on sticky copper tapes affixed to a cylindrical shaft in the dart. A protective sleeve draws over the tape after the sample is collected to prevent contamination. The sample is analyzed under a scanning electron microscope at high magnification, and a particle size distribution is determined. This size distribution is input into the analytical model to predict the radiative heating rates from the motor exhaust. Good prediction models are essential to optimizing the development of the thermal protection system for the Shuttle.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the sEMG PDF shape behavior observed during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample effects and more accurate in sEMG PDF shape screening applications.
Formation of metallic clusters in oxide insulators by means of ion beam mixing
NASA Astrophysics Data System (ADS)
Talut, G.; Potzger, K.; Mücklich, A.; Zhou, Shengqiang
2008-04-01
The intermixing and near-interface cluster formation of Pt and FePt thin films deposited on different oxide surfaces by means of Pt+ ion irradiation and subsequent annealing were investigated. Irradiated as well as post-annealed samples were investigated using high-resolution transmission electron microscopy. In MgO and Y:ZrO2 covered with Pt, crystalline clusters with mean sizes of 2 and 3.5 nm were found after the Pt+ irradiations with 8×10^15 and 2×10^16 cm^-2 and subsequent annealing, respectively. In MgO samples covered with FePt, clusters with mean sizes of 1 and 2 nm were found after the Pt+ irradiations with 8×10^15 and 2×10^16 cm^-2 and subsequent annealing, respectively. In Y:ZrO2 samples covered with FePt, clusters up to 5 nm in size were found after the Pt+ irradiation with 2×10^16 cm^-2 and subsequent annealing. In LaAlO3 the irradiation was accompanied by full amorphization of the host matrix and the appearance of embedded clusters of different sizes. The determination of the lattice constant, and thus the nature of the clusters, in samples covered by FePt was hindered by strong deflection of the electron beam by the ferromagnetic FePt.
Potential Reporting Bias in Neuroimaging Studies of Sex Differences.
David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A
2018-04-17
Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To empirically evaluate this literature for evidence of excess significance bias, we searched Medline and Scopus over 10 years for published fMRI studies of the human brain that evaluated sex differences, regardless of the topic investigated. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and the number of significant foci identified. In the absence of bias, larger (better-powered) studies should identify a larger number of significant foci. Across 179 papers, the median sample size was n = 32 (interquartile range, 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles had titles focused on no differences (n = 2) or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstracts and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% change for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.
Shamey, Renzo; Zubair, Muhammad; Cheema, Hammad
2015-08-01
The aim of this study was twofold: first, to determine the effect of field-of-view size and, second, the effect of illumination conditions on the selection of unique hue samples (UHs: R, Y, G and B) from two rotatable trays, each containing forty highly chromatic Natural Color System (NCS) samples, one tray corresponding to a 1.4° and the other to a 5.7° field of view. UH selections were made by 25 color-normal observers who repeated the assessments three times with a gap of at least 24 h between trials. Observers separately assessed UHs under four illumination conditions simulating illuminants D65, A, F2 and F11. An apparent hue shift (statistically significant for UR) was noted for UH selections at the 5.7° field of view compared with those at 1.4°. Observers' overall variability was found to be higher for UH stimuli selections at the larger field of view. Intra-observer variability was approximately 18.7% of inter-observer variability in the selection of samples for both sample sizes. The highest intra-observer variability was under simulated illuminant D65, followed by A, F11, and F2. Copyright © 2015 Elsevier Ltd. All rights reserved.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve the characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both the arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies of similar source areas. The majority of particle mass in four of the six source areas consisted of silt and clay particles less than 32 µm in size. Distributions of particles up to 500 µm in size were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population growth. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating this bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5 times for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, the use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
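For readers unfamiliar with the estimator, θ_W divides the number of segregating sites S by the harmonic number a_n = Σ_{i=1}^{n-1} 1/i, which is what makes it stable in sample size under neutrality; under selection, rare variants accumulate faster than a_n grows, producing the sample-size dependence described above. A minimal sketch:

```python
import numpy as np

def watterson_theta(num_segregating, n_sequences):
    """Watterson's estimator: theta_W = S / a_n with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = np.sum(1.0 / np.arange(1, n_sequences))
    return num_segregating / a_n

# Under neutrality S grows roughly like a_n, so theta_W is stable in n;
# under purifying selection rare variants pile up and S grows faster,
# inflating theta_W as more sequences are added.
for n in (16, 512):
    print(n, watterson_theta(num_segregating=100, n_sequences=n))
```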
In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.
Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele
2017-04-19
During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted only within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared with dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (<31 μm, depending on the grain size classification used). This relatively high number fraction represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also influenced the particle load composition of the dust devil vortices. The internal particle load structure of the two sampled dust devils was comparable in terms of vertical grain size distribution and relative particle load, although the dust devils differed in dimensions and intensity. A general trend of decreasing grain size with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
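A toy version of the decision-theoretic calculation can illustrate the O(N^1/2) behavior: maximize an assumed utility, power times the gain accruing to the remaining N - n patients minus a per-patient trial cost, over the trial size n. All parameter values below are illustrative, and the utility is far simpler than the exponential-family formulation in the paper:

```python
import numpy as np
from scipy.stats import norm

def optimal_trial_size(N, delta=0.5, sigma=1.0, alpha=0.05, cost=0.01):
    """Grid-search the total trial size n (two equal arms) maximizing
    power(n) * (N - n) - cost * n, with gain per future patient set to 1."""
    n = np.arange(2, N, 2)
    power = norm.cdf(np.sqrt(n / 4) * delta / sigma - norm.ppf(1 - alpha / 2))
    utility = power * (N - n) - cost * n
    return n[np.argmax(utility)]

for N in (1_000, 10_000, 100_000):
    print(N, optimal_trial_size(N))   # optimal n grows roughly like sqrt(N)
```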
MSeq-CNV: accurate detection of Copy Number Variation from Sequencing of Multiple samples.
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2018-03-05
Currently only a few tools are capable of detecting genome-wide Copy Number Variations (CNVs) based on sequencing of multiple samples. Although aberrations in mate pair insertion sizes provide additional hints for CNV detection based on multiple samples, the majority of current tools rely only on the depth of coverage. Here, we propose a new algorithm (MSeq-CNV) for detecting common CNVs across multiple samples. MSeq-CNV applies a mixture density for modeling aberrations in depth of coverage and abnormalities in the mate pair insertion sizes. Each component in this mixture density applies a Binomial distribution for modeling the number of mate pairs with aberration in the insertion size, and a Poisson distribution for emitting the read counts, at each genomic position. MSeq-CNV is applied to simulated data and also to real data from six HapMap individuals with high-coverage sequencing in the 1000 Genomes Project. These individuals include a CEU trio of European ancestry and a YRI trio of Nigerian ethnicity. The ancestry of these individuals is studied by clustering the identified CNVs. MSeq-CNV is also applied to detect CNVs in two samples with low-coverage sequencing in the 1000 Genomes Project and six samples from the Simons Genome Diversity Project.
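A sketch of the kind of per-position mixture component the abstract describes (a Binomial term for aberrant-insert mate pairs and a Poisson term for read depth) is given below. This illustrates the model family only, not MSeq-CNV's actual parameterization; all numeric inputs are hypothetical:

```python
from scipy.stats import binom, poisson

def component_loglik(read_count, n_aberrant, n_pairs, lam, p_aberrant):
    """Log-likelihood of one mixture component at one genomic position:
    Binomial for mate pairs with aberrant insertion size, Poisson for
    the read count."""
    return (binom.logpmf(n_aberrant, n_pairs, p_aberrant)
            + poisson.logpmf(read_count, lam))

# e.g. a heterozygous-deletion-like component might expect roughly half
# the diploid depth and an elevated aberrant-insert fraction (illustrative):
print(component_loglik(read_count=15, n_aberrant=8, n_pairs=30,
                       lam=15.0, p_aberrant=0.25))
```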
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
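Once SOM map units are treated as strata, the allocation step is classical Neyman allocation, with each stratum receiving samples in proportion to its size times its within-stratum spread. A minimal sketch with assumed stratum sizes and standard deviations (illustrative inputs, not the paper's guidelines):

```python
import numpy as np

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman allocation: n_h proportional to N_h * sigma_h, with SOM map
    units playing the role of strata."""
    weights = np.asarray(stratum_sizes) * np.asarray(stratum_sds)
    alloc = n_total * weights / weights.sum()
    return np.rint(alloc).astype(int)

# Four SOM units of different occupancy and within-unit spread:
print(neyman_allocation(100, stratum_sizes=[200, 120, 60, 20],
                        stratum_sds=[0.5, 1.2, 2.0, 0.8]))
```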
Li, Ya; Fu, Qiang; Liu, Meng; Jiao, Yuan-Yuan; Du, Wei; Yu, Chong; Liu, Jing; Chang, Chun; Lu, Jian
2012-01-01
In order to prepare a high-capacity packing material for solid-phase extraction with specific recognition ability for trace ractopamine in biological samples, uniformly sized molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linker, and toluene as the porogen. Scanning electron microscopy and specific surface area measurements were employed to characterize the MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and kinetic studies were performed to interpret the specific recognition ability and the binding process of the MIPs. The results showed that, compared with other reports, the MIPs synthesized in this study exhibited high adsorption capacity in addition to specific recognition ability. The adsorption capacity of the MIPs was 0.063 mmol/g at a 1 mmol/L ractopamine concentration, with a distribution coefficient of 1.70. The resulting MIPs could be used as solid-phase extraction materials for the separation and enrichment of trace ractopamine in biological samples. PMID:29403774
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the sensitivity and specificity test formulation using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
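The underlying calculation such tables encode can be sketched directly: size the diseased subgroup so that sensitivity is estimated to a desired precision, then scale up by prevalence (a Buderer-style normal approximation; the inputs below are illustrative):

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, precision, prevalence, alpha=0.05):
    """Minimum total sample size so that sensitivity is estimated within
    +/- precision, using a normal approximation for a proportion."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z ** 2 * se * (1 - se) / precision ** 2
    return ceil(n_diseased / prevalence)   # scale by disease prevalence

print(n_for_sensitivity(se=0.90, precision=0.05, prevalence=0.20))  # -> 692
```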
Bon-EV: an improved multiple testing procedure for controlling false discovery rates.
Li, Dongmei; Xie, Zidian; Zand, Martin; Fogg, Thomas; Dye, Timothy
2017-01-03
Stability of multiple testing procedures, defined as the standard deviation of the total number of discoveries, can be used as an indicator of the variability of multiple testing procedures. Improving the stability of multiple testing procedures can help to increase the consistency of findings from replicated experiments. The Benjamini-Hochberg and Storey's q-value procedures are two commonly used multiple testing procedures for controlling false discoveries in genomic studies. Storey's q-value procedure has higher power and lower stability than the Benjamini-Hochberg procedure. To improve upon the stability of Storey's q-value procedure and maintain its high power in genomic data analysis, we propose a new multiple testing procedure, named Bon-EV, to control the false discovery rate (FDR) based on Bonferroni's approach. Simulation studies show that our proposed Bon-EV procedure can maintain the high power of Storey's q-value procedure and also result in better FDR control and higher stability than Storey's q-value procedure for samples of large size (30 in each group) and medium size (15 in each group) for independent, somewhat correlated, or highly correlated test statistics. When the sample size is small (5 in each group), our proposed Bon-EV procedure has performance between that of the Benjamini-Hochberg procedure and Storey's q-value procedure. Examples using RNA-Seq data show that the Bon-EV procedure has higher stability than Storey's q-value procedure while maintaining equivalent power, and higher power than the Benjamini-Hochberg procedure. For medium or large sample sizes, the Bon-EV procedure has improved FDR control and stability compared with Storey's q-value procedure and improved power compared with the Benjamini-Hochberg procedure. The Bon-EV multiple testing procedure is available as the BonEV package in R for download at https://CRAN.R-project.org/package=BonEV .
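For context, the Benjamini-Hochberg step-up procedure that Bon-EV is benchmarked against can be written in a few lines; this is the baseline procedure only, not the Bon-EV algorithm itself:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejections controlling the FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*q/m
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2]))
```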
Use of a miniature diamond-anvil cell in high-pressure single-crystal neutron Laue diffraction
Binns, Jack; Kamenev, Konstantin V.; McIntyre, Garry J.; Moggach, Stephen A.; Parsons, Simon
2016-01-01
The first high-pressure neutron diffraction study in a miniature diamond-anvil cell of a single crystal of size typical for X-ray diffraction is reported. This is made possible by modern Laue diffraction using a large solid-angle image-plate detector. An unexpected finding is that even reflections whose diffracted beams pass through the cell body are reliably observed, albeit with some attenuation. The cell body does limit the range of usable incident angles, but the crystallographic completeness for a high-symmetry unit cell is only slightly less than for a data collection without the cell. Data collections for two sizes of hexamine single crystals, with and without the pressure cell, and at 300 and 150 K, show that sample size and temperature are the most important factors that influence data quality. Despite the smaller crystal size and dominant parasitic scattering from the diamond-anvil cell, the data collected allow a full anisotropic refinement of hexamine with bond lengths and angles that agree with literature data within experimental error. This technique is shown to be suitable for low-symmetry crystals, and in these cases the transmission of diffracted beams through the cell body results in much higher completeness values than are possible with X-rays. The way is now open for joint X-ray and neutron studies on the same sample under identical conditions. PMID:27158503
NASA Astrophysics Data System (ADS)
Chakraborty, Abhishek; Ervens, Barbara; Gupta, Tarun; Tripathi, Sachchida N.
2016-04-01
Size-resolved fog water samples were collected during two consecutive winters at Kanpur, a heavily polluted urban area of India. Samples were analyzed with an aerosol mass spectrometer after drying, and directly with other instruments. Residues of fine fog droplets (diameter 4-16 µm) are found to be more enriched in oxidized (oxygen-to-carbon ratio, O/C = 0.88) and low-volatility organics than residues of coarse (diameter > 22 µm) and medium-size (diameter 16-22 µm) droplets, with O/C of 0.68 and 0.74, respectively. These O/C ratios are much higher than those observed for background ambient organic aerosols, indicating efficient oxidation in fog water. Accompanying box model simulations reveal that longer residence times, together with high aqueous OH concentrations in fine droplets, can explain these trends. High aqueous OH concentrations in smaller droplets are caused by their higher surface-to-volume ratio and high Fe and Cu concentrations, allowing more uptake of gas-phase OH and enhanced Fenton reaction rates, respectively. Although some volatile organic species may have escaped during droplet evaporation, these findings indicate that aqueous processing of dissolved organics varies with droplet size. Therefore, large-scale (regional, global) models need to consider variable reaction rates, together with metal-catalyzed radical formation, throughout droplet populations to accurately predict aqueous secondary organic aerosol formation.
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focusing on two-arm, parallel-group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were performed only for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.
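The kind of calculation being audited here is the standard two-sample formula, and it shows directly how optimistic effect size assumptions shrink the planned trial. A minimal sketch with illustrative numbers (not values from the reviewed RCTs):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means
    (standard normal-approximation formula)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * ((za + zb) * sd / delta) ** 2)

# An "optimistic" assumed effect of 10 units vs the 7 actually observed:
print(n_per_group(delta=10, sd=20))   # 63 per group as planned
print(n_per_group(delta=7,  sd=20))   # 129 per group actually needed
```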
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false-positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-powered studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
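The roughly 50% power figure can be reproduced approximately with a Fisher z power calculation, assuming a typical effect size near r = .20 (an assumed value for illustration; the authors' estimate of the typical effect may differ):

```python
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    """Approximate power to detect a correlation r at sample size n,
    via the Fisher z transformation."""
    z_effect = atanh(r) * sqrt(n - 3)
    return norm.cdf(z_effect - norm.ppf(1 - alpha / 2))

# With the field's average n = 104 and a typical effect near r = .20,
# power lands close to the ~50% figure reported by the authors:
print(round(corr_power(r=0.20, n=104), 2))
```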
Drop size distributions and related properties of fog for five locations measured from aircraft
NASA Technical Reports Server (NTRS)
Zak, J. Allen
1994-01-01
Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration is shown in three-dimensional plots for each 10-meter altitude interval from 1-minute samples. Also shown are median volume radius and liquid water content. Advection fogs contained the largest drops, with median volume radii of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples, with a peak concentration of small drops (2-5 micrometers) at low altitudes, a mid-altitude peak of drops of 5-11 micrometers, and a high-altitude peak of larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results on gross fog properties, although there is considerable variation with time and altitude even within the same type of fog.
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
Mozley, Peter S.; Heath, Jason E.; Dewers, Thomas A.; ...
2016-01-01
The Mount Simon Sandstone and Eau Claire Formation represent a principal reservoir-caprock system for wastewater disposal, geologic CO2 storage, and compressed air energy storage (CAES) in the Midwestern United States. Of primary concern to site performance is heterogeneity in flow properties that could lead to non-ideal injectivity and distribution of injected fluids (e.g., poor sweep efficiency). Using core samples from the Dallas Center Structure, Iowa, we investigate the pore structure that governs flow properties of major lithofacies of these formations. Methods include gas porosimetry and permeametry, mercury intrusion porosimetry, thin section petrography, and X-ray diffraction. The lithofacies exhibit highly variable intra- and inter-formational distributions of pore throat and body sizes. Based on pore-throat size, samples fall into four distinct groups. Micropore-throat dominated samples are from the Eau Claire Formation, whereas the macropore-, mesopore-, and uniform-dominated samples are from the Mount Simon Sandstone. Complex paragenesis governs the high degree of pore and pore-throat size heterogeneity, due to an interplay of precipitation, non-uniform compaction, and later dissolution of cements. Furthermore, the cement dissolution event probably accounts for much of the current porosity in the unit. The unusually heterogeneous nature of the pore networks in the Mount Simon Sandstone indicates that there is a greater-than-normal opportunity for reservoir capillary trapping of non-wetting fluids — as quantified by CO2 and air column heights — which should be taken into account when assessing the potential of the reservoir-caprock system for CO2 storage and CAES.
Rostad, C.E.; Rees, T.F.; Daniel, S.R.
1998-01-01
An on-board technique was developed that combined discharge-weighted pumping to a high-speed continuous-flow centrifuge for isolation of the particulate-sized material with ultrafiltration for isolation of colloid-sized material. To address whether these processes changed particle sizes during isolation, samples of particles in suspension were collected at various steps of the isolation process. Particle sizes were determined using laser light-scattering photon correlation spectroscopy and indicated no change in size during the colloid isolation process. Mississippi River colloid particle sizes from twelve sites from Minneapolis to below New Orleans were compared with sizes from four tributaries, across three seasons, and from predominantly autochthonous sources upstream to more allochthonous sources downstream. © 1998 John Wiley & Sons, Ltd.
Infrared thermal wave nondestructive technology on the defect in the shell of solid rocket motor
NASA Astrophysics Data System (ADS)
Zhang, Wei; Song, Yuanjia; Yang, Zhengwei; Li, Ming; Tian, Gan
2010-10-01
Based on active infrared thermography nondestructive testing (NDT) technology, an emerging method developed in the aviation, spaceflight and national defence areas, samples of Solid Rocket Motor (SRM) shell materials (a glass fiber flat-bottom-hole sample, a glass fiber inclusion sample and a steel flat-bottom-hole sample) were heated by a high-energy flash lamp. Subsurface flaws can be detected by measuring the temperature difference between flaws and the surrounding material. The results of the experiments show that: 1) The technique is a fast and effective inspection method, and detects flaws in composites more easily than in metals; it can also preliminarily identify defect position and size from the thermal image maps. 2) There exists an optimal inspection time, at which the area of the hot spot equals that of the defect, which can be used to estimate the defect size. The larger the defect area, the more easily it can be detected and the smaller the error in estimating its area. 3) The infrared thermal images obtained from the experiments always have high noise, especially for metal materials, owing to high reflectivity and environmental factors, and need further processing.
Masson, M; Angot, H; Le Bescond, C; Launay, M; Dabrin, A; Miège, C; Le Coz, J; Coquery, M
2018-05-10
Monitoring hydrophobic contaminants in surface freshwaters requires measuring contaminant concentrations in the particulate fraction (sediment or suspended particulate matter, SPM) of the water column. Particle traps (PTs) have been recently developed to sample SPM as cost-efficient, easy to operate and time-integrative tools. But the representativeness of SPM collected with PTs is not fully understood, notably in terms of grain size distribution and particulate organic carbon (POC) content, which could both skew particulate contaminant concentrations. The aim of this study was to evaluate the representativeness of SPM characteristics (i.e. grain size distribution and POC content) and associated contaminants (i.e. polychlorinated biphenyls, PCBs; mercury, Hg) in samples collected in a large river using PTs for differing hydrological conditions. Samples collected using PTs (n = 74) were compared with samples collected during the same time period by continuous flow centrifugation (CFC). The grain size distribution of PT samples shifted with increasing water discharge: the proportion of very fine silts (2-6 μm) decreased while that of coarse silts (27-74 μm) increased. Regardless of water discharge, POC contents were different likely due to integration by PT of high POC-content phytoplankton blooms or low POC-content flood events. Differences in PCBs and Hg concentrations were usually within the range of analytical uncertainties and could not be related to grain size or POC content shifts. Occasional Hg-enriched inputs may have led to higher Hg concentrations in a few PT samples (n = 4) which highlights the time-integrative capacity of the PTs. The differences of annual Hg and PCB fluxes calculated either from PT samples or CFC samples were generally below 20%. Despite some inherent limitations (e.g. grain size distribution bias), our findings suggest that PT sampling is a valuable technique to assess reliable spatial and temporal trends of particulate contaminants such as PCBs and Hg within a river monitoring network. Copyright © 2018 Elsevier B.V. All rights reserved.
Grindability and combustion behavior of coal and torrefied biomass blends.
Gil, M V; García, R; Pevida, C; Rubiera, F
2015-09-01
Biomass samples (pine, black poplar and chestnut woodchips) were torrefied to improve their grindability before being combusted in blends with coal. Torrefaction temperatures between 240 and 300 °C and residence times between 11 and 43 min were studied. The grindability of the torrefied biomass, evaluated from the particle size distribution of the ground sample, significantly improved compared to raw biomass. Higher temperatures increased the proportion of smaller-sized particles after grinding. Torrefied chestnut woodchips (280 °C, 22 min) showed the best grinding properties. This sample was blended with coal (5-55 wt.% biomass). The addition of torrefied biomass to coal up to 15 wt.% did not significantly increase the proportion of large-sized particles after grinding. No relevant differences in the burnout value were detected between the coal and coal/torrefied biomass blends due to the high reactivity of the coal. NO and SO2 emissions decreased as the percentage of torrefied biomass in the blend with coal increased. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effects of normalization on quantitative traits in association test
2009-01-01
Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
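Of the transformations compared, the rank-based one is the simplest to state: replace each value by its rank and map through the normal quantile function. A minimal sketch using the common Blom offset (one of several conventions; the paper's exact variant may differ):

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_normal(x, c=3.0 / 8):
    """Rank-based inverse normal transformation with Blom offset c = 3/8."""
    ranks = rankdata(x)                       # average ranks for ties
    return norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

skewed = np.random.default_rng(0).lognormal(size=8)
print(rank_inverse_normal(skewed))            # approximately N(0, 1) scores
```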
Mili, Sami; Ennouri, Rym; Dhib, Amel; Laouar, Houcine; Missaoui, Hechmi; Aleya, Lotfi
2016-06-01
To monitor and assess the state of Tunisian freshwater fisheries, two surveys were undertaken at Ghezala and Lahjar reservoirs. Samples were taken in April and May 2013, a period when fish catchability is high. The selected reservoirs have different surface areas and bathymetries. Using multi-mesh gill nets (EN 14575 amended) designed for sampling fish in lakes, standard fishing methods were applied to estimate species composition, abundance, biomass, and size distribution. Four species were caught in the two reservoirs: barbel, mullet, pike-perch, and roach. Fish abundance varied significantly with sampling site, depth stratum, and the different mesh sizes used. From the reservoir to the tributary, fish biomass distribution was governed by depth, with fish most abundant in the upper water layers. Species size distributions differed significantly between the two reservoirs, with sizes exceeding the length at first maturity. Species composition and abundance were greater in Lahjar reservoir than in Ghezala. Both reservoirs require support actions to improve fish productivity.
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, based on 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
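The algebraic sample size adjustment discussed above is often described as a linear rescaling of the fit chi-square to a nominal sample size. The sketch below assumes that form (the exact RUMM internals are not given in the abstract), and all item statistics are hypothetical.

from scipy.stats import chi2

def adjusted_chi_square(chi_sq, df, n_actual, n_adjusted):
    """Rescale an item-fit chi-square to a nominal sample size (assumed
    linear form of the algebraic adjustment) and return the rescaled
    statistic with its p-value."""
    chi_adj = chi_sq * (n_adjusted / n_actual)
    return chi_adj, chi2.sf(chi_adj, df)

# Downward adjustment: an item flagged at N = 2500 softens at N = 500...
print(adjusted_chi_square(18.2, 8, 2500, 500))  # hypothetical values
# ...whereas upward adjustment of a small sample (N = 100 -> 500) inflates
# a modest statistic, which is how misfit can be falsely signalled.
print(adjusted_chi_square(4.1, 8, 100, 500))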
NASA Astrophysics Data System (ADS)
Marselin, M. Abila; Jaya, N. Victor
2016-04-01
In this paper, pure NiO and Cu-doped NiO nanoparticles are prepared by the co-precipitation method, and their electrical resistivity under high pressure is reported. A Bridgman anvil setup is used to apply pressures up to 8 GPa. These measurements show no phase transformation in the samples up to the highest pressure applied. The samples show a rapid decrease in electrical resistivity up to 5 GPa, beyond which the resistivity remains constant. The electrical resistivity and the transport activation energy of the samples under high pressure up to 8 GPa have been studied in the temperature range of 273-433 K using a diamond anvil cell. The temperature versus electrical resistivity studies reveal that the samples behave like semiconductors. The activation energies of the charge carriers depend on the size of the samples.
Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions
NASA Astrophysics Data System (ADS)
Ryan, A. J.; Christensen, P. R.
2017-12-01
Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid-phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid-phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of an angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured errors smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is achieved by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
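A minimal Monte Carlo sketch of this kind of simulated experiment follows: estimating Type I and Type II error rates of the two-sample t-test at p = 5% for small group sizes. The normal distributions and the one-standard-deviation effect are illustrative assumptions, so the numbers will not reproduce the thresholds reported above.

import numpy as np
from scipy.stats import ttest_ind

def error_rates(n, effect, reps=20000, alpha=0.05, seed=0):
    """Estimate Type I error (no true effect) and Type II error (true
    effect present) of the two-sample t-test with n subjects per group."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.0, (reps, n))
    b0 = rng.normal(0.0, 1.0, (reps, n))      # null: same distribution
    b1 = rng.normal(effect, 1.0, (reps, n))   # alternative: shifted mean
    type1 = np.mean(ttest_ind(a, b0, axis=1).pvalue < alpha)
    type2 = np.mean(ttest_ind(a, b1, axis=1).pvalue >= alpha)
    return type1, type2

# Illustrative weak effect of one standard deviation
for n in (3, 5, 9):
    print(n, error_rates(n, effect=1.0))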
Hall, William L; Ramsey, Charles; Falls, J Harold
2014-01-01
Bulk blending of dry fertilizers is a common practice in the United States and around the world. This practice involves the mixing (either physically or volumetrically) of concentrated, high analysis raw materials. Blending is followed by bagging (for small volume application such as lawn and garden products), loading into truck transports, and spreading. The great majority of bulk blended products are not bagged but handled in bulk and transferred from the blender to a holding hopper. The product is then transferred to a transport vehicle, which may, or may not, also be a spreader. If the primary transport vehicle is not a spreader, then there is another transfer at the user site to a spreader for application. Segregation of materials that are mismatched due to size, density, or shape is an issue when attempting to effectively sample or evenly spread bulk blended products. This study, prepared in coordination with and supported by the Florida Department of Agriculture and Consumer Services and the Florida Fertilizer and Agrochemical Association, looks at the impact of varying particle size as it relates to blending, sampling, and application of bulk blends. The study addresses blends containing high ratios of N-P-K materials and varying (often small) quantities of the micronutrient Zn.
A Model of Thermal Conductivity for Planetary Soils: 1. Theory for Unconsolidated Soils
NASA Technical Reports Server (NTRS)
Piqueux, S.; Christensen, P. R.
2009-01-01
We present a model of heat conduction for mono-sized spherical particulate media under stagnant gases based on the kinetic theory of gases, numerical modeling of Fourier's law of heat conduction, theoretical constraints on the gas thermal conductivity at various Knudsen regimes, and laboratory measurements. Incorporating the effect of the temperature allows for the derivation of the pore-filling gas conductivity and bulk thermal conductivity of samples using additional parameters (pressure, gas composition, grain size, and porosity). The radiative and solid-to-solid conductivities are also accounted for. Our thermal model reproduces the well-established bulk thermal conductivity dependency of a sample with the grain size and pressure and also confirms laboratory measurements finding that higher porosities generally lead to lower conductivities. It predicts the existence of the plateau conductivity at high pressure, where the bulk conductivity does not depend on the grain size. The good agreement between the model predictions and published laboratory measurements under a variety of pressures, temperatures, gas compositions, and grain sizes provides additional confidence in our results. On Venus, Earth, and Titan, the pressure and temperature combinations are too high to observe a soil thermal conductivity dependency on the grain size, but each planet has a unique thermal inertia due to their different surface temperatures. On Mars, the temperature and pressure combination is ideal to observe the soil thermal conductivity dependency on the average grain size. Thermal conductivity models that do not take the temperature and the pore-filling gas composition into account may yield significant errors.
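The pressure and grain-size dependence described above follows from the Knudsen regime of the pore-filling gas. Below is a generic kinetic-theory sketch of that dependence, not the authors' model: the transition-regime form k/(1 + 2*beta*Kn), the beta value, and the CO2 continuum conductivity are illustrative assumptions.

from math import pi, sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, P, d=3.3e-10):
    """Kinetic-theory mean free path of a gas (m); d is the molecular
    collision diameter (default is roughly that of CO2)."""
    return K_B * T / (sqrt(2.0) * pi * d**2 * P)

def pore_gas_conductivity(k_continuum, T, P, pore_size, beta=1.5):
    """Pore-filling gas conductivity reduced by the Knudsen effect: when
    the mean free path is comparable to the pore size, gas conduction
    drops. beta lumps the accommodation terms (assumed value)."""
    kn = mean_free_path(T, P) / pore_size
    return k_continuum / (1.0 + 2.0 * beta * kn)

# Mars-like pressure (~600 Pa) vs Earth pressure: a 100 um pore at 200 K
for P in (600.0, 101325.0):
    print(P, pore_gas_conductivity(0.010, 200.0, P, 100e-6))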
Reversing the Signaled Magnitude Effect in Delayed Matching to Sample: Delay-Specific Remembering?
ERIC Educational Resources Information Center
White, K. Geoffrey; Brown, Glenn S.
2011-01-01
Pigeons performed a delayed matching-to-sample task in which large or small reinforcers for correct remembering were signaled during the retention interval. Accuracy was low when small reinforcers were signaled, and high when large reinforcers were signaled (the signaled magnitude effect). When the reinforcer-size cue was switched from small to…
Assessing Live Fuel Moisture For Fire Management Applications
David R. Weise; Roberta A. Hartford; Larry Mahaffey
1998-01-01
The variation associated with sampling live fuel moisture was examined for several shrub and canopy fuels in southern California, Arizona, and Colorado. Ninety-five % confidence intervals ranged from 5 to %. Estimated sample sizes varied greatly. The value of knowing the live fuel moisture content in fire decision making is unknown. If the fuel moisture is highly...
Surface acoustic admittance of highly porous open-cell, elastic foams
NASA Technical Reports Server (NTRS)
Lambert, R. F.
1983-01-01
This work presents a comprehensive study of the surface acoustic admittance properties of graded sizes of open-cell foams that are highly porous and elastic. The intrinsic admittance as well as properties of samples of finite depth were predicted and then measured for sound at normal incidence over a frequency range extending from about 35-3500 Hz. The agreement between theory and experiment for a range of mean pore size and volume porosity is excellent. The implications of fibrous structure for the admittance of open-cell foams are quite evident from the results.
ERIC Educational Resources Information Center
Moody, Judith D.; Gifford, Vernon D.
This study investigated the grouping effect on student achievement in a chemistry laboratory when homogeneous and heterogeneous formal reasoning ability, high and low levels of formal reasoning ability, group sizes of two and four, and homogeneous and heterogeneous gender were used for grouping factors. The sample consisted of all eight intact…
Size-resolved atmospheric particulate polysaccharides in the high summer Arctic
NASA Astrophysics Data System (ADS)
Leck, C.; Gao, Q.; Mashayekhy Rad, F.; Nilsson, U.
2013-12-01
Size-resolved aerosol samples for subsequent quantitative determination of polymer sugars (polysaccharides) after hydrolysis to their subunit monomers (monosaccharides) were collected in surface air over the central Arctic Ocean during the biologically most active summer period. The analysis was carried out by novel use of liquid chromatography coupled with highly selective and sensitive tandem mass spectrometry. Polysaccharides were detected in particle sizes ranging from 0.035 to 10 μm in diameter with distinct features of heteropolysaccharides, enriched in xylose, glucose + mannose as well as a substantial fraction of deoxysugars. Polysaccharides, containing deoxysugar monomers, showed a bimodal size structure with about 70% of their mass found in the Aitken mode over the pack ice area. Pentose (xylose) and hexose (glucose + mannose) had a weaker bimodal character and were largely found with super-micrometer sizes and in addition with a minor sub-micrometer fraction. The concentration of total hydrolysable neutral sugars (THNS) in the samples collected varied over two orders of magnitude (1 to 160 pmol m^-3) in the super-micrometer size fraction and to a somewhat lesser extent in sub-micrometer particles (4 to 140 pmol m^-3). Lowest THNS concentrations were observed in air masses that had spent more than five days over the pack ice. Within the pack ice area, about 53% of the mass of hydrolyzed polysaccharides was detected in sub-micrometer particles. The relative abundance of sub-micrometer hydrolyzed polysaccharides could be related to the length of time that the air mass spent over pack ice, with the highest fraction (> 90%) observed for > 7 days of advection. The aerosol samples collected onboard ship showed similar monosaccharide composition, compared to particles generated experimentally in situ at the expedition's open lead site. This supports the existence of a primary particle source of polysaccharide containing polymer gels from open leads by bubble bursting at the air-sea interface. We speculate that the occurrence of atmospheric surface-active polymer gels with their hydrophilic and hydrophobic segments, promoting cloud droplet activation, could play a potential role as cloud condensation nuclei in the pristine high Arctic.
NASA Astrophysics Data System (ADS)
Drazin, John Walter
Calcia- and yttria-doped zirconia powders and samples are essential systems in academia and industry due to their bulk polymorphism. Pure zirconia manifests as baddeleyite, a monoclinic structured mineral with 7-fold coordination. This bulk form of zirconia has little application due to its asymmetry. Therefore, dopants are added to the grain in order to induce phase transitions to either a tetragonal or cubic polymorph, with the incorporation of oxygen vacancies due to the dopant charge mismatch with the zirconia matrix. The cubic polymorph has cubic symmetry, so these samples see applications in solid oxide fuel cells (SOFCs) due to their high oxygen vacancy concentrations and high ionic mobility at elevated temperatures. The tetragonal polymorph has slight asymmetry in the c-axis compared to the a-axis, giving tetragonal samples increased fracture toughness due to an impact-induced phase transformation to a cubic structure. These ceramic systems have been extensively studied in academia and used in various industries, but with the advent of nanotechnology one may ask whether smaller-grained samples show improved characteristics similar to their bulk-grained counterparts. However, there is a lack of data and knowledge of these systems in the nano-grained region, which provides an opportunity to advance the theory of these systems. The polymorphism seen in bulk-grained samples is also seen in the nano-grained samples, but at slightly different dopant concentrations. The current theory hypothesizes that a surface excess, gamma (J/m2), can be added to the Gibbs free energy equation to account for the additional free energy of the nano-grain atoms. However, these surface energies have been difficult to measure, and thermodynamic data on these nano-grained samples have therefore been sparse. In this work, I use a well-established water adsorption microcalorimetry apparatus to measure water coverage isotherms while simultaneously collecting the energetic contribution of the adsorbing water vapor. With this data and apparatus, I derived a second-order differential equation that relates the surface energy to the measured quantities, and collected surface energies for over 35 specimens in the calcia-zirconia and yttria-zirconia systems for the first time. From the results, it was found that the monoclinic polymorph had the largest surface energy, in the range of 1.9-2.1 J/m2, while the tetragonal surface energies were roughly 1.4-1.6 J/m2, the cubic surface energies roughly 0.8-1.0 J/m2, and the amorphous surface energies the smallest at roughly 0.7-0.8 J/m2. With the measured surface energy data, collected for the first time, we can create a nano-grain phase diagram, similar to a bulk phase diagram, that shows the stable polymorph as a function of dopant concentration and grain size, using the bulk enthalpy data collected from high-temperature oxide melt drop solution calorimetry. The phase diagrams show that pure zirconia will transform from the monoclinic polymorph into the tetragonal and cubic polymorphs at 7 and 5 nm, respectively, which confirms the experimental observations. The results are powerful predictive tools, successfully applied in the nCZ and nYZ systems to a high degree of accuracy, and add a new development to conventional bulk phase diagrams.
These diagrams should be the basis for nanotechnological efforts in nCZ- and nYZ-based systems, and suggest similar efforts are needed in other nano systems to pursue an in-depth understanding and optimization of nanomaterials. After working on the theoretical aspects of phase stability, the focus of the research shifts to producing dense samples to measure observable quantities such as oxygen conduction and mechanical hardness. However, producing such samples with nanocrystalline grain sizes has been challenging, as conventional sintering requires high temperatures that induce grain growth and thus limit the minimum achievable grain size. Therefore, in this work, we developed a Pressure Assisted Rapid Sintering technique (PARS) that uses high currents to Joule-heat the samples to moderate temperatures (650-900 °C) for short durations (5-10 min) under large compressive pressures (600-2200 MPa). With this new technique, extremely small grain sizes (sub-10 nm) can be achieved at high relative densities (>98%). Using the PARS setup, multiple 3nYZ samples were produced with grain sizes down to 9 nm and as large as 5 μm. The mechanical hardness of these samples was tested using a Vickers microhardness indentation apparatus. The hardness of the "bulk" grains was roughly 12.9 GPa, while the smallest-grain-size pellet had a hardness approaching 15 GPa. All of the 3nYZ pellets showed higher hardness with diminishing grain size, thereby extending the Hall-Petch relationship down to 9 nm in the 3YZ system, an unprecedented result. After producing the extreme nano-grained samples (15nCZ and 17.5nYSZ), they were also tested for inter- and intragranular oxygen ion conduction. The results showed that the smaller-grained samples have increased levels of oxygen ion conduction from both inter- and intragranular diffusion regardless of the operating temperature. In addition, the activation energies for both modes of oxygen ion diffusion were lowered in the nCZ system, while a plateau was seen in the nYZ system. A new theoretical formulation was proposed to explain the trends, with two modifiable parameters to exploit: activation energy and grain size. With decreasing grain size, the number of interconnected grain boundaries increases dramatically, allowing more efficient travel around and through the grains. The activation energy can be lowered by modifying the chemistry of the grain boundary, specifically by choosing larger dopants with a positive enthalpy of segregation so that the concentration of dopants on the grain boundary increases, spacing the unattached bonds further apart and reducing their number. Therefore, an engineered nano-grained SOFC could operate at a lower temperature without altering the output power density, significantly improving safety and economics.
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
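A minimal sketch of how the two metrics above can be computed from identified cone coordinates: Voronoi tiles with six neighbours indicate hexagonal packing, and only cones inside a buffer zone are counted to limit the boundary effect. This is a generic scipy-based illustration with a synthetic jittered lattice, not the authors' processing pipeline.

import numpy as np
from scipy.spatial import Voronoi

def voronoi_hex_fraction(points, window, buffer):
    """Fraction of Voronoi tiles with hexagonal packing (6 sides), counting
    only points inside a buffer zone to limit the boundary effect."""
    vor = Voronoi(points)
    lo, hi = buffer, window - buffer
    hexes = total = 0
    for p, region_idx in enumerate(vor.point_region):
        x, y = points[p]
        if not (lo <= x <= hi and lo <= y <= hi):
            continue                        # skip the boundary zone
        region = vor.regions[region_idx]
        if not region or -1 in region:
            continue                        # open cell: touches the hull
        total += 1
        hexes += (len(region) == 6)
    return hexes / max(total, 1)

# Jittered lattice of synthetic 'cones' in a 160x160 um sampling window
rng = np.random.default_rng(2)
grid = np.arange(5.0, 160.0, 8.0)
pts = np.stack(np.meshgrid(grid, grid), -1).reshape(-1, 2)
pts += rng.normal(0, 0.8, pts.shape)
print(voronoi_hex_fraction(pts, window=160.0, buffer=16.0))
print(len(pts) / 0.160**2)  # crude density over the full window, cones/mm^2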
Modernity, Traditionality, and Junior High School Attainment in Turkey
ERIC Educational Resources Information Center
Aytac, Isik A.; Rankin, Bruce H.
2004-01-01
This study focuses on the impact of modernity and traditionality on junior high school attainment of children in Turkey. Using the nationwide Turkish Family Structure Survey, the primary objectives are to determine whether junior high school attainment varies by region, city size, and family background. Based on a sample of 2,025 16-year-old…
A Comparison of Online and Classroom-Based Developmental Math Courses
ERIC Educational Resources Information Center
Eggert, Jeanette Gibeson
2009-01-01
Effectiveness was operationalized as a combination of successful developmental course completion, high student satisfaction at the end of the course, and high academic achievement in a subsequent college-level math course. Instructional methodologies were similar to the extent that the instructional delivery systems allowed. With a sample size of…
Catholic High Schools and Their Finances, 1980.
ERIC Educational Resources Information Center
Bredeweg, Frank H.
The information contained in this report was drawn from data provided by a national sample of 200 Catholic high schools. The schools were selected to reflect types (private, Catholic, diocesan, and parish schools), enrollment sizes, and geographic location. The report addresses these areas. First, information is provided to point out the financial…
NASA Technical Reports Server (NTRS)
Eglinton, G.; Gowar, A. P.; Jull, A. J. T.; Pillinger, C. T.; Agrell, S. O.; Agrell, J. E.; Long, J. V. P.; Bowie, S. H. U.; Simpson, P. R.; Beckinsale, R. D.
1977-01-01
Samples of Luna 16 and 20 have been separated according to size, visual appearance, density, and magnetic susceptibility. Selected aliquots were examined in eight British laboratories. The studies included mineralogy and petrology, selenochronology, magnetic characteristics, Mossbauer spectroscopy, oxygen isotope ratio determinations, cosmic ray track and thermoluminescence investigations, and carbon chemistry measurements. Luna 16 and 20 are typically mare and highland soils, comparing well with their Apollo counterparts, Apollo 11 and 16, respectively. Both soils are very mature (high free iron, carbide, and methane and cosmogenic Ar), while Luna 16, but not Luna 20, is characterized by a high content of glassy materials. An aliquot of anorthosite fragments, handpicked from Luna 20, had a gas retention age of about 4.3 plus or minus 0.1 Gy.
Magnesium and Silicon Isotopes in HASP Glasses from Apollo 16 Lunar Soil 61241
NASA Technical Reports Server (NTRS)
Herzog, G. F.; Delaney, J. S.; Lindsay, F.; Alexander, C. M. O'D; Chakrabarti, R.; Jacobsen, S. B.; Whattam, S.; Korotev, R.; Zeigler, R. A.
2012-01-01
The high-Al (>28 wt %), silica-poor (<45 wt %) (HASP) feldspathic glasses of Apollo 16 are widely regarded as the evaporative residues of impacts in the lunar regolith [1-3]. By virtue of their small size, apparent homogeneity, and high inferred formation temperatures, the HASP glasses appear to be good samples in which to study fractionation processes that may accompany open system evaporation. Calculations suggest that HASP glasses with present-day Al2O3 concentrations of up to 40 wt% may have lost 19 wt% of their original masses, calculated as the oxides of iron and silicon, via evaporation [4]. We report Mg and Si isotope abundances in 10 HASP glasses and 2 impact-glass spherules from a 64-105 μm grain-size fraction taken from Apollo 16 soil sample 61241.
Landschoff, Jannes; Du Plessis, Anton; Griffiths, Charles L
2015-01-01
Brooding brittle stars have a special mode of reproduction whereby they retain their eggs and juveniles inside respiratory body sacs called bursae. In the past, studying this phenomenon required disturbance of the sample by dissecting the adult. This caused irreversible damage and made the sample unsuitable for future studies. Micro X-ray computed tomography (μCT) is a promising technique, not only to visualise juveniles inside the bursae, but also to keep the sample intact and make the dataset of the scan available for future reference. Seven μCT scans of five freshly fixed (70 % ethanol) individuals, representing three differently sized brittle star species, provided adequate image quality to determine the numbers, sizes and postures of internally brooded young, as well as anatomy and morphology of adults. No staining agents were necessary to achieve high-resolution, high-contrast images, which permitted visualisations of both calcified and soft tissue. The raw data (projection and reconstruction images) are publicly available for download from GigaDB. Brittle stars of all sizes are suitable candidates for μCT imaging. This explicitly adds a new technique to the suite of tools available for studying the development of internally brooded young. The purpose of applying the technique was to visualise juveniles inside the adult, but because of the universally good quality of the dataset, the images can also be used for anatomical or comparative morphology-related studies of adult structures.
Geometrical characteristics of sandstone with different sample sizes
NASA Astrophysics Data System (ADS)
Cheon, D. S.; Takahashi, M., , Dr
2017-12-01
In many rock engineering projects, such as CO2 underground storage and engineered geothermal systems, it is important to understand fluid flow behavior under deep geological conditions. This fluid flow is generally affected by the geometrical characteristics of the rock, especially in porous media. Furthermore, the physical properties of rock may depend on the void space within it. Total porosity and pore size distribution can be measured by Mercury Intrusion Porosimetry, and other geometrical and spatial information about pores can be obtained through micro-focus X-ray CT. Using micro-focus X-ray CT, we obtained extracted void-space and transparent images from the original CT voxel images for different sample sizes (1 mm, 2 mm, and 3 mm cubes). The test samples are Berea sandstone and Otway sandstone. The former is a well-known sandstone used here as a standard for comparison with the Otway sandstone. The Otway sandstone was obtained from the CO2CRC Otway pilot site for the CO2 geosequestration project. From the X-ray scans and the ExFACT software, we obtained information including effective pore radii, coordination number, tortuosity, and effective throat/pore radius ratio. The analysis of this geometrical information showed that, for both Berea and Otway sandstone, there is little difference between the different sample sizes; the total coordination number indicates high porosity, and the tortuosity of the Berea sandstone is higher than that of the Otway sandstone. In the future, this information will be used to estimate the permeability of the samples.
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high ratios of the coefficient of variation to density. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
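For reference, the Schnabel estimator used in these simulations pools marked-animal counts over capture occasions. A minimal sketch with the common Chapman-style +1 correction follows; the occasion counts are hypothetical.

def schnabel_estimate(catches, recaptures):
    """Schnabel mark-recapture estimate of population size, with the
    Chapman-style +1 correction in the denominator.
    catches[t]: animals caught on occasion t (all marked before release)
    recaptures[t]: how many of those were already marked"""
    marked = 0          # animals carrying marks before occasion t
    num = den = 0.0
    for c, r in zip(catches, recaptures):
        num += c * marked
        den += r
        marked += c - r  # newly marked animals join the marked pool
    return num / (den + 1.0)

# Hypothetical 4-occasion survey on a single plot
print(schnabel_estimate(catches=[12, 15, 10, 14], recaptures=[0, 3, 4, 6]))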
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency in the literature under some conditions; under other conditions, our measure might be less than the literature measure, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
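The cluster-level weighting below is a standard way to operationalize this idea: each cluster of size m contributes m / (1 + (m - 1)*ICC) effective observations, the noncentrality parameter is monotone in that effective sample size, and the mean cluster size is inflated, with the number of clusters fixed, until the design matches an equal-cluster-size trial. This is a sketch consistent with the description above, not necessarily the authors' exact formulas; the cluster sizes and ICC are hypothetical.

import numpy as np

def effective_n(sizes, icc):
    """Effective sample size of one arm: each cluster of size m contributes
    m / (1 + (m - 1) * icc) independent-observation equivalents."""
    m = np.asarray(sizes, dtype=float)
    return float(np.sum(m / (1.0 + (m - 1.0) * icc)))

def relative_efficiency(sizes, icc):
    """Efficiency of an unequal-cluster-size design relative to an
    equal-cluster-size design with the same number of clusters and the
    same mean cluster size (at most 1)."""
    k, mbar = len(sizes), float(np.mean(sizes))
    return effective_n(sizes, icc) / (k * mbar / (1.0 + (mbar - 1.0) * icc))

def required_mean_cluster_size(sizes, icc, mbar_equal, step=0.25, cap=10000):
    """Inflate the mean cluster size (keeping the number of clusters and
    the pattern of size variation fixed) until the design matches the
    effective sample size of an equal-size trial with mean mbar_equal."""
    k = len(sizes)
    target = k * mbar_equal / (1.0 + (mbar_equal - 1.0) * icc)
    pattern = np.asarray(sizes, dtype=float) / np.mean(sizes)
    mbar = mbar_equal
    while effective_n(pattern * mbar, icc) < target:
        mbar += step
        if mbar > cap:
            raise ValueError("target unreachable: add clusters instead")
    return mbar

sizes = [8, 12, 20, 35, 75]   # hypothetical, highly variable cluster sizes
print(relative_efficiency(sizes, icc=0.05))
print(required_mean_cluster_size(sizes, icc=0.05, mbar_equal=30))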
NASA Astrophysics Data System (ADS)
Asefaw Berhe, Asmeret; Kaiser, Michael; Ghezzehei, Teamrat; Myrold, David; Kleber, Markus
2013-04-01
The effectiveness of charcoal and calcium carbonate applications to improve soil conditions has been well documented. However, their influence on the formation of silt-sized aggregates and on the amount and protection of associated organic matter (OM) against microbial decomposition is still largely unknown. For sustainable management of agricultural soils, silt-sized aggregates (2-53 µm) are particularly important because they store up to 60% of soil organic carbon, with mean residence times between 70 and 400 years. The objectives are i) to analyze the ability of CaCO3 and/or charcoal application to increase the amount of silt-sized aggregates and associated OM, ii) to vary soil mineral conditions to establish relevant boundary conditions for amendment-induced aggregation processes, and iii) to determine how amendment-induced changes in the formation of silt-sized aggregates relate to microbial decomposition of OM. We set up artificial high-reactivity (HR, clay: 40%, sand: 57%, OM: 3%) and low-reactivity soils (LR, clay: 10%, sand: 89%, OM: 1%) and mixed them with charcoal (CC, 1%) and/or calcium carbonate (Ca, 0.2%). The samples were adjusted to a water potential of 0.3 bar, and subsamples were incubated with microbial inoculum (MO). After a 16-week aggregation experiment, size fractions were separated by wet-sieving and sedimentation. Since we did not use mineral compounds in the artificial mixtures within the size range of 2 to 53 µm, we consider material recovered in this fraction as silt-sized aggregates, which was confirmed by SEM analyses. For the LR mixtures, we detected increasing N concentrations within the 2-53 µm fractions of the charcoal-amended samples (CC, CC+Ca, and CC+Ca+MO) as compared to the Control sample, with the strongest effect for the CC+Ca+MO sample. This indicates an association of N-containing, microbially derived OM with silt-sized aggregates. For the charcoal-amended LR and HR mixtures, the C concentrations of the 2-53 µm fractions are larger than those of the respective fractions of the Control samples, but the effect is several times stronger for the LR mixtures. The C concentrations of the 2-53 µm fractions relative to the total C amount of the LR and HR mixtures are between 30 and 50%. The charcoal-amended samples show generally larger relative C amounts associated with the 2-53 µm fractions than the Control samples. Benefits for aggregate formation and OM storage were larger for the sand-rich (LR) than for the clay-rich (HR) soil. The data obtained are similar to respective data for natural soils. Consequently, the suggested microcosm experiments are suitable for analyzing mechanisms of soil aggregation processes.
Fazey, Francesca M C; Ryan, Peter G
2016-03-01
Recent estimates suggest that roughly 100 times more plastic litter enters the sea than is found floating at the sea surface, despite the buoyancy and durability of many plastic polymers. Biofouling by marine biota is one possible mechanism responsible for this discrepancy. Microplastics (<5 mm in diameter) are more scarce than larger size classes, which makes sense because fouling is a function of surface area whereas buoyancy is a function of volume; the smaller an object, the greater its relative surface area. We tested whether plastic items with high surface area to volume ratios sank more rapidly by submerging 15 different sizes of polyethylene samples in False Bay, South Africa, for 12 weeks to determine the time required for samples to sink. All samples became sufficiently fouled to sink within the study period, but small samples lost buoyancy much faster than larger ones. There was a direct relationship between sample volume (buoyancy) and the time to attain a 50% probability of sinking, which ranged from 17 to 66 days of exposure. Our results provide the first estimates of the longevity of different sizes of plastic debris at the ocean surface. Further research is required to determine how fouling rates differ on free floating debris in different regions and in different types of marine environments. Such estimates could be used to improve model predictions of the distribution and abundance of floating plastic debris globally. Copyright © 2016 Elsevier Ltd. All rights reserved.
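The surface-area-to-volume argument above is easy to make concrete: for a rectangular sample, halving every linear dimension doubles the SA:V ratio, so fouling load (proportional to area) grows relative to buoyancy (proportional to volume). The dimensions below are hypothetical, not the sizes used in the study.

def surface_to_volume(length, width, thickness):
    """Surface area to volume ratio (mm^-1) of a rectangular plastic
    sample; fouling scales with area, buoyancy with volume."""
    sa = 2.0 * (length * width + length * thickness + width * thickness)
    return sa / (length * width * thickness)

# Halving each linear dimension doubles SA:V, so smaller fragments carry
# proportionally more fouling per unit of buoyancy.
for scale in (1.0, 0.5, 0.25):
    print(scale, surface_to_volume(50.0 * scale, 50.0 * scale, 4.0 * scale))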
Genealogical Properties of Subsamples in Highly Fecund Populations
NASA Astrophysics Data System (ADS)
Eldon, Bjarki; Freund, Fabian
2018-03-01
We consider some genealogical properties of nested samples. The complete sample is assumed to have been drawn from a natural population characterised by high fecundity and sweepstakes reproduction (abbreviated HFSR). The random gene genealogies of the samples are, due to our assumption of HFSR, modelled by coalescent processes which admit multiple mergers of ancestral lineages looking back in time. Among the genealogical properties we consider are the probability that the most recent common ancestor is shared between the complete sample and the subsample nested within the complete sample; we also compare the lengths of 'internal' branches of nested genealogies between different coalescent processes. The results indicate how 'informative' a subsample is about the properties of the larger complete sample, how much information is gained by increasing the sample size, and how the 'informativeness' of the subsample varies between different coalescent processes.
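A minimal simulation sketch of the first property above, under the Kingman coalescent as a baseline; the multiple-merger (HFSR) processes of interest would replace the pairwise merger below with larger mergers. The classical closed-form Kingman probability is printed for comparison.

import random

def shared_mrca_prob(n, m, reps=20000, seed=3):
    """Monte Carlo probability that a nested subsample of size m shares its
    most recent common ancestor with the full sample of size n, under the
    Kingman coalescent (binary mergers, all pairs equally likely)."""
    rng = random.Random(seed)
    sub = set(range(m))                      # leaves 0..m-1 are the subsample
    hits = 0
    for _ in range(reps):
        lineages = [{i} for i in range(n)]   # each lineage = set of leaves below it
        while len(lineages) > 1:
            i, j = sorted(rng.sample(range(len(lineages)), 2))
            merged = lineages.pop(j) | lineages.pop(i)
            if sub <= merged:                # first lineage containing the subsample
                hits += (len(merged) == n)   # is it also the sample's MRCA?
                break
            lineages.append(merged)
    return hits / reps

n, m = 20, 5
print(shared_mrca_prob(n, m))
print((m - 1) * (n + 1) / ((m + 1) * (n - 1)))  # classical Kingman value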
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Ashutosh, E-mail: ashutosh.pph13@iitp.ac.in; Sharma, Himanshu; Tomy, C. V.
2016-05-06
La0.7Sr0.3MnO3 polycrystalline samples have been prepared using different synthesis routes. X-ray diffraction (XRD) confirms that the samples are single phase with the R-3c space group. The surface morphology and particle size were observed using field emission scanning electron microscopy (FESEM). Magnetic measurements show that the magnetization of the materials is affected by small crystallite size, which disrupts spin ordering through strain at grain boundaries, leading to reduced magnetization as well as high coercivity in the material.
Study on optimum length of raw material in stainless steel high-lock nuts forging
NASA Astrophysics Data System (ADS)
Cheng, Meiwen; Liu, Fenglei; Zhao, Qingyun; Wang, Lidong
2018-04-01
Taking 302 stainless steel (1Cr18Ni9) high-lock nuts as the research object, we adjusted the length of the raw material, used DEFORM software to simulate the isothermal forging process at each station, and conducted corresponding field tests to study the effect of raw material size on the forming performance of stainless steel high-lock nuts. The tests show that the samples at each raw material length agree closely with the DEFORM simulation results. When the raw material length is 10 mm, the dimensions of the parts meet the design requirements.
Far Field Modeling Methods For Characterizing Surface Detonations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A.
2015-10-08
Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations indicating that, in many terrains and atmospheric conditions, the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
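The variance partitioning reported above can be turned into a small planning tool: in studies of this kind, each variance component is commonly assumed inversely proportional to its own sample size (sample mass, subsample mass, number of analyses). The sketch below scales the components reported at 10 ng/g (174.40, 0.74, and 0.27 for a 10 kg sample, 50 g subsample, and one HPLC analysis) to other test procedures; the inverse-proportionality scaling is an assumption about the method, not stated in the abstract.

def total_test_variance(sample_kg, subsample_g, n_analyses,
                        vs_ref=174.40, vp_ref=0.74, va_ref=0.27,
                        ref=(10.0, 50.0, 1)):
    """Scale the reference sampling, sample-preparation, and analytical
    variances to another test procedure, assuming each component is
    inversely proportional to its sample size / subsample size / number
    of analyses. Returns total variance and each component's share (%)."""
    vs = vs_ref * ref[0] / sample_kg
    vp = vp_ref * ref[1] / subsample_g
    va = va_ref * ref[2] / n_analyses
    total = vs + vp + va
    shares = tuple(round(100.0 * v / total, 1) for v in (vs, vp, va))
    return total, shares

# Reference procedure reproduces the ~99.4 / 0.4 / 0.2% split reported above;
# doubling the sample size halves the dominant sampling variance.
print(total_test_variance(10, 50, 1))
print(total_test_variance(20, 100, 2))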
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes in a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine whether these organic constituents have a greater affinity for silt- and clay-sized particles than for sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample, which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De Salvo, L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
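The sample-size saving quoted above is easy to reproduce. The following is a minimal re-implementation of the zero-acceptance-number hypergeometric logic described (it is not the HYPERSAMP code, and it omits the program's handling of non-whole-number nonconformances); the function name and arguments are illustrative.

    from math import comb

    def min_sample_size(lot_size: int, defectives: int, confidence: float) -> int:
        """Smallest n such that a sample containing zero defectives rules out
        `defectives` nonconforming units at the given confidence, i.e.
        P(X = 0 | lot_size, defectives, n) <= 1 - confidence (Hypergeometric)."""
        for n in range(1, lot_size + 1):
            p_zero = comb(lot_size - defectives, n) / comb(lot_size, n)
            if p_zero <= 1 - confidence:
                return n
        return lot_size

    # The abstract's example: lot of 400, 1% nonconforming (4 units), 99% confidence.
    print(min_sample_size(400, 4, 0.99))  # 273, versus 400 under the Binomial plan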
Ji, Yuan; Wang, Sue-Jane
2013-01-01
The 3 + 3 design is the most common choice among clinicians for phase I dose-escalation oncology trials. In recent reviews, more than 95% of phase I trials have been based on the 3 + 3 design. Given that it is intuitive and its implementation does not require a computer program, clinicians can conduct 3 + 3 dose escalations in practice with virtually no logistic cost, and trial protocols based on the 3 + 3 design pass institutional review board and biostatistics reviews quickly. However, the performance of the 3 + 3 design has rarely been compared with model-based designs in simulation studies with matched sample sizes. In the vast majority of statistical literature, the 3 + 3 design has been shown to be inferior in identifying true maximum-tolerated doses (MTDs), although the sample size required by the 3 + 3 design is often orders-of-magnitude smaller than model-based designs. In this article, through comparative simulation studies with matched sample sizes, we demonstrate that the 3 + 3 design has higher risks of exposing patients to toxic doses above the MTD than the modified toxicity probability interval (mTPI) design, a newly developed adaptive method. In addition, compared with the mTPI design, the 3 + 3 design does not yield higher probabilities in identifying the correct MTD, even when the sample size is matched. Given that the mTPI design is equally transparent, costless to implement with free software, and more flexible in practical situations, we highly encourage its adoption in early dose-escalation studies whenever the 3 + 3 design is also considered. We provide free software to allow direct comparisons of the 3 + 3 design with other model-based designs in simulation studies with matched sample sizes. PMID:23569307
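The kind of matched-sample-size comparison argued for above is straightforward to set up in simulation. Below is a minimal sketch of the classic 3 + 3 escalation rule run against a hypothetical dose-toxicity curve; it is not the mTPI design or the authors' software, the stopping rule is the common textbook simplification, and all probabilities are invented.

    import random

    def three_plus_three(dlt_probs, rng):
        """One simulated 3+3 trial over hypothetical per-dose DLT probabilities.
        Returns (index of declared MTD, -1 if no dose is tolerable; patients treated)."""
        n_treated, dose = 0, 0
        while dose < len(dlt_probs):
            dlts = sum(rng.random() < dlt_probs[dose] for _ in range(3))
            n_treated += 3
            if dlts == 1:  # expand the cohort to 6 at the same dose
                dlts += sum(rng.random() < dlt_probs[dose] for _ in range(3))
                n_treated += 3
            if dlts >= 2:  # too toxic: declare MTD one level below
                return dose - 1, n_treated
            dose += 1      # 0/3 or <=1/6 DLTs: escalate
        return len(dlt_probs) - 1, n_treated  # top dose reached without stopping

    rng = random.Random(42)
    truth = [0.05, 0.15, 0.30, 0.50]  # hypothetical; dose 2 is nearest a 0.30 target
    runs = [three_plus_three(truth, rng) for _ in range(10_000)]
    print("P(select dose 2):", sum(mtd == 2 for mtd, _ in runs) / len(runs))
    print("mean sample size:", sum(n for _, n in runs) / len(runs))

Holding the simulated truth fixed and swapping in another design at the mean sample size printed here is exactly the matched-sample-size comparison the abstract advocates.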
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
He, Man; Huang, Lijin; Zhao, Bingshan; Chen, Beibei; Hu, Bin
2017-06-22
For the determination of trace elements and their species in various real samples by inductively coupled plasma mass spectrometry (ICP-MS), solid phase extraction (SPE) is a commonly used sample pretreatment technique to remove complex matrices, pre-concentrate target analytes and make the samples suitable for subsequent sample introduction and measurement. The sensitivity, selectivity/anti-interference ability, sample throughput and application potential of SPE-ICP-MS methodology depend greatly on the SPE adsorbents. This article presents a general overview of the use of advanced functional materials (AFMs) in SPE for ICP-MS determination of trace elements and their species over the past decade. Herein, AFMs refer to materials featuring high adsorption capacity, good selectivity, fast adsorption/desorption dynamics and the ability to satisfy special requirements in real sample analysis, including nanometer-sized materials, porous materials, ion imprinting polymers, restricted access materials and magnetic materials. Carbon/silica/metal/metal oxide nanometer-sized adsorbents with high surface area and plenty of adsorption sites exhibit high adsorption capacity, and porous adsorbents provide more adsorption sites and faster adsorption dynamics. The selectivity of the materials for target elements/species can be improved by physical/chemical modification, ion imprinting and restricted access techniques. Magnetic adsorbents in conventional batch operation offer a unique magnetic response and high surface area-to-volume ratio, which provide very easy phase separation and greater extraction capacity and efficiency over conventional adsorbents, and chip-based magnetic SPE provides a versatile platform for special requirements (e.g., cell analysis). The performance of these adsorbents for the determination of trace elements and their species in different matrices by ICP-MS is discussed in detail, along with perspectives and possible challenges in future development. Copyright © 2017 Elsevier B.V. All rights reserved.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed by Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes ranged from 1054 to 5083 and 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, despite their different resistance levels, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
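For readers unfamiliar with fixed-precision sequential plans, the two quantities reported above (optimum sample size and stop line) follow directly from Taylor's power law, s^2 = a*m^b. The sketch below uses hypothetical coefficients a and b, not the values estimated in this study.

    def optimum_sample_size(m, a, b, D):
        """Sample size needed to estimate a mean density m with fixed
        precision D (= SE/mean) under Taylor's power law s^2 = a * m**b:
        n = a * m**(b - 2) / D**2 (Green's model)."""
        return a * m ** (b - 2) / D ** 2

    def green_stop_line(n, a, b, D):
        """Green's (1970) stop line: the cumulative count T_n after n samples
        at which precision D is reached,
        T_n = (D**2 / a)**(1 / (b - 2)) * n**((b - 1) / (b - 2))."""
        return (D ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

    a, b = 2.0, 1.1  # hypothetical Taylor coefficients (b near 1: random dispersion)
    for D in (0.25, 0.10):
        print(D,
              round(optimum_sample_size(m=0.5, a=a, b=b, D=D)),
              round(green_stop_line(n=200, a=a, b=b, D=D), 1))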
López, Iago; Alvarez, César; Gil, José L; Revilla, José A
2012-11-30
Data on the 95th and 90th percentiles of bacteriological quality indicators are used to classify bathing waters in Europe, according to the requirements of Directive 2006/7/EC. However, percentile values and consequently, classification of bathing waters depend both on sampling effort and sample-size, which may undermine an appropriate assessment of bathing water classification. To analyse the influence of sampling effort and sample size on water classification, a bootstrap approach was applied to 55 bacteriological quality datasets of several beaches in the Balearic Islands (Spain). Our results show that the probability of failing the regulatory standards of the Directive is high when sample size is low, due to a higher variability in percentile values. In this way, 49% of the bathing waters reaching an "Excellent" classification (95th percentile of Escherichia coli under 250 cfu/100 ml) can fail the "Excellent" regulatory standard due to sampling strategy, when 23 samples per season are considered. This percentage increases to 81% when 4 samples per season are considered. "Good" regulatory standards can also be failed in bathing waters with an "Excellent" classification as a result of these sampling strategies. The variability in percentile values may affect bathing water classification and is critical for the appropriate design and implementation of bathing water Quality Monitoring and Assessment Programs. Hence, variability of percentile values should be taken into account by authorities if an adequate management of these areas is to be achieved. Copyright © 2012 Elsevier Ltd. All rights reserved.
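The bootstrap logic described can be sketched in a few lines. The data below are synthetic log-normal counts standing in for a season of E. coli measurements at a genuinely 'Excellent' site, and the empirical-percentile rule is a simplification of the Directive's exact percentile calculation.

    import numpy as np

    rng = np.random.default_rng(1)

    def p_fail_excellent(counts, n_samples, threshold=250.0, n_boot=10_000):
        """Probability that a season of n_samples measurements, resampled from
        `counts`, yields a 95th percentile above the 'Excellent' E. coli
        standard of Directive 2006/7/EC (250 cfu/100 ml)."""
        boot = rng.choice(counts, size=(n_boot, n_samples), replace=True)
        return np.mean(np.percentile(boot, 95, axis=1) > threshold)

    # Synthetic counts whose true 95th percentile (~145 cfu/100 ml) is 'Excellent'.
    counts = rng.lognormal(mean=3.0, sigma=1.2, size=500)
    for n in (4, 23):
        print(n, p_fail_excellent(counts, n))  # failure risk shrinks as n grows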
Herath, Samantha; Yap, Elaine
2018-02-01
In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method in comparison to CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method by adding cryobiopsy via the R-EBUS Guide Sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of cryobiopsy samples was twice that of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in the diagnostic yield and a reduction in the need for repeat procedures, without hindering the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure.
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allelic richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
2011-01-01
Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
Preparation of improved catalytic materials for water purification
NASA Astrophysics Data System (ADS)
Cherkezova-Zheleva, Z.; Paneva, D.; Tsvetkov, M.; Kunev, B.; Milanova, M.; Petrov, N.; Mitov, I.
2014-04-01
The aim of the presented paper was to study the preparation of catalytic materials for water purification. Iron oxide (Fe3O4) samples supported on activated carbon were prepared by the wet impregnation method and low-temperature heating in an inert atmosphere. The as-prepared and activated samples, as well as samples after the catalytic test, were characterized by Mössbauer spectroscopy and X-ray diffraction. The X-ray diffraction patterns of the prepared samples show broad, low-intensity peaks of the magnetite phase and the characteristic peaks of the activated carbon. The average crystallite size of the magnetite particles was calculated to be below 20 nm. The Mössbauer spectra of the prepared materials show a superposition of doublet lines or of doublet and sextet components. The hyperfine parameters calculated from the spectra reveal the presence of a magnetite phase with nanosized particles. Relaxation phenomena were registered in both cases, i.e., superparamagnetism or collective magnetic excitation behavior, respectively. Low-temperature Mössbauer spectra confirm this observation. The application of the materials as photo-Fenton catalysts for the degradation of organic pollutants was studied. A high dye adsorption degree, an extremely high reaction rate, and fast dye degradation were obtained. The photocatalytic behaviour of the more active sample was further enhanced using mechanochemical activation (MCA). The nanometric size and high dispersion of the photocatalyst particles influence both the adsorption and the degradation mechanism of the reaction. The results showed that all studied photocatalysts effectively decompose organic pollutants under UV light irradiation. Partial oxidation of the samples after the catalytic tests was registered. The combination of magnetic particles with high photocatalytic activity meets both the requirements of photocatalytic degradation of water contaminants and that of recovery for cyclic utilization of the material.
Michen, Benjamin; Geers, Christoph; Vanhecke, Dimitri; Endes, Carola; Rothen-Rutishauser, Barbara; Balog, Sandor; Petri-Fink, Alke
2015-01-01
Standard transmission electron microscopy nanoparticle sample preparation generally requires the complete removal of the suspending liquid. Drying often introduces artifacts, which can obscure the state of the dispersion prior to drying and preclude automated image analysis typically used to obtain number-weighted particle size distribution. Here we present a straightforward protocol for prevention of the onset of drying artifacts, thereby allowing the preservation of in-situ colloidal features of nanoparticles during TEM sample preparation. This is achieved by adding a suitable macromolecular agent to the suspension. Both research- and economically-relevant particles with high polydispersity and/or shape anisotropy are easily characterized following our approach (http://bsa.bionanomaterials.ch), which allows for rapid and quantitative classification in terms of dimensionality and size: features that are major targets of European Union recommendations and legislation. PMID:25965905
X-ray studies of aluminum alloy of the Al-Mg-Si system subjected to SPD processing
NASA Astrophysics Data System (ADS)
Sitdikov, V. D.; Murashkin, M. Yu; Khasanov, M. R.; Kasatkin, I. A.; Chizhov, P. S.; Bobruk, E. V.
2014-08-01
Recently it has been established that, during high-pressure torsion (HPT), dynamic aging takes place in Al-Mg-Si aluminum alloys, resulting in the formation of nanosized particles of strengthening phases in the aluminum matrix, which greatly improves the electrical conductivity and strength properties. In the present paper, structural characterization of ultrafine-grained (UFG) samples of aluminum 6201 alloy produced by severe plastic deformation (SPD) was performed using X-ray diffraction analysis. As a result, structural features (lattice parameter, size of coherent scattering domains) after dynamic aging of the UFG samples were determined. The size and distribution of second-phase particles in the Al matrix were assessed with regard to the HPT regimes. The impact of the size and distribution of the formed secondary phases on the strength, ductility and electrical conductivity is discussed.
NASA Astrophysics Data System (ADS)
Godino, Neus; Jorde, Felix; Lawlor, Daryl; Jaeger, Magnus; Duschl, Claus
2015-08-01
Microalgae are a promising source of bioactive ingredients for the food, pharmaceutical and cosmetic industries. Every microalgae research group or production facility faces one major problem: potential contamination of the algal cells with bacteria. Prior to the storage of the microalgae in strain collections or to cultivation in bioreactors, it is necessary to carry out laborious purification procedures to separate the microalgae from the undesired bacterial cells. In this work, we present a disposable microfluidic cartridge for the high-throughput purification of microalgae samples based on inertial microfluidics. Some of the most relevant microalgae strains are larger than the relatively small, few-micron bacterial cells, making the two distinguishable by size. The inertial microfluidic cartridge was fabricated with inexpensive materials, such as pressure-sensitive adhesive (PSA) and thin plastic layers, which were patterned using a simple cutting plotter. In spite of fabrication restrictions and the intrinsic difficulties of biological samples, the separation of microalgae from bacteria reached values in excess of 99%, previously only achieved using conventional high-end, high-cost lithography methods. Moreover, due to the simple and high-throughput character of the separation, it is possible to concatenate serial purification stages to exponentially decrease the absolute amount of bacteria in the final purified sample.
NASA Astrophysics Data System (ADS)
Gafarov, Ozarfar; Gapud, Albert A.; Moraes, Sunhee; Thompson, James R.; Christen, David K.; Reyes, Arneil P.
2011-03-01
Results of recent measurements on two very clean, single-crystal samples of the A15 superconductor V3Si are presented. Magnetization and transport data confirm the "clean" quality of both samples, as manifested by: (i) high residual resistivity ratio, (ii) low critical current densities, and (iii) a "peak" effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted in the slightly "dirtier" sample, which also has higher critical current density Jc(H). High-current Lorentz forces are applied on mixed-state vortices in order to induce the highly ordered free flux flow (FFF) phase, using the same methods as in previous work. A traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of flux flow resistivity ρf(H), presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes core size into account, and predicts a deviation from BS. In this study, ρf(H) is confirmed to be consistent with predictions of KZ, as will be discussed. Funded by Research Corporation and the National Science Foundation.
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is that current methods for automated calling of genotypes are based on clustering approaches, which require a large number of samples to be analyzed simultaneously or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called 'ALCHEMY' based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per-sample basis, allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes, with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Yang; Gorey, Timothy J.; Anderson, Scott L.
2016-12-12
X-ray absorption near-edge structure (XANES) is commonly used to probe the oxidation state of metal-containing nanomaterials; however, as the particle size in the material drops below a few nanometers, it becomes important to consider inherent size effects on the electronic structure of the materials. In this paper, we analyze a series of size-selected Ptn/SiO2 samples, using X-ray photoelectron spectroscopy (XPS), low energy ion scattering, grazing-incidence small angle X-ray scattering, and XANES. The oxidation state and morphology are characterized both as-deposited in UHV, and after air/O2 exposure and annealing in H2. Here, the clusters are found to be stable during deposition and upon air exposure, but sinter if heated above ~150 °C. XANES shows shifts in the Pt L3 edge, relative to bulk Pt, that increase with decreasing cluster size, and the cluster samples show high white line intensity. Reference to bulk standards would suggest that the clusters are oxidized; however, XPS shows that they are not. Instead, the XANES effects are attributable to the development of a band gap and the localization of empty-state wavefunctions in small clusters.
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79) as well as between mean grain size and susceptibility (R=-0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that the multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
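A minimal version of the multiple-linear-regression step is shown below with synthetic stand-in data; the coefficients, units, and noise level are invented, not the NW Iberian values.

    import numpy as np

    # Predict mean grain size from conductivity and susceptibility measured at
    # co-located reference samples (synthetic stand-in data, not the paper's).
    rng = np.random.default_rng(0)
    conductivity = rng.uniform(0.1, 1.5, size=33)     # S/m (hypothetical)
    susceptibility = rng.uniform(10, 500, size=33)    # 1e-6 SI (hypothetical)
    grain_size = (8.0 - 2.0 * conductivity - 0.004 * susceptibility
                  + rng.normal(0, 0.3, size=33))      # phi units (hypothetical)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones_like(conductivity), conductivity, susceptibility])
    coef, *_ = np.linalg.lstsq(X, grain_size, rcond=None)
    pred = X @ coef
    r = np.corrcoef(pred, grain_size)[0, 1]
    print(f"intercept and slopes: {coef}, multiple R = {r:.2f}")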
Isotopic signatures: An important tool in today's world
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rokop, D.J.; Efurd, D.W.; Benjamin, T.M.
1995-12-01
High-sensitivity/high-accuracy actinide measurement techniques developed to support weapons diagnostic capabilities at the Los Alamos National Laboratory are now being used for environmental monitoring. The measurement techniques used are Thermal Ionization Mass Spectrometry (TIMS), Alpha Spectrometry (AS), and High Resolution Gamma Spectrometry (HRGS). These techniques are used to address a wide variety of actinide inventory issues: environmental surveillance, site characterizations, food chain member determination, sedimentary records of activities, and treaty compliance concerns. As little as 10 femtograms of plutonium can be detected in samples, and isotopic signatures can be determined on samples containing sub-100-femtogram amounts. Uranium, present in all environmental samples, can generally yield isotopic signatures of anthropogenic origin when present at the 40 picogram/gram level. Solid samples (soils, sediments, fauna, and tissue) can range from a few particles to several kilograms in size. Water samples can range from a few milliliters to as much as 200 liters.
NASA Astrophysics Data System (ADS)
Liu, Huihua; Chaudhary, Deeptangshu
2014-08-01
The crystalline domain changes and lamellar structure of sorbitol-plasticized starch nanocomposites were investigated using synchrotron small-angle X-ray scattering (SAXS). Strong amylose-sorbitol interactions were found, resulting in reduced inter-helix spacing of the starch polymer. The achievable d-spacing of the nanoclay was confirmed to be correlated with the moisture content (mc) of the nanocomposites. SAXS diffraction patterns changed from circular (high mc samples) to elliptical (low mc samples), indicating the formation of a long periodic structure and increased heterogeneity of the electron density within the samples. Two different domains, sized at around 90 Å and 350 Å, were found in the low mc samples. However, only the ~90 Å domain was observed in high mc samples. Formation of the larger (~350 Å) domain is attributed to retrogradation behaviour in the absence of water molecules. Meanwhile, the nucleation effect of the nanoclay is another factor leading to the emergence of the larger crystalline domain.
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Schilling, Kristian; Krause, Frank
2015-01-01
Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications altering their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficiency but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure the consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations (>50 g/L), mostly along with high amounts of sugar-based excipients. As a consequence, routine aggregation analysis methods, such as size-exclusion chromatography, cannot monitor the size distribution under the original conditions, but only after dilution, and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) allows samples to be analyzed directly in the product formulation, with limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies, up to 147 g/L, employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data-fitting approach, we were able to provide a reliable estimate of the minimum content of soluble aggregates in the original formulations of two antibodies. Limitations of the procedure are discussed.
Intrafamilial clustering of anti-ATLA-positive persons.
Kajiyama, W; Kashiwagi, S; Hayashi, J; Nomura, H; Ikematsu, H; Okochi, K
1986-11-01
A total of 1,333 persons in 627 families were surveyed for presence of antibody to adult T-cell leukemia-associated antigen (anti-ATLA). Each person was classified according to the anti-ATLA status (positive for sample 1, negative for sample 2) of the head of household of his or her family. In sample 1, the sex- and age-standardized prevalence of anti-ATLA was 38.5%. This was five times as high as the standardized prevalence in sample 2 (7.8%). There were significant differences in prevalence of anti-ATLA between males in samples 1 and 2 and between females in samples 1 and 2. In every age group, prevalence in sample 1 was greater than that in sample 2 except for males aged 60-69 years. In each of four subareas, families in sample 1 had higher standardized prevalence (29.6-42.5%) than families in sample 2 (6.0-9.7%). Although crude prevalence decreased with family size in sample 1 (62.1-25.4%) as well as in sample 2, indirectly standardized prevalence was almost equal within each sample, regardless of number of family members. The degree of aggregation was independent of locality and family size. These data suggest that anti-ATLA-positive persons aggregate in family units.
Walking variations in healthy women wearing high-heeled shoes: Shoe size and heel height effects.
Di Sipio, Enrica; Piccinini, Giulia; Pecchioli, Cristiano; Germanotta, Marco; Iacovelli, Chiara; Simbolotti, Chiara; Cruciani, Arianna; Padua, Luca
2018-05-03
The use of high heels is widespread in modern society in professional and social contexts. The literature shows that wearing high heels can produce injurious effects on several structures, from the toes to the pelvis. No studies have considered shoe length as a factor impacting walking with high heels. The aim of this study is to evaluate walking parameters in young healthy women wearing high heels, considering not only the heel height but also the foot/shoe size. We evaluated spatio-temporal, kinematic and kinetic data, collected using an 8-camera motion capture system, in a sample of 21 healthy women in three different walking conditions: 1) barefoot, 2) wearing 12 cm high heel shoes independently of shoe size, and 3) wearing shoes with heel height based on shoe size, keeping the ankles' plantar flexion angle constant. The main outcome measures were: spatio-temporal parameters, gait harmony measurement, range of motion, maximal flexion and extension values, and power and moment of the lower limb joints. Comparing the three walking conditions, the mixed ANOVA test showed significant differences between both high-heeled conditions (variable and constant height) and barefoot in spatio-temporal, kinematic and kinetic parameters. Regardless of the shoe size, both heeled conditions presented a similar gait pattern and were responsible for negative effects on walking parameters. Considering our results and the relevance of heel height, further studies are needed to identify a threshold over which wearing high heels could cause harmful effects, independently of the foot/shoe size. Copyright © 2018 Elsevier B.V. All rights reserved.
Global Particle Size Distributions: Measurements during the Atmospheric Tomography (ATom) Project
NASA Astrophysics Data System (ADS)
Brock, C. A.; Williamson, C.; Kupc, A.; Froyd, K. D.; Richardson, M.; Weinzierl, B.; Dollner, M.; Schuh, H.; Erdesz, F.
2016-12-01
The Atmospheric Tomography (ATom) project is a three-year NASA-sponsored program to map the spatial and temporal distribution of greenhouse gases, reactive species, and aerosol particles from the Arctic to the Antarctic. In situ measurements are being made on the NASA DC-8 research aircraft, which will make four global circumnavigations of the Earth over the mid-Pacific and mid-Atlantic Oceans while continuously profiling between 0.2 and 13 km altitude. In situ microphysical measurements will provide a unique and unprecedented dataset of aerosol particle size distributions between 0.004 and 50 µm diameter. This unbiased, representative dataset allows investigation of new particle formation in the remote troposphere, placing strong observational constraints on the chemical and physical mechanisms that govern particle formation and growth to cloud-active sizes. Particles from 0.004 to 0.055 µm are measured with 10 condensation particle counters. Particles with diameters from 0.06 to 1.0 µm are measured with one-second resolution using two ultra-high sensitivity aerosol size spectrometers (UHSASes). A laser aerosol spectrometer (LAS) measures particle size distributions between 0.12 and 10 µm in diameter. Finally, a cloud, aerosol and precipitation spectrometer (CAPS), an underwing optical probe, sizes ambient particles with diameters from 0.5 to 50 µm and images and sizes precipitation-sized particles. Additional particle instruments on the payload include a high-resolution time-of-flight aerosol mass spectrometer and a single-particle laser-ablation aerosol mass spectrometer. The instruments are calibrated in the laboratory and on the aircraft. Calibrations are checked in flight by introducing four sizes of polystyrene latex (PSL) microspheres into the sampling inlet. The CAPS probe is calibrated using PSL and glass microspheres that are aspirated into the sample volume. Comparisons between the instruments and checks with the calibration aerosol indicate flight performance within the uncertainties expected from laboratory calibrations. Analysis of data from the first ATom circuit in August 2016 shows high concentrations of newly formed particles in the tropical middle and upper troposphere and in the Arctic lower troposphere.
Stress dependence of microstructures in experimentally deformed calcite
NASA Astrophysics Data System (ADS)
Platt, John P.; De Bresser, J. H. P.
2017-12-01
Optical measurements of microstructural features in experimentally deformed Carrara marble help define their dependence on stress. These features include dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Lρ), and the maximum scale length for surface-energy driven grain-boundary migration (Lγ). Taken together with previously published data Dr defines a paleopiezometer over the range 15-291 MPa and temperature over the range 500-1000 °C, with a stress exponent of -1.09 (CI -1.27 to -0.95), showing no detectable dependence on temperature. Sg and Dr measured in the same samples are closely similar in size, suggesting that the new grains did not grow significantly after nucleation. Lρ and Lγ measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps define the boundary between a region of dominant strain-energy-driven grain-boundary migration at high stress, from a region of dominant surface-energy-driven grain-boundary migration at low stress.
Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.
Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A
1987-01-01
A new bioaerosol sampler consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters, respectively: ≥8 microns, 8-2.5 microns, and <2.5 microns; sampling on filters) and a liquid-cooled condenser was designed, fabricated and field-tested in sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap, e.g., in sampling particles of Betula pollen size (ca. 25 microns in diameter). This was prominent during pollen peak periods (e.g., on May 19th, 1985: 9482 Betula pollen grains per m3 of air in the virtual impactor versus 2540 in the Burkard trap). Betula antigens were detected also in filter stages where no intact pollen grains were found; in the condenser unit, the antigen concentrations were instead very low.
Gray, Peter B; Frederick, David A
2012-09-06
We investigated body image in St. Kitts, a Caribbean island where tourism, international media, and relatively high levels of body fat are common. Participants were men and women recruited from St. Kitts (n = 39) and, for comparison, U.S. samples from universities (n = 618) and the Internet (n = 438). Participants were shown computer generated images varying in apparent body fat level and muscularity or breast size and they indicated their body type preferences and attitudes. Overall, there were only modest differences in body type preferences between St. Kitts and the Internet sample, with the St. Kitts participants being somewhat more likely to value heavier women. Notably, however, men and women from St. Kitts were more likely to idealize smaller breasts than participants in the U.S. samples. Attitudes regarding muscularity were generally similar across samples. This study provides one of the few investigations of body preferences in the Caribbean.
Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing
2018-05-09
We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
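As a rough sketch of the final-model choice (penalized logistic regression evaluated by cross-validation), here is a toy version on synthetic expression-like data; the feature counts, penalty strength, and resulting AUC are illustrative, and this is not the authors' classifier.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for log-transformed gene-expression features.
    rng = np.random.default_rng(0)
    n_samples, n_genes = 90, 200
    X = rng.normal(size=(n_samples, n_genes))
    y = rng.integers(0, 2, size=n_samples)   # 1 = UIP, 0 = non-UIP (labels invented)
    X[y == 1, :10] += 0.8                    # plant a weak signal in 10 genes

    # Penalized (L2) logistic regression scored by cross-validated ROC AUC.
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(C=0.1, max_iter=1000))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.2f}")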
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examine the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range: -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
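For reference, the a priori calculation whose reporting is audited above reduces, for a two-arm comparison of means, to a one-line formula. A minimal sketch under a normal approximation (the function and parameter names are mine):

    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Sample size per arm for a two-sample comparison of means:
        n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sd / delta)**2."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    # E.g., detecting a half-SD effect with 80% power at the 5% level:
    print(n_per_group(delta=0.5, sd=1.0))  # ~62.8 -> round up to 63 per arm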
NASA Astrophysics Data System (ADS)
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.
2016-02-01
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R
2016-02-15
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.
Vinson, M.R.; Budy, P.
2011-01-01
We compared sources of variability and cost in paired stomach content and stable isotope samples from three salmonid species collected in September 2001–2005 and describe the relative information provided by each method in terms of measuring diet overlap and food web study design. Based on diet analyses, diet overlap among brown trout, rainbow trout, and mountain whitefish was high, and we observed little variation in diets among years. In contrast, for sample sizes n ≥ 25, the 95% confidence intervals (CI) around mean δ15N and δ13C for the three target species did not overlap, and species, year, and fish size effects were significantly different, implying that these species likely consumed similar prey but in different proportions. Stable isotope processing costs were US$12 per sample, while stomach content analysis costs averaged US$25.49 ± $2.91 (95% CI) and ranged from US$1.50 for an empty stomach to US$291.50 for a sample with 2330 items. Precision in both δ15N and δ13C and mean diet overlap values based on stomach contents increased considerably up to a sample size of n = 10 and plateaued around n = 25, with little further increase in precision.
Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.
2016-01-01
Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation. PMID:26876979
A comparative review of methods for comparing means using partially paired data.
Guo, Beibei; Yuan, Ying
2017-06-01
In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
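Two of the nine reviewed approaches (the paired t-test restricted to complete pairs and the two-sample t-test ignoring pairing) can be illustrated on synthetic partially paired data; the sample sizes, correlation, and 0.5-SD effect below are invented, and this is not the authors' code.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Partially paired data: n_p complete pairs plus unpaired observations in
    # each arm (synthetic example with a 0.5-SD treatment effect).
    n_p, n_x, n_y = 20, 10, 12
    pairs_x = rng.normal(0.0, 1.0, n_p)
    pairs_y = 0.6 * pairs_x + rng.normal(0.5, 0.8, n_p)  # correlated partner
    only_x = rng.normal(0.0, 1.0, n_x)
    only_y = rng.normal(0.5, 1.0, n_y)

    # Paired t-test uses only the complete pairs; two-sample t-test uses all
    # observations but ignores the pairing (and hence the correlation).
    t_paired = stats.ttest_rel(pairs_x, pairs_y)
    t_two = stats.ttest_ind(np.r_[pairs_x, only_x], np.r_[pairs_y, only_y])
    print(f"paired t: p = {t_paired.pvalue:.3f}; "
          f"two-sample t: p = {t_two.pvalue:.3f}")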
Zou, Yang; Dong, Shuangzhao; Du, Yun; Li, Shengli; Wang, Yajing; Cao, Zhijun
2016-09-01
A study using four Holstein cows with ruminal cannulas was conducted to evaluate the degradability of different moisture content or particle size of maize silage and alfalfa haylage. The maize silage (MS; 20-mm length) and alfalfa haylage (AH; 40-mm length) samples were wet (wet maize silage, MSW; wet alfalfa haylage, AHW), dried (dried maize silage, MSD; dried alfalfa haylage, AHD), or ground to pass through a 2.5-mm screen (dried ground maize silage, MSG; dried ground alfalfa haylage, AHG). Samples were incubated in the rumen for 2, 6, 12, 24, 36, 48, and 72 h. Cows were fed ad libitum and allowed free access to water. High moisture content treatment of MSW expressed a lower rinsing NDF and ADF degradability at 2 h ( P < 0.05) compared with dried samples (MSD and MSG). Different moisture content and particle size had a significant impact ( P < 0.05) on the NDF degradability at 72 h, ADF degradability at 36, 48, and 72 h, and ruminally degradable ADF. All of the highest values were observed in small particle size and low moisture content AHG treatment. Based on this study, sample processing, such as drying and grinding, should be considered when evaluating nutritive values of forages.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
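A simulation of the procedure being analyzed is simple to sketch: pool the interim data ignoring treatment labels, re-estimate the variance with the one-sample estimator, recompute the per-arm sample size, and finish with the standard two-sample t-test. Everything below (pilot size, effect, target power) is a hypothetical illustration, not the paper's exact algorithm.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def blinded_reestimate(n_pilot, delta, alpha=0.05, power=0.90, sd_true=1.0):
        """One simulated trial: re-estimate the per-arm sample size from the
        blinded (pooled) variance of an internal pilot, then run the final
        two-sample t-test. Returns (final n per arm, p-value)."""
        x = rng.normal(0.0, sd_true, n_pilot)      # arm A pilot data
        y = rng.normal(delta, sd_true, n_pilot)    # arm B pilot data
        pooled = np.r_[x, y]
        s2_blinded = pooled.var(ddof=1)            # one-sample variance, blinded;
                                                   # inflated by about delta**2 / 4
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        n_arm = max(n_pilot, int(np.ceil(2 * z**2 * s2_blinded / delta**2)))
        x = np.r_[x, rng.normal(0.0, sd_true, n_arm - n_pilot)]
        y = np.r_[y, rng.normal(delta, sd_true, n_arm - n_pilot)]
        return n_arm, stats.ttest_ind(x, y).pvalue

    results = [blinded_reestimate(n_pilot=20, delta=0.5) for _ in range(2000)]
    print("empirical power:", np.mean([p < 0.05 for _, p in results]))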
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculation between participants attending a lecture only and participants attending a lecture combined with a smartphone application for calculating sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Bhattacharya, K; Tripathi, A K; Dey, G K; Gupta, N M
2005-05-01
Nanosize clusters of titania were dispersed in a mesoporous MCM-41 silica matrix by the incipient wet-impregnation route, using an isopropanol solution of titanium isopropoxide as precursor. The clusters thus formed were of pure anatase phase, and their size depended upon the titania loading. At low (<15 wt%) loadings, the TiO2 particles were X-ray and laser-Raman amorphous, confirming very high dispersion; these particles were mostly ≤2 nm in size. On the other hand, larger clusters (2-15 nm) were present in a sample with a higher loading of approximately 21 wt%. These titania particles, irrespective of their size, exhibited absorbance behavior similar to that of bulk TiO2. Powder X-ray diffraction, N2 adsorption and transmission electron microscopy results showed that while the smaller particles were confined mostly inside the pore system, the larger particles occupied the external surface of the host matrix. At the same time, the structural integrity of the host was maintained, although some deformation of the pore system was noticed in the sample with the highest loading. Core-level X-ray photoelectron spectroscopy results revealed a +4 valence state of Ti in all the samples. A positive binding energy shift and an increase in the width of the Ti 2p peaks were observed, however, with decreasing particle size of the supported titania crystallites, indicative of a microenvironment for surface sites that differs from that of the bulk.
Puls, Robert W.; Eychaner, James H.; Powell, Robert M.
1996-01-01
Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determining aqueous inorganic geochemistry and assessing contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number, as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential, were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used to examine filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations among filter pore sizes were generally small; only two wells showed differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
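A prospective power analysis of the kind recommended above can be scripted in a few lines. The sketch below is a minimal example (Python, assuming statsmodels is available; the effect size is a hypothetical "minimum biologically significant" value of ours, not one from the paper):

```python
from statsmodels.stats.power import TTestIndPower

# a priori sample size for a two-sample t-test: the effect size is a
# minimum biologically significant effect (Cohen's d) chosen before
# data collection, as the abstract recommends
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"samples per group: {n_per_group:.0f}")   # about 64 per group
```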
Electron paramagnetic resonance of several lunar rock samples
NASA Technical Reports Server (NTRS)
Marov, P. N.; Dubrov, Y. N.; Yermakov, A. N.
1974-01-01
Results are presented from an investigation of lunar rock samples returned by the Luna 16 automatic station, using electron paramagnetic resonance (EPR). The EPR technique makes it possible to detect paramagnetic centers with high sensitivity and to investigate their nature. Regolith (finely dispersed material) and five particles from it, 0.3 mm in size and consisting mostly of olivine, were investigated with EPR.
Riccioni, Giulia; Landi, Monica; Ferrara, Giorgia; Milano, Ilaria; Cariani, Alessia; Zane, Lorenzo; Sella, Massimo; Barbujani, Guido; Tinti, Fausto
2010-01-01
Fisheries genetics has greatly changed our understanding of population dynamics and structuring in marine fish. In this study, we show that the Atlantic bluefin tuna (ABFT, Thunnus thynnus), an oceanic predatory species exhibiting highly migratory behavior, large population size, and high potential for dispersal during early life stages, displays significant genetic differences over space and time, at both fine and large scales of variation. We compared microsatellite variation of contemporary (n = 256) and historical (n = 99) biological samples of ABFTs from the central-western Mediterranean Sea, the latter dating back to the early 20th century. Measures of genetic differentiation and a general heterozygote deficit suggest that differences exist among population samples, both now and 80–96 years ago. Thus, ABFTs do not represent a single panmictic population in the Mediterranean Sea. Statistics designed to infer changes in population size, from both current and past genetic variation, suggest that some Mediterranean ABFT populations, although not yet severely reduced in their genetic potential, might have suffered demographic declines. The short-term estimates of effective population size straddle the minimum threshold (effective population size = 500) indicated to maintain genetic diversity and evolutionary potential across several generations in natural populations. PMID:20080643
NASA Astrophysics Data System (ADS)
Ueji, R.; Tsuchida, N.; Harada, K.; Takaki, K.; Fujii, H.
2015-08-01
The grain size effect on deformation twinning in a high-manganese austenitic steel, the so-called TWIP (twinning-induced plasticity) steel, was studied in order to understand how to control deformation twinning. The 31 wt% Mn-3% Al-3% Si steel was cold rolled and annealed at various temperatures to obtain fully recrystallized structures with different mean grain sizes. The annealed sheets were examined by room-temperature tensile tests at a strain rate of 10⁻⁴ s⁻¹. The coarse-grained sample (grain size: 49.6 μm) showed many deformation twins, and twinning was found preferentially in grains whose tensile axis lay nearly parallel to [111]. On the other hand, the sample with finer grains (1.8 μm) had few twinned grains even after tensile deformation. Electron backscatter diffraction (EBSD) measurements clarified the relationship between the anisotropy of deformation twinning and that of inhomogeneous plastic deformation. Based on the EBSD analysis, the mechanism by which grain refinement suppresses deformation twinning was discussed in terms of the competition between the slip system governed by a grain boundary and that activated by the macroscopic load.
Leaching behaviour of bottom ash from RDF high-temperature gasification plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gori, M., E-mail: manuela.gori@dicea.unifi.it; Pifferi, L.; Sirini, P.
2011-07-15
This study investigated the physical properties, the chemical composition and the leaching behaviour of two bottom ash (BA) samples from two different refuse-derived-fuel high-temperature gasification plants, as a function of particle size. The X-ray diffraction patterns showed that the materials contained large amounts of glass. This was also confirmed by the results of availability and ANC leaching tests. The chemical composition indicated that Fe, Mn, Cu and Cr were the most abundant metals, with a slight enrichment in the finest fractions. Suitability of the samples for inert waste landfilling and reuse was evaluated through the leaching test EN 12457-2. In one sample the concentrations of all metals were below the limits set by law, while the limits were exceeded for Cu, Cr and Ni in the other sample, in which the finest fraction made the main contribution to the leaching of Cu and Ni. Preliminary results of the physical and geotechnical characterisation indicated the suitability of vitrified BA for reuse in the field of civil engineering. The possible application of a size separation pre-treatment to improve the chemical characteristics of the materials is also discussed.
A transmission imaging spectrograph and microfabricated channel system for DNA analysis.
Simpson, J W; Ruiz-Martinez, M C; Mulhern, G T; Berka, J; Latimer, D R; Ball, J A; Rothberg, J M; Went, G T
2000-01-01
In this paper we present the development of a DNA analysis system using a microfabricated channel device and a novel transmission imaging spectrograph which can be efficiently incorporated into a high throughput genomics facility for both sizing and sequencing of DNA fragments. The device contains 48 channels etched on a glass substrate. The channels are sealed with a flat glass plate which also provides a series of apertures for sample loading and contact with buffer reservoirs. Samples can be easily loaded in volumes up to 640 nL without band broadening because of an efficient electrokinetic stacking at the electrophoresis channel entrance. The system uses a dual laser excitation source and a highly sensitive charge-coupled device (CCD) detector allowing for simultaneous detection of many fluorescent dyes. The sieving matrices for the separation of single-stranded DNA fragments are polymerized in situ in denaturing buffer systems. Examples of separation of single-stranded DNA fragments up to 500 bases in length are shown, including accurate sizing of GeneCalling fragments, and sequencing samples prepared with a reduced amount of dye terminators. An increase in sample throughput has been achieved by color multiplexing.
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method uses the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound on the maximal PCC gain from feature augmentation (e.g., when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney-transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved.
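The probability of correct classification (PCC) used as the objective above can be estimated by simulation. The sketch below is a toy nearest-centroid classifier on sparse Gaussian data (our construction; it does not implement the authors' higher-criticism thresholding) showing how PCC grows with training sample size in a large-p, small-n setting:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n_test, mu = 1000, 2000, 0.35      # dimension, test size, signal strength
signal = np.zeros(p)
signal[:20] = mu                      # 20 informative features, rest noise

def pcc(n_train):
    """Empirical probability of correct classification for a nearest-centroid
    rule trained on n_train samples per class."""
    x0 = rng.normal(0.0, 1.0, (n_train, p))        # class 0 training data
    x1 = rng.normal(signal, 1.0, (n_train, p))     # class 1 training data
    w = x1.mean(axis=0) - x0.mean(axis=0)          # discriminant direction
    mid = (x1.mean(axis=0) + x0.mean(axis=0)) / 2
    t0 = rng.normal(0.0, 1.0, (n_test, p))
    t1 = rng.normal(signal, 1.0, (n_test, p))
    correct = ((t0 - mid) @ w < 0).sum() + ((t1 - mid) @ w > 0).sum()
    return correct / (2 * n_test)

for n in (20, 50, 100, 200):
    print(n, round(pcc(n), 3))
```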
For Video Games, Bad News Is Good News: News Reporting of Violent Video Game Studies.
Copenhaver, Allen; Mitrofan, Oana; Ferguson, Christopher J
2017-12-01
News coverage of video game violence studies has been critiqued for focusing mainly on studies supporting negative effects and failing to report studies that did not find evidence for such effects. These concerns were tested in a sample of 68 published studies using child and adolescent samples. Contrary to our hypotheses, study effect size was not a predictor of either newspaper coverage or publication in journals with a high impact factor. However, a relationship between poorer study quality and newspaper coverage approached significance. High-impact journals were not found to publish higher-quality studies. Poorer-quality studies, which tended to highlight negative findings, also received more citations in scholarly sources. Our findings suggest that negative effects of violent video game exposure in children and adolescents, rather than large effect size or high methodological quality, increase the likelihood of a study being cited in other academic publications and subsequently receiving news media coverage.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
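Although the abstract is truncated here, a sample size for the Welch test under variance heterogeneity can always be checked by simulation. A minimal sketch (our own construction, not the authors' formula; means and standard deviations are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def welch_power(n1, n2, mu1, mu2, sd1, sd2, alpha=0.05, reps=2000):
    """Empirical power of Welch's t-test under unequal variances."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu1, sd1, n1)
        y = rng.normal(mu2, sd2, n2)
        if stats.ttest_ind(x, y, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / reps

# scan candidate per-group sizes for a target power of 0.80
for n in (80, 120, 160, 200):
    print(n, welch_power(n, n, 0.0, 0.5, 1.0, 2.0))
```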
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
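A Monte Carlo approach of this kind is straightforward to sketch. The example below is in Python rather than the authors' R, and the data-generating model and effect size are hypothetical; it estimates power for a regression slope at several candidate sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def slope_power(n, beta=0.3, alpha=0.05, reps=2000):
    """Monte Carlo power for detecting a nonzero slope in simple regression."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)   # assumed generating model
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / reps

for n in (50, 80, 120):
    print(n, slope_power(n))
```

The same loop generalizes to any model the researcher can simulate from, which is the appeal of the Monte Carlo approach over closed-form formulas whose assumptions the data may not meet.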
Martena, Valentina; Shegokar, Ranjita; Di Martino, Piera; Müller, Rainer H
2014-09-01
Nicergoline, a poorly soluble active pharmaceutical ingredient, possesses vasoactive properties that cause peripheral and central vasodilatation. In this study, nanocrystals of nicergoline were prepared in an aqueous solution of polysorbate 80 (nanosuspension) using four different laboratory-scale size reduction techniques: high pressure homogenization (HPH), bead milling (BM) and two combination techniques (high pressure homogenization followed by bead milling, HPH + BM, and bead milling followed by high pressure homogenization, BM + HPH). The nanocrystals were characterized with respect to mean particle size, zeta potential and particle dissolution. A short-term physical stability study of nanocrystals stored at three different temperatures (4, 20 and 40 °C) was performed to evaluate changes in particle size, aggregation and zeta potential. The size reduction technique and process parameters such as milling time, number of homogenization cycles and pressure greatly affected the size of the nanocrystals. Among the techniques used, the combination techniques showed superior and more consistent particle size reduction than the other two methods, with HPH + BM and BM + HPH giving nanocrystals with mean particle sizes of 260 and 353 nm, respectively. Particle dissolution was increased for all nanocrystal samples, but particularly by HPH and the combination techniques. Independently of the production method, nicergoline nanocrystals showed a slight increase in particle size over time but remained below 500 nm at 20 °C and under refrigeration.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecification of the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecification of the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test of the cost-effectiveness of an intervention.
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
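The spatial correlation structure discussed above is typically summarized with an empirical semivariogram. A minimal sketch (Matheron's classical estimator applied to simulated soil-moisture readings; all parameter values are illustrative, not the study's):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Matheron estimator: half the mean squared difference between all
    point pairs whose separation distance falls in each lag bin."""
    i, j = np.triu_indices(len(values), k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)   # pairwise distances
    sq = 0.5 * (values[i] - values[j]) ** 2
    lag = np.digitize(d, bin_edges)
    return np.array([sq[lag == b].mean() if np.any(lag == b) else np.nan
                     for b in range(1, len(bin_edges))])

# toy soil-moisture field over a ~1 ha plot (coordinates in metres)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
values = np.sin(coords[:, 0] / 30) + rng.normal(0, 0.2, 200)
gamma = empirical_semivariogram(coords, values, np.linspace(0, 100, 11))
print(gamma)   # semivariance rises with lag, then levels off near the range
```

The lag at which the semivariance levels off (the range) is what determines how far apart samples must be spaced to be statistically independent.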
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size given by the proposed formulas under various conditions is smaller than that given by the conventional formulas. Moreover, given a sample size calculated with the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
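For reference, Yuen's statistic itself is easy to compute: trimmed means are compared using winsorized variances, with Welch-type degrees of freedom. A minimal sketch (our own implementation of the standard definition, not the paper's sample size formulas):

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample trimmed-mean test; returns (t, df, two-sided p)."""
    def trimmed_stats(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))            # values trimmed from each tail
        h = n - 2 * g                          # effective sample size
        t_mean = a[g:n - g].mean()             # trimmed mean
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]  # winsorize the tails
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))  # squared SE term
        return t_mean, d, h

    m1, d1, h1 = trimmed_stats(x)
    m2, d2, h2 = trimmed_stats(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)

# demo on heavy-tailed data with a shift
rng = np.random.default_rng(2)
x = rng.standard_t(3, 40) + 0.8
y = rng.standard_t(3, 60)
print(yuen_test(x, y))
```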
A multi-probe thermophoretic soot sampling system for high-pressure diffusion flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, Alex M.; Gülder, Ömer L.
Optical diagnostics and physical probing of the soot processes in high pressure combustion pose challenges that are not faced in atmospheric flames. One of the preferred methods of studying soot in atmospheric flames is in situ thermophoretic sampling followed by transmission electron microscopy imaging and analysis for soot sizing and morphology. The application of this method of sampling to high pressures has been held back by various operational and mechanical problems. In this work, we describe a rotating disk multi-probe thermophoretic soot sampling system, driven by a microstepping stepper motor, fitted into a high-pressure chamber capable of producing sooting laminar diffusion flames up to 100 atm. Innovative aspects of the sampling system design include an easy and precise control of the sampling time down to 2.6 ms, avoidance of the drawbacks of the pneumatic drivers used in conventional thermophoretic sampling systems, and the capability to collect ten consecutive samples in a single experimental run. Proof of principle experiments were performed using this system in a laminar diffusion flame of methane, and primary soot diameter distributions at various pressures up to 10 atm were determined. High-speed images of the flame during thermophoretic sampling were recorded to assess the influence of probe intrusion on the flow field of the flame.
Effect of Slag Impregnation on Macroscopic Deformation of Bauxite-Based Material
NASA Astrophysics Data System (ADS)
Coulon, Antoine; De Bilbao, Emmanuel; Michel, Rudy; Bouchetou, Marie-Laure; Brassamin, Séverine; Gazeau, Camille; Zanghi, Didier; Poirier, Jacques
This work studies the volume change of bauxite corroded by a molten slag. Cylindrical samples were prepared by mixing ground bauxite with slag. An optical method for measuring deformation at high temperature (1450 °C) with a high-resolution camera was developed. Image processing allowed the change in sample diameter to be determined. We showed that the deformation was induced by the precipitation of new expansive crystallised phases observed by SEM-EDS analyses. Adding pellets of the same slag on top of the samples emphasized the effect of the slag amount on the size change. The change in diameter increased significantly in the impregnated area.
NASA Astrophysics Data System (ADS)
Heimböckel, Ruben; Kraas, Sebastian; Hoffmann, Frank; Fröba, Michael
2018-01-01
A series of porous carbon samples were prepared by combining a semi-carbonization process of acidic polymerized phenol-formaldehyde resins with a subsequent chemical activation with KOH in different ratios to increase the specific surface area, micropore content and pore sizes of the carbons, which is favourable for supercapacitor applications. Samples were characterized by nitrogen physisorption, powder X-ray diffraction, Raman spectroscopy and scanning electron microscopy. The results show that the amount of KOH, combined with the semi-carbonization step, had a remarkable effect on the specific surface area (up to SBET: 3595 m² g⁻¹ and SDFT: 2551 m² g⁻¹), pore volume (0.60-2.62 cm³ g⁻¹) and pore sizes (up to 3.5 nm). The carbons were tested as electrode materials for electrochemical double layer capacitors (EDLC) in a two-electrode setup with tetraethylammonium tetrafluoroborate in acetonitrile as electrolyte. The prepared carbon material with the largest surface area, pore volume and pore sizes exhibits a high specific capacitance of 145.1 F g⁻¹ at a current density of 1 A g⁻¹. With a high specific energy of 31 W h kg⁻¹ at a power density of 33,028 W kg⁻¹ and a short time relaxation constant of 0.29 s, the carbon showed high power capability as an EDLC electrode material.
State of Washington Computer Use Survey.
ERIC Educational Resources Information Center
Beal, Jack L.; And Others
This report presents the results of a spring 1982 survey of a random sample of Washington public schools which separated findings according to school level (elementary, middle, junior high, or high school) and district size (either less than or greater than 2,000 enrollment). A brief review of previous studies and a description of the survey…
An assessment of personality disorders with the Five-Factor Model among Belgian inmates.
Thiry, Benjamin
2012-01-01
Many international studies report a high prevalence of personality disorders among inmates on the basis of (semi-)structured diagnostic interviews. The present study proposes a self-reported evaluation of personality disorders using the NEO PI-R. The sample consists of 244 male and 18 female inmates (N=262) who were psychologically assessed. The analysis of the five psychological domains shows that the French-speaking Belgian inmates are as stable and as extroverted as, but more closed, more agreeable and more conscientious than, the normative sample. The NEO PI-R facets are also analyzed. The mean Cohen's d (.26) is small. Two personality disorders have medium effect sizes: obsessive-compulsive personality disorder (high) and histrionic personality (low). Small effect sizes exist for antisocial personality (low), psychopathy (low), narcissistic personality (low), schizoid personality (high) and borderline personality (low). In our view, the context of the assessment can partially, but not entirely, explain these results. The results do not confirm previous studies and question the high rates of psychiatric prevalence reported in prison. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impact of asymmetrical flow field-flow fractionation on protein aggregates stability.
Bria, Carmen R M; Williams, S Kim Ratanathanawongs
2016-09-23
The impact of asymmetrical flow field-flow fractionation (AF4) on protein aggregate species is investigated with the aid of multiangle light scattering (MALS) and dynamic light scattering (DLS). The experimental parameters probed in this study include aggregate stability in different carrier liquids, shear stress (related to sample injection), sample concentration (during AF4 focusing), and sample dilution (during separation). Two anti-streptavidin (anti-SA) IgG1 samples composed of low and high molar mass (M) aggregates are subjected to different AF4 conditions. Aggregates suspended and separated in phosphate buffer are observed to dissociate almost entirely to monomer. However, aggregates in citric acid buffer are partially stable, with dissociation to 25% and 5% monomer for the low and high M samples, respectively. These results demonstrate that different carrier liquids change the aggregate stability and that low M aggregates can behave differently than their larger counterparts. Increasing the duration of the AF4 focusing step showed no significant changes in the percent monomer, percent aggregates, or the average Ms in either sample. Syringe-induced shear related to sample injection resulted in an increase in hydrodynamic diameter (dh) as measured by batch-mode DLS. Finally, calculations showed that dilution during AF4 separation is significantly lower than in size exclusion chromatography, with dilution occurring mainly at the AF4 channel outlet and not during the separation. This has important ramifications when analyzing aggregates that rapidly dissociate (<~2 s) upon dilution, as the size calculated by AF4 theory may be more accurate than that measured by online DLS. Experimentally, the dh values determined by online DLS generally agreed with AF4 theory, except for the more strongly retained larger aggregates, for which DLS showed smaller sizes. These results highlight the importance of using AF4 retention theory to understand the impacts of dilution on analytes. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. Sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
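The package itself is not named here, so the sketch below falls back on a simpler univariate approximation (Python, assuming statsmodels is available; prevalences and odds ratio are hypothetical) for the unmatched binary-exposure case, without the multivariate confounder adjustment the package provides:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# hypothetical scenario: exposure prevalence 20% in controls; an odds
# ratio of 2 implies roughly 33% exposure prevalence in cases
effect = proportion_effectsize(0.333, 0.20)
n_cases = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"cases (and as many controls) needed: {n_cases:.0f}")
```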
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on the accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
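The dependence of MCP estimates on sample size is easy to reproduce. A minimal sketch (simulated GPS fixes, not the study's data; the convex hull area stands in for the MCP home range):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
fixes = rng.normal(0.0, 5.0, size=(400, 2))    # simulated GPS fixes, in km

# MCP home range area as a function of the number of locations used
for n in (15, 60, 120, 400):
    subset = fixes[rng.choice(400, size=n, replace=False)]
    print(n, round(ConvexHull(subset).volume, 1))   # in 2-D, .volume is area
```

Because the MCP is the hull of the observed points, its area can only grow with more locations, which is exactly the asymptotic increase the study reports.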
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
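The authors' model is distributed in R; the sketch below re-creates the basic idea in Python (all parameter values, e.g., the alteration offset and fraction, are illustrative): the mean δ18O of a pooled sample of specimens is perturbed by a diagenetically altered subset, and the scatter of that mean shrinks as more specimens are pooled.

```python
import numpy as np

rng = np.random.default_rng(3)

def pooled_d18o(n, true=0.0, spread=0.3, altered_frac=0.1, offset=3.0):
    """Mean d18O of n pooled specimens when a random fraction is altered."""
    vals = rng.normal(true, spread, n)            # per-specimen variability
    vals[rng.random(n) < altered_frac] += offset  # diagenetic shift
    return vals.mean()

for n in (5, 10, 30, 100):
    means = np.array([pooled_d18o(n) for _ in range(5000)])
    # bias stays near 0.3 permil, but the spread of the mean shrinks with n
    print(n, round(means.mean(), 3), round(means.std(), 3))
```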
Macrophage Migration Inhibitory Factor for the Early Prediction of Infarct Size
Chan, William; White, David A.; Wang, Xin‐Yu; Bai, Ru‐Feng; Liu, Yang; Yu, Hai‐Yi; Zhang, You‐Yi; Fan, Fenling; Schneider, Hans G.; Duffy, Stephen J.; Taylor, Andrew J.; Du, Xiao‐Jun; Gao, Wei; Gao, Xiao‐Ming; Dart, Anthony M.
2013-01-01
Background Early diagnosis and knowledge of infarct size are critical for the management of acute myocardial infarction (MI). We evaluated whether an early elevated plasma level of macrophage migration inhibitory factor (MIF) is useful for these purposes in patients with ST-elevation MI (STEMI). Methods and Results We first studied MIF levels in plasma and the myocardium in mice and determined infarct size. MI for 15 or 60 minutes resulted in a 2.5-fold increase over control values in plasma MIF levels, while MIF content in the ischemic myocardium was reduced by 50%, and plasma MIF levels correlated with myocardium-at-risk and infarct size at both time-points (P<0.01). In patients with STEMI, we obtained admission plasma samples and measured MIF, conventional troponins (TnI, TnT), high-sensitivity TnI (hsTnI), creatine kinase (CK), CK-MB, and myoglobin. Infarct size was assessed by cardiac magnetic resonance (CMR) imaging. Patients with chronic stable angina and healthy volunteers were studied as controls. Of 374 STEMI patients, 68% had admission MIF levels elevated above the highest value in healthy controls (>41.6 ng/mL), a proportion similar to hsTnI (75%) and TnI (50%), but greater than the other biomarkers studied (20% to 31%, all P<0.05 versus MIF). Only admission MIF levels correlated with CMR-derived infarct size, ventricular volumes and ejection fraction (n=42, r=0.46 to 0.77, all P<0.01) at 3 days and 3 months post-MI. Conclusion Plasma MIF levels are elevated in a high proportion of STEMI patients at the first obtainable sample, and these levels are predictive of final infarct size and the extent of cardiac remodeling. PMID:24096574
Effects of grain size on the properties of bulk nanocrystalline Co-Ni alloys
NASA Astrophysics Data System (ADS)
Qiao, Gui-Ying; Xiao, Fu-Ren
2017-08-01
Bulk nanocrystalline Co78Ni22 alloys with grain sizes ranging from 5 nm to 35 nm were prepared by high-speed jet electrodeposition (HSJED) and annealing. Microhardness and magnetic properties of these alloys were investigated with a microhardness tester and a vibrating sample magnetometer, and the effects of grain size on these characteristics are discussed. Results show that the microhardness of the nanocrystalline Co78Ni22 alloys increases following a d^(-1/2) power law with decreasing grain size d; this behaviour fits the Hall-Petch law over the whole 5-35 nm range. Coercivity Hc increases following a d^6 power law with increasing grain size between 5 nm and 15.9 nm, and decreases again for grain sizes above 16.6 nm according to a 1/d power law.
Risk Factors for Addiction and Their Association with Model-Based Behavioral Control.
Reiter, Andrea M F; Deserno, Lorenz; Wilbertz, Tilmann; Heinze, Hans-Jochen; Schlagenhauf, Florian
2016-01-01
Addiction shows familial aggregation and previous endophenotype research suggests that healthy relatives of addicted individuals share altered behavioral and cognitive characteristics with individuals suffering from addiction. In this study we asked whether impairments in behavioral control proposed for addiction, namely a shift from goal-directed, model-based toward habitual, model-free control, extends toward an unaffected sample (n = 20) of adult children of alcohol-dependent fathers as compared to a sample without any personal or family history of alcohol addiction (n = 17). Using a sequential decision-making task designed to investigate model-free and model-based control combined with a computational modeling analysis, we did not find any evidence for altered behavioral control in individuals with a positive family history of alcohol addiction. Independent of family history of alcohol dependence, we however observed that the interaction of two different risk factors of addiction, namely impulsivity and cognitive capacities, predicts the balance of model-free and model-based behavioral control. Post-hoc tests showed a positive association of model-based behavior with cognitive capacity in the lower, but not in the higher impulsive group of the original sample. In an independent sample of particularly high- vs. low-impulsive individuals, we confirmed the interaction effect of cognitive capacities and high vs. low impulsivity on model-based control. In the confirmation sample, a positive association of omega with cognitive capacity was observed in highly impulsive individuals, but not in low impulsive individuals. Due to the moderate sample size of the study, further investigation of the association of risk factors for addiction with model-based behavior in larger sample sizes is warranted.
Separation of cancer cells from white blood cells by pinched flow fractionation.
Pødenphant, Marie; Ashley, Neil; Koprowska, Kamila; Mir, Kalim U; Zalkovskij, Maksim; Bilenberg, Brian; Bodmer, Walter; Kristensen, Anders; Marie, Rodolphe
2015-12-21
In this paper, the microfluidic size-separation technique pinched flow fractionation (PFF) is used to separate cancer cells from white blood cells (WBCs). The cells are separated at efficiencies above 90% for both cell types. Circulating tumor cells (CTCs) are found in the blood of cancer patients and can form new tumors. CTCs are rare cells in blood, but they are important for the understanding of metastasis. There is therefore high interest in developing a method for the enrichment of CTCs from blood samples that also enables further analysis of the separated cells. The separation is challenged by the size overlap between cancer cells and the 10⁶ times more abundant WBCs. The size overlap prevents high-efficiency separation; however, we demonstrate that cell deformability can be exploited in PFF devices to achieve higher efficiencies than expected from the size distribution of the cells.
"V-junction": a novel structure for high-speed generation of bespoke droplet flows.
Ding, Yun; Casadevall i Solvas, Xavier; deMello, Andrew
2015-01-21
We present the use of microfluidic "V-junctions" as a droplet generation strategy that incorporates enhanced performance characteristics when compared to more traditional "T-junction" formats. This includes the ability to generate target-sized droplets from the very first one, efficient switching between multiple input samples, the production of a wide range of droplet sizes (and size gradients) and the facile generation of droplets with residence time gradients. Additionally, the use of V-junction droplet generators enables the suspension and subsequent resumption of droplet flows at times defined by the user. The high degree of operational flexibility allows a wide range of droplet sizes, payloads, spacings and generation frequencies to be obtained, which in turn provides for an enhanced design space for droplet-based experimentation. We show that the V-junction retains the simplicity of operation associated with T-junction formats, whilst offering functionalities normally associated with droplet-on-demand technologies.
Sharma, Pankaj; Song, Ju-Sub; Han, Moon Hee; Cho, Churl-Hee
2016-01-01
GIS-NaP1 zeolite samples were synthesized using seven different Si/Al ratios (5–11) of hydrothermal reaction mixtures with the chemical composition Al2O3:xSiO2:14Na2O:840H2O, to study the impact of the Si/Al molar ratio on the water vapour adsorption potential, phase purity, morphology and crystal size of the as-synthesized GIS-NaP1 zeolite crystals. X-ray diffraction (XRD) observations reveal that the Si/Al ratio does not affect the phase purity of the GIS-NaP1 zeolite samples, as high-purity GIS-NaP1 zeolite crystals were obtained at all Si/Al ratios. In contrast, the Si/Al ratio has a remarkable effect on the morphology, crystal size and porosity of the GIS-NaP1 zeolite microspheres. Transmission electron microscopy (TEM) evaluations of individual GIS-NaP1 zeolite microspheres demonstrate characteristic changes in the packing/arrangement, shape and size of the primary nanocrystallites. Textural characterisation using water vapour adsorption/desorption and nitrogen adsorption/desorption data of the as-synthesized GIS-NaP1 zeolite indicates the existence of mixed pores, i.e., microporous as well as mesoporous character. A high water storage capacity of 1727.5 cm³ g⁻¹ (138.9 wt.%) was found for the as-synthesized GIS-NaP1 zeolite microsphere samples during the water vapour adsorption studies. Further, the total water adsorption capacity values for the P6 (1299.4 mg g⁻¹) and P7 (1388.8 mg g⁻¹) samples reveal that these two samples can take up more water than their own weight. PMID:26964638
Hydroxymethanesulfonic acid in size-segregated aerosol particles at nine sites in Germany
NASA Astrophysics Data System (ADS)
Scheinhardt, S.; van Pinxteren, D.; Müller, K.; Spindler, G.; Herrmann, H.
2013-12-01
In the course of two field campaigns, size-segregated particle samples were collected at nine sites in Germany, including traffic, urban, rural, marine, and mountain sites. During the chemical characterisation of the samples, some were found to contain an unknown substance that was later identified as hydroxymethanesulfonic acid (HMSA). HMSA is known to be formed during the reaction of S(IV) (HSO3- or SO32-) with formaldehyde in the aqueous phase. Due to its stability, HMSA can act as a reservoir species for S(IV) in the atmosphere and is therefore of interest for the understanding of atmospheric sulphur chemistry. However, no HMSA data are available for atmospheric particles from Central Europe, and even on a worldwide scale data are scarce. The present study therefore provides a representative dataset with detailed information on HMSA concentrations in size-segregated Central European aerosol particles. HMSA mass concentrations in this dataset were highly variable: HMSA was found in 224 out of 738 samples (30%), sometimes in high mass concentrations exceeding those of oxalic acid. On average over all 154 impactor runs, 31.5 ng m⁻³ HMSA was found in PM10, contributing 0.21% to the total mass. The results show that the particle diameter, the sampling location, the sampling season and the air mass origin affect the HMSA mass concentration. The highest concentrations were found in the 0.42-1.2 μm particle fraction, at urban sites, in winter, and with eastern (continental) air masses, respectively. The results suggest that HMSA is formed during aging of pollution plumes. A positive correlation of HMSA with sulphate, oxalate and PM is found (R² > 0.4). The results furthermore suggest that the fraction of HMSA in PM slightly decreases with increasing pH.
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
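The variability comparison reported above can be reproduced with a short simulation. A minimal sketch (bivariate normal data, so r_s should come out as the more variable estimator; sample size, correlation, and replication count are ours):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimator_sds(n, rho=0.6, reps=4000):
    """Standard deviations of r_p and r_s over repeated normal samples."""
    cov = [[1.0, rho], [rho, 1.0]]
    rp, rs = [], []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        rp.append(stats.pearsonr(x, y)[0])
        rs.append(stats.spearmanr(x, y)[0])
    return np.std(rp), np.std(rs)

print(estimator_sds(50))   # for normal data, the second value is larger
```

Replacing the normal generator with a heavy-tailed one (e.g., a multivariate t) reverses the ordering, which is the paper's central practical point.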
4D x-ray phase contrast tomography for repeatable motion of biological samples
NASA Astrophysics Data System (ADS)
Hoshino, Masato; Uesugi, Kentaro; Yagi, Naoto
2016-09-01
X-ray phase contrast tomography based on a grating interferometer was applied to fast and dynamic measurements of biological samples. To achieve this, the scanning procedure in the tomographic scan was improved. A triangle-shaped voltage signal from a waveform generator to a Piezo stage was used for the fast phase stepping in the grating interferometer. In addition, an optical fiber coupled x-ray scientific CMOS camera was used to achieve fast and highly efficient image acquisitions. These optimizations made it possible to perform an x-ray phase contrast tomographic measurement within an 8 min scan with a density resolution of 2.4 mg/cm³. A maximum volume size of 13 × 13 × 6 mm³ was obtained with a single tomographic measurement with a voxel size of 6.5 μm. The scanning procedure using the triangle wave was applied to four-dimensional measurements in which highly sensitive three-dimensional x-ray imaging and a time-resolved dynamic measurement of biological samples were combined. A fresh tendon in the tail of a rat was measured under a uniaxial stretching and releasing condition. To maintain the freshness of the sample during four-dimensional phase contrast tomography, the temperature of the bathing liquid of the sample was kept below 10 °C using a simple cooling system. The time-resolved deformation of the tendon and each fascicle was measured with a temporal resolution of 5.7 Hz. Evaluations of cross-sectional area, axis length, and mass density in the fascicle during the stretching process provided a basis for quantitative analysis of the deformation of tendon fascicles.
Douglass, John K; Wehling, Martin F
2016-12-01
A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.
Etch pit investigation of free electron concentration controlled 4H-SiC
NASA Astrophysics Data System (ADS)
Kim, Hong-Yeol; Shin, Yun Ji; Kim, Jung Gon; Harima, Hiroshi; Kim, Jihyun; Bahng, Wook
2013-04-01
Etch pits were investigated using the molten KOH selective etching method to examine the dependence of etch pit shape and size on free electron concentration. The free electron concentrations of highly doped 4H-silicon carbide (SiC) were controlled by proton irradiation and thermal annealing, which was confirmed by a frequency shift of the LO-phonon-plasmon-coupled (LOPC) mode in micro-Raman spectroscopy. The sample irradiated with protons at a fluence of 5×10¹⁵ cm⁻² and an intrinsic semi-insulating sample showed clearly classifiable etch pits, but with different size ratios of threading screw dislocations (TSDs) to threading edge dislocations (TEDs). The easily classified TEDs and TSDs on proton-irradiated 4H-SiC reverted to the appearance of highly doped 4H-SiC after thermal annealing, owing to the recovered carrier concentrations. The etched surfaces of proton-irradiated 4H-SiC and boron-implanted SiC showed different surface conditions after activation.
Isothermal Treatment Effects on Precipitates and Tensile Properties of an HSLA Steel
NASA Astrophysics Data System (ADS)
Kim, J.-E.; Seol, J.-B.; Choi, W.-M.; Lee, B.-J.; Park, C.-G.
2018-05-01
The relationships between the tensile properties and the precipitates of a high-strength low-alloy steel were investigated as a function of isothermal treatment conditions. While the steels isothermally treated at 300-500 °C for 1 and 24 h showed no significant differences, the steel treated at 500 °C for 336 h, denoted 500-336 h, not only showed a decrease in tensile stress but also exhibited greatly increased elongation. Transmission electron microscopy and atom probe tomography were used to evaluate the precipitate distribution. The results showed that, in the 500-336 h sample, the fraction of precipitates with a radius over 10 nm is the highest, and that of few-nanometer-sized precipitates the lowest, among all samples. This suggests that the coarsening of originally nano-sized precipitates, driven by the diffusion of dissolved carbon during the 500-336 h treatment, mainly governs the tensile behavior.
Process for forming a porous silicon member in a crystalline silicon member
Northrup, M. Allen; Yu, Conrad M.; Raley, Norman F.
1999-01-01
Fabrication and use of porous silicon structures to increase the surface area of heated reaction chambers, electrophoresis devices, thermopneumatic sensor-actuators, chemical preconcentrators, and filtering or flow-control devices. In particular, such high-surface-area or specific-pore-size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation, and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high-surface-area porous silicon structure include sample preconcentrators designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific-pore-size porous silicon are chemical/biological filters and thermally activated flow devices with active or adjacent surfaces such as electrodes or heaters.
Synthesis and characterization of MOF-aminated graphite oxide composites for CO2 capture
NASA Astrophysics Data System (ADS)
Zhao, Yunxia; Ding, Huiling; Zhong, Qin
2013-11-01
Composites of a metal-organic framework (MOF-5) and aminated graphite oxide (AGO) were prepared for CO2 capture to mitigate global warming. MOF-5, MOF-5/GO (a composite of MOF-5 and graphite oxide), and MOF-5/AGO samples were characterized by X-ray powder diffraction (XRD), infrared spectroscopy (IR), scanning electron microscopy (SEM), nitrogen adsorption, and thermogravimetric analysis to determine their chemistry and structure. Three samples with suitable specific surface area and pore diameter were chosen to test CO2 adsorption performance and stability under humid conditions. The results indicate that high surface area and pore volume, pores similar in size to the gas adsorbate, and the extra reactive sites introduced into the composites contribute to the high CO2 capacity. In addition, the composites incorporating GO or AGO show better moisture resistance than the parent MOF.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian
2015-03-01
The paper describes a new procedure for determining the boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography and flame ionization detection. This study revealed that, in spite of substantial differences in the separation mechanism and the detection mode, size-exclusion chromatography yields results similar to those of simulated distillation and the novel empty-column gas chromatography. The developed procedure has substantial applicability, especially for determining exact final boiling point values for high-boiling mixtures, for which standard high-temperature simulated distillation would otherwise have to be used. In that case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and the insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination.
Najlah, Mohammad; Hidayat, Kanar; Omer, Huner K; Mwesigwa, Enosh; Ahmed, Waqar; AlObaidy, Kais G; Phoenix, David A; Elhissi, Abdelbary
2015-03-01
In this study, a niosome nanodispersion was manufactured using high-pressure homogenization following the hydration of proniosomes. Using beclometasone dipropionate (BDP) as a model drug, the characteristics of the homogenized niosomes were compared with vesicles prepared via the conventional approach of probe sonication. Particle size, zeta potential, and drug entrapment efficiency were similar for both size-reduction mechanisms. However, high-pressure homogenization was much more efficient than sonication in terms of homogenization output rate and avoidance of sample contamination, offering greater potential for large-scale manufacturing of niosome nanodispersions. For example, high-pressure homogenization produced small niosomes (209 nm) in a short, single size-reduction step (6 min), compared with the time-consuming sonication process (237 nm in >18 min); the BDP entrapment efficiencies were 29.65% ± 4.04 and 36.4% ± 2.8, respectively. In addition, the output rate of high-pressure homogenization was 10 ml/min, compared with 0.83 ml/min for the sonication protocol. In conclusion, a facile, applicable, and highly efficient approach for preparing niosome nanodispersions has been established using proniosome technology and high-pressure homogenization.
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have increased significantly in the past decade due to the introduction of aberration correction. With the consequent increase in convergence angle, the depth of focus has decreased severely, and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived curves exhibit a pronounced maximum at the optimum defocus and drop to a background value at higher or lower defocus. The full width at half maximum (FWHM) of the curve equals the sample thickness, provided the thickness exceeds a minimum set by the size of the aperture used and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series using the proposed method are in good agreement with values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and the fact that it does not involve any time-consuming simulations.
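A minimal sketch of the thickness measurement just described, assuming a stack of HAADF frames acquired at known defocus values (function and array names are hypothetical):

    import numpy as np

    def thickness_from_defocus_series(images, defoci_nm):
        # images: (N, H, W) stack of HAADF frames; defoci_nm: (N,) defocus values
        sharpness = np.array([frame.std() for frame in images])  # blur metric
        background = sharpness.min()
        half_max = background + 0.5 * (sharpness.max() - background)
        in_focus = np.asarray(defoci_nm)[sharpness >= half_max]
        return in_focus.max() - in_focus.min()  # FWHM ~ local sample thickness

    # Only valid above the minimum thickness set by the aperture size and the
    # chromatic aberration of the microscope, as noted above.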
Deep-Sea Macrobenthos Community Structure Proximal to the 2010 Macondo Well Blowout (2010-2011)
NASA Astrophysics Data System (ADS)
Briggs, K. B.; Brunner, C. A.; Yeager, K. M.
2017-12-01
Macrobenthos, polycyclic aromatic hydrocarbons (PAH), and sedimentary organic carbon (SOC) were sampled by multicorer in the vicinity of the Deepwater Horizon well head in October 2010 and 2011 to assess the effects of the April 2010 spill. Four stations were sampled east of the well head, four to the west, and "control" stations 58 and 65 km to the southwest. The macrobenthos community, as expected for continental slope/bathyal (water depth 1160-1760 m) benthos, was highly diverse. Polychaetes dominated at all stations, with either crustaceans or mollusks the next most abundant taxon. Stations within 5 km of the well head showed slightly lower diversity than the more distal stations six months after the blowout. Compared to the "control" stations, proportions of suspension feeders were generally depressed at stations with high PAH concentrations. Anomalously high values of abundance and diversity (and PAH) were found at one station 20 km west of the well head. The median body size of macrobenthos was negatively correlated with total PAH concentration, with 74% of the variation in median size explained by variation in PAH when the anomalous station was excluded. Macrobenthos abundance did not appear to be influenced by SOC. Abundance and diversity of the macrobenthos were generally higher 18 months after the blowout, with measured PAH concentrations diminished to below background levels.
Porosity characterization for heterogeneous shales using integrated multiscale microscopy
NASA Astrophysics Data System (ADS)
Rassouli, F.; Andrew, M.; Zoback, M. D.
2016-12-01
Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport of shales. Study of the diverse distribution of pore sizes and structures in such low-permeability rocks has been hindered by the lack of tools to visualize the microstructural properties of shale. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (x-ray diffraction, thin section analysis, and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville formation in East Texas and to build a more holistic understanding of porosity structure in shales, from standard core-plug scale down to nanometer scale. Standard 1" diameter core plug samples were first imaged at lower resolution using a Versa 3D x-ray microscope. We then picked several regions of interest (ROIs) containing various micro-features (such as micro-cracks and organic-rich zones) and ran higher-resolution, non-destructive interior tomography scans. Next we cut the samples, drilled 5 mm diameter cores from the selected ROIs, and rescanned them to measure the porosity distribution of the 5 mm cores. We repeated this step for 1 mm diameter samples cut from the 5 mm cores with a laser cutting machine. After comparing the pore structure and distribution measured by micro-CT at these scales, we moved to nano-scale imaging to capture the ultra-fine pores within the shale samples: the 1 mm samples were milled down to 70 microns using the laser beam, scanned in a nano-CT Ultra x-ray microscope, and their porosity calculated by image segmentation. Finally, images collected by focused ion beam scanning electron microscopy (FIB-SEM) allowed us to compare the porosity measurements from all of the imaging techniques. These multiscale characterization results are then compared with traditional analytical techniques such as mercury porosimetry.
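A minimal sketch of the porosity-from-segmentation step used at the nano-CT and FIB-SEM stages, assuming a grayscale volume and a single global threshold (real workflows use more careful segmentation):

    import numpy as np

    def porosity(volume, pore_threshold):
        # voxels darker than the threshold are counted as pore space
        pore_voxels = np.count_nonzero(volume < pore_threshold)
        return pore_voxels / volume.size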
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
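A minimal sketch of the 'basic' two-stage scheme in simulation form; the pass/fail threshold, stopping margin, and sample size below are illustrative placeholders, not Welfare Quality values:

    import numpy as np

    rng = np.random.default_rng(0)

    def classify_farm(true_prevalence, wq_sample_size=60,
                      fail_threshold=0.15, margin=0.05):
        n1 = wq_sample_size // 2                      # first sampling event
        lame1 = rng.binomial(n1, true_prevalence)
        p1 = lame1 / n1
        if abs(p1 - fail_threshold) > margin:         # clear-cut: stop early
            return ("fail" if p1 > fail_threshold else "pass"), n1
        lame2 = rng.binomial(n1, true_prevalence)     # second sampling event
        p = (lame1 + lame2) / (2 * n1)
        return ("fail" if p > fail_threshold else "pass"), 2 * n1

Accuracy and average sample size are then estimated by calling classify_farm many times per farm, mirroring the 100 000 simulations run per scheme.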
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES^, ES = ES^_L, and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations.
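A sketch of the described power calculation using statsmodels' one-sample t-test solver, evaluated at a hypothetical effect size point estimate and CI bounds (the values below are illustrative, not the study's):

    from statsmodels.stats.power import TTestPower

    solver = TTestPower()
    for es in (0.63, 0.25, 1.05):   # hypothetical ES^, ES^_L, ES^_U
        n = solver.solve_power(effect_size=es, alpha=0.05, power=0.80,
                               alternative="two-sided")
        print(f"ES = {es:.2f}: n = {n:.1f}")

Because the required n scales roughly with 1/ES², modest uncertainty in the effect size estimate produces the wide post hoc sample size intervals reported above.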
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand sapwood area (AS_stand) and mean stand sap flux (JS). Using Monte Carlo analyses on the original AS_tree and Fd data sets, we examined the potential errors associated with sample size in E, AS_stand, and JS. Consequently, we defined optimal sample sizes of 10 and 15 for the AS_stand and JS estimates, respectively, in the 20 × 20 m plot; sample sizes above these did not further decrease potential errors. The optimal sample size for JS changed with plot size (e.g., 10 × 10 m and 10 × 20 m), while that for AS_stand did not. Likewise, the optimal sample size for JS did not change under different vapor pressure deficit conditions. For E estimation, these results suggest that tree-to-tree variations in Fd differ among plots, and that the plot size needed to capture tree-to-tree variation in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
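A minimal sketch of the Monte Carlo analysis; fd stands for a hypothetical array of per-tree sap flux values, and the same routine applies to sapwood areas:

    import numpy as np

    rng = np.random.default_rng(0)

    def potential_error(values, sample_size, n_draws=10_000, pct=95):
        # subsample without replacement, record the relative error of the mean
        errors = [abs(rng.choice(values, size=sample_size, replace=False).mean()
                      / values.mean() - 1.0)
                  for _ in range(n_draws)]
        return np.percentile(errors, pct)

    # e.g. errors_by_n = {n: potential_error(fd, n) for n in range(2, 59)};
    # the 'optimal' sample size is where the error curve stops improving.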
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that the stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
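A minimal sketch of this kind of simulation, showing the optimistic bias of resubstitution classification rates at small group sizes (group separation and dimensionality below are illustrative):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    def apparent_rate(n_per_group, separation, n_vars=5):
        # two groups differing only in the first variable; a small separation
        # means high overlap
        means = np.zeros((2, n_vars))
        means[1, 0] = separation
        X = np.vstack([rng.normal(m, 1.0, (n_per_group, n_vars)) for m in means])
        y = np.repeat([0, 1], n_per_group)
        lda = LinearDiscriminantAnalysis().fit(X, y)
        return lda.score(X, y)   # resubstitution rate, biased upward for small n

Averaging apparent_rate over many replicates and comparing it with the true (population) rate reproduces the small-sample, high-overlap bias described above.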
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. Intuitively, an increased number of samples increases the accuracy as well as the cost of an assessment, so understanding cost and sample size is crucial for efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design with sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
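A minimal sketch of the cost-benefit pairing, with the SRS margin of error standing in for the accuracy side and hypothetical unit costs:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)     # SRS, large population

    def total_cost(n, transport=12.0, field=5.0, lab=8.0):
        # per-sample transportation, field collection, and lab analysis costs
        return n * (transport + field + lab)

    for n in (50, 100, 200, 400):
        print(n, round(margin_of_error(n), 3), total_cost(n))

Plotting cost against margin of error for each sampling design exposes the diminishing accuracy returns on each additional dollar spent.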
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Moreira, M. A.
1983-01-01
Using digitally processed MSS/LANDSAT data as an auxiliary variable, a methodology to estimate wheat (Triticum aestivum L.) area by means of sampling techniques was developed. To perform this research, aerial photographs covering 720 sq km in the Cruz Alta test site in the northwest of Rio Grande do Sul State were visually analyzed. LANDSAT digital data were analyzed using non-supervised and supervised classification algorithms; as post-processing, the classification was subjected to spatial filtering. To estimate wheat area, the regression estimation method was applied, and different sample sizes and various sampling unit sizes (10, 20, 30, 40 and 60 sq km) were tested. Based on the four decision criteria established for this research, it was concluded that: (1) as the size of the sampling unit decreased, the percentage of sampled area required to obtain similar estimation performance also decreased; (2) the lowest percentage of area sampled that gave wheat estimates of relatively high precision and accuracy through regression estimation was 90% using 10 sq km as the sampling unit; and (3) wheat area estimation by direct expansion (using only aerial photographs) was less precise and accurate than that obtained by regression estimation.
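The regression estimation method applied here uses the classification as an auxiliary variable; in textbook survey-sampling notation (a sketch, not necessarily the paper's exact variant):

    \hat{\bar{Y}}_{\mathrm{reg}} = \bar{y} + b\,(\bar{X} - \bar{x}),
    \qquad
    b = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^{2}},

where ȳ and x̄ are the sample means of wheat area from the photographs and from the LANDSAT classification, respectively, and X̄ is the classification mean over the whole area. The precision gain over direct expansion grows with the correlation between the two measurements, which is why regression estimation outperformed direct expansion in conclusion (3).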
Characterisation of Fine Ash Fractions from the AD 1314 Kaharoa Eruption
NASA Astrophysics Data System (ADS)
Weaver, S. J.; Rust, A.; Carey, R. J.; Houghton, B. F.
2012-12-01
The AD 1314±12 yr Kaharoa eruption of Tarawera volcano, New Zealand, produced deposits exhibiting both plinian and subplinian characteristics (Nairn et al., 2001; 2004, Leonard et al., 2002, Hogg et al., 2003). Their widespread dispersal yielded volumes, column heights, and mass discharge rates of plinian magnitude and intensity (Sahetapy-Engel, 2002); however, vertical shifts in grain size suggest waxing and waning within single phases and time-breaks on the order of hours between phases. These grain size shifts were quantified using sieve, laser diffraction, and image analysis of the fine ash fractions (<1 mm in diameter) of some of the most explosive phases of the eruption. These analyses served two purposes: 1) to characterise the change in eruption intensity over time, and 2) to compare the three methods of grain size analysis. Additional analyses of the proportions of components and of particle shape were also conducted to aid in the interpretation of the eruption and transport dynamics. 110 samples from a single location about 6 km from source were sieved at half-phi intervals between -4φ and 4φ (16 mm - 63 μm). A single sample was then chosen to test the range of grain sizes to run through the Mastersizer 2000. Three aliquots were tested. The first consisted of each sieve size fraction ranging between 0φ (1000 μm) and <4φ (<63 μm, i.e. the pan); for example, 0, 0.5, 1, …, 4φ, and the pan were run through the Mastersizer, and their results, weighted according to their sieve weight percents, were summed together to produce a total distribution. The second aliquot comprised 3 samples ranging between 0-2φ (1000-250 μm), 2.5-4φ (249-63 μm), and the pan. A single sample covering the total range of grain sizes between 0φ and the pan was used for the final aliquot. Comparison of the results showed that the single sample covering the broadest range of grain sizes yielded an accurate grain size distribution. These data were then compared with the sieve weight percent data, revealing a significant difference in size characterisation between sieving and the Mastersizer for size fractions between 0-3φ (1000-125 μm). This is due predominantly to the differing ways in which sieving and the Mastersizer characterise a single particle, to inhomogeneity in grain density within each grain-size fraction, and to grain-shape irregularities. These effects led the Mastersizer to allocate grains from a given sieve size fraction into coarser size fractions. Therefore, only the Mastersizer data from 3.5φ and finer were combined with the coarser sieve data to yield total grain size distributions. This high-resolution analysis of the grain size data enabled subtle trends in grain size to be identified and related to short-timescale eruptive processes.
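A minimal sketch of the weighted-summing step used to combine per-fraction Mastersizer results into a total distribution (arrays illustrative):

    import numpy as np

    def combine_distributions(fraction_dists, weight_percents):
        # fraction_dists: (k, m) volume-% distributions over m size bins;
        # weight_percents: (k,) sieve weight percents summing to 100
        w = np.asarray(weight_percents) / 100.0
        total = (w[:, None] * np.asarray(fraction_dists)).sum(axis=0)
        return 100.0 * total / total.sum()   # renormalise to 100%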
Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A
2013-01-01
The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
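A minimal sketch of the comparison described, computing Cohen's d for admission-to-discharge change next to the paired t-test p-value (illustrative ratings on the 1-5 Omaha System scale):

    import numpy as np
    from scipy import stats

    def paired_cohens_d(admission, discharge):
        diff = np.asarray(discharge) - np.asarray(admission)
        return diff.mean() / diff.std(ddof=1)

    admission = np.array([2.9, 3.1, 2.7, 3.3, 3.0])   # hypothetical ratings
    discharge = np.array([3.6, 3.8, 3.2, 3.9, 3.5])
    d = paired_cohens_d(admission, discharge)
    t, p = stats.ttest_rel(discharge, admission)   # p shrinks with n; d does not
    print(f"d = {d:.2f}, p = {p:.3f}")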
Microfluidic integration of parallel solid-phase liquid chromatography.
Huft, Jens; Haynes, Charles A; Hansen, Carl L
2013-03-05
We report the development of a fully integrated microfluidic chromatography system based on a recently developed column geometry that allows for robust packing of high-performance separation columns in poly(dimethylsiloxane) microfluidic devices having integrated valves made by multilayer soft lithography (MSL). The combination of parallel high-performance separation columns and on-chip plumbing was used to achieve a fully integrated system for on-chip chromatography, including all steps of automated sample loading, programmable gradient generation, separation, fluorescent detection, and sample recovery. We demonstrate this system in the separation of fluorescently labeled DNA and parallel purification of reverse transcription polymerase chain reaction (RT-PCR) amplified variable regions of mouse immunoglobulin genes using a strong anion exchange (AEX) resin. Parallel sample recovery in an immiscible oil stream offers the advantage of low sample dilution and high recovery rates. The ability to perform nucleic acid size selection and recovery on subnanogram samples of DNA holds promise for on-chip genomics applications including sequencing library preparation, cloning, and sample fractionation for diagnostics.
Schönberg, Anna; Theunert, Christoph; Li, Mingkun; Stoneking, Mark; Nasidze, Ivan
2011-09-01
To investigate the demographic history of human populations from the Caucasus and surrounding regions, we used high-throughput sequencing to generate 147 complete mtDNA genome sequences from random samples of individuals from three groups from the Caucasus (Armenians, Azeri and Georgians), and one group each from Iran and Turkey. Overall diversity is very high, with 144 different sequences that fall into 97 different haplogroups found among the 147 individuals. Bayesian skyline plots (BSPs) of population size change through time show a population expansion around 40-50 kya, followed by a constant population size, and then another expansion around 15-18 kya for the groups from the Caucasus and Iran. The BSP for Turkey differs the most from the others, with an increase from 35 to 50 kya followed by a prolonged period of constant population size, and no indication of a second period of growth. An approximate Bayesian computation approach was used to estimate divergence times between each pair of populations; the oldest divergence times were between Turkey and the other four groups from the South Caucasus and Iran (~400-600 generations), while the divergence time of the three Caucasus groups from each other was comparable to their divergence time from Iran (average of ~360 generations). These results illustrate the value of random sampling of complete mtDNA genome sequences that can be obtained with high-throughput sequencing platforms.
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of terraces on the replicate trees generated in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates with data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
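The divergence claim can be made concrete with a sketch in standard multivariate-regression notation (not the paper's exact formulation). With response Y ∈ R^{n×q}, design X ∈ R^{n×p}, coefficients B, and Gaussian rows with covariance Σ, the log-likelihood is

    \ell(B, \Sigma) = -\frac{n}{2}\log\det\Sigma
      - \frac{1}{2}\operatorname{tr}\!\left[\Sigma^{-1}(Y - XB)^{\top}(Y - XB)\right] + \mathrm{const}.

In the underdetermined regime an exact fit Y = XB is attainable, so the trace term vanishes, and letting Σ → 0 drives -log det Σ, and hence ℓ, to +∞. A normalization of the likelihood, as proposed in the paper, is therefore needed before maximization is meaningful.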
Development of a High-Resolution, Single-Photon X-Ray Detector
NASA Technical Reports Server (NTRS)
Seidel, George M.
1996-01-01
Research on the development of a low-temperature, magnetic bolometer for x-ray detection is reported. The principal accomplishments during the first phase of this research are as follows. (1) We have constructed SQUID magnetometers and detected both 122 keV and 6 keV x-rays in relatively large metallic samples with high quantum efficiency. (2) The magnetic properties of a metal sample with localized paramagnetic spins have been measured and found to agree with theoretical expectations. (3) The size of the magnetic response of the sample to x-rays is in agreement with predictions based on the properties of the sample and the sensitivity of the magnetometer, supporting the prediction that a resolution of 1 eV at 10 keV should be achievable.
The Neutron Tomography Studies of the Rocks from the Kola Superdeep Borehole
NASA Astrophysics Data System (ADS)
Kichanov, S. E.; Kozlenko, D. P.; Ivankina, T. I.; Rutkauskas, A. V.; Lukin, E. V.; Savenko, B. N.
The volume morphology of a gneiss sample, K-8802, recovered from a depth of 8802 m in the Kola Superdeep Borehole, and of its surface homologue, sample PL-36, has been studied by means of neutron radiography and tomography. The volumes and size distributions of the biotite-muscovite grains, as well as the grain orientation distribution, were obtained from the experimental data. It was found that the average volume of the biotite-muscovite grains in the surface homologue sample is noticeably larger than that in the deep-seated gneiss sample K-8802. This drastic difference in grain volumes can be explained by recrystallization processes at depth in the Kola Superdeep Borehole under high temperatures and pressures.
High pressure inertial focusing for separating and concentrating bacteria at high throughput
NASA Astrophysics Data System (ADS)
Cruz, J.; Hooshmand Zadeh, S.; Graells, T.; Andersson, M.; Malmström, J.; Wu, Z. G.; Hjort, K.
2017-08-01
Inertial focusing is a promising microfluidic technology for the concentration and separation of particles by size. However, there is a strong correlation of increased pressure with decreased particle size. Theory and experimental results for larger particles were used to scale down the phenomenon and find the conditions that focus 1 µm particles. High-pressure experiments in robust glass chips were used to demonstrate the alignment. We show that the technique works for 1 µm spherical polystyrene particles and for Escherichia coli, without harming the bacteria at 50 µl min-1. The potential to focus bacteria, the simplicity of use, and the high throughput make this technology interesting for healthcare applications, where concentration and purification of a sample may be required as an initial step.
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions, and vertical velocity (w) in high ice water content regions of tropical mesoscale convective systems (MCSs). These observations are used to evaluate deep convective updraft properties in high-resolution nested Weather Research and Forecasting (WRF) simulations of observed MCSs. Because simulated hydrometeor properties are highly sensitive to the parameterization of microphysics, three commonly used microphysical parameterizations are tested, including two bulk schemes (Thompson and Morrison) and one bin scheme (Fast Spectral Bin Microphysics). A commonly documented bias in cloud-resolving simulations is the exaggeration of simulated radar reflectivities aloft in tropical MCSs. This may result from overly strong convective updrafts that loft excessive condensate mass and from simplified approximations of hydrometeor size distributions, properties, species separation, and microphysical processes. The degree to which the reflectivity bias is a separate function of convective dynamics, condensate mass, and hydrometeor size has yet to be addressed. This research untangles these components by comparing simulated and observed relationships between w, TWC, and hydrometeor size as a function of temperature. All microphysics schemes produce median mass diameters that are generally larger than observed for temperatures between -10 °C and -40 °C and TWC > 1 g m-3. Observations produce a prominent mode in the composite mass size distribution around 300 μm, but under most conditions, all schemes shift the distribution mode to larger sizes. Despite a much greater number of samples, all simulations fail to reproduce observed high-TWC or high-w conditions between -20 °C and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes. Increasing model resolution and employing explicit cloud droplet nucleation decrease the size bias, but not nearly enough to reproduce observations. Because simulated particle sizes are too large across all schemes when controlling for temperature, w, and TWC, this bias is hypothesized to result partly from errors in parameterized microphysical processes, in addition to overly simplified hydrometeor properties such as mass-size relationships and particle size distribution parameters.
NASA Astrophysics Data System (ADS)
Chua, Stephen; Gouramanis, Chris; Etchebes, Marie; Klinger, Yann; Gao, Mingxing; Switzer, Adam; Tapponnier, Paul
2016-04-01
High-resolution, late-Holocene climate patterns in arid central Asia, in particular the behaviour of the Asian Monsoon and the occurrence of precipitation events, are not yet fully understood. In particular, few high-resolution palaeoenvironmental and palaeoclimate studies are available from the Junggar-Altay region of Xinjiang Province, northwestern China. This area is tectonically active, and the last large earthquake (Mw 7.9) occurred along the Fuyun strike-slip fault in 1931, resulting in ~6 m of right-lateral movement. South of the epicentre at Karaxingar, this earthquake resulted in the construction of large scarp-bounded ponds (46°43'N, 89°55'E) now filled with sediment. Sediment samples were collected every centimetre in a two-metre-deep trench where the main pond was deepest. The majority of the AMS ¹⁴C ages of charcoal and plant fibre samples are modern (56±34 to 171±34 yr BP), with the exception of a few much older carbon samples (842±26 to 2017±26 yr BP) at the base of the trench. The post-1931 age of the pond is validated by the ¹³⁷Cs and ²¹⁰Pb age-depth chronology. Each sediment sample was analysed for organic, carbonate, and clastic contents and for particle size. This high-resolution analysis revealed eleven upward-fining sequences, with three prominent grain size peaks at depths of 1.7 m, 0.95 m, and 0.6 m below the ground surface, suggesting three major modern precipitation events. The 11 grain-size peaks since 1931 in the pond coincide with 11 periods of increased precipitation measured in high-elevation tree-ring records ~50 km north of the pond. Thus, low-altitude post-seismic sedimentary depocentres provide excellent high-resolution palaeoclimate archives that can fill a significant data gap where other proxy records are not available.
Jones, Jeffery I.; Gardner, Michael S.; Schieltz, David M.; Parks, Bryan A.; Toth, Christopher A.; Rees, Jon C.; Andrews, Michael L.; Carter, Kayla; Lehtikoski, Antony K.; McWilliams, Lisa G.; Williamson, Yulanda M.; Bierbaum, Kevin P.; Pirkle, James L.; Barr, John R.
2018-01-01
Lipoproteins are complex molecular assemblies that are key participants in the intricate cascade of extracellular lipid metabolism, with important consequences in the formation of atherosclerotic lesions and the development of cardiovascular disease. Multiplexed mass spectrometry (MS) techniques have substantially improved the ability to characterize the composition of lipoproteins. However, these advanced MS techniques are limited by traditional pre-analytical fractionation techniques that compromise the structural integrity of lipoprotein particles during separation from serum or plasma. In this work, we applied a highly effective and gentle hydrodynamic size-based fractionation technique, asymmetric flow field-flow fractionation (AF4), and integrated it into a comprehensive tandem mass spectrometry based workflow that was used for the measurement of apolipoproteins (apos A-I, A-II, A-IV, B, C-I, C-II, C-III and E), free cholesterol (FC), cholesterol esters (CE), triglycerides (TG), and phospholipids (PL) (phosphatidylcholine (PC), sphingomyelin (SM), phosphatidylethanolamine (PE), phosphatidylinositol (PI) and lysophosphatidylcholine (LPC)). Hydrodynamic size in each of 40 size fractions separated by AF4 was measured by dynamic light scattering. Measuring all major lipids and apolipoproteins in each size fraction and in the whole serum, using a total of 0.1 ml, allowed the volumetric calculation of lipoprotein particle numbers and the expression of composition as molar analyte per particle number ratios. Measurements in 110 serum samples showed substantive differences between size fractions of HDL and LDL. Lipoprotein composition within size fractions was expressed in molar ratios of analytes (A-I/A-II, C-II/C-I, C-II/C-III, E/C-III, FC/PL, SM/PL, PE/PL, and PI/PL), showing differences in sample categories with combinations of normal and high levels of Total-C and/or Total-TG. The agreement with previous studies indirectly validates the AF4-LC-MS/MS approach and demonstrates the potential of this workflow for characterization of lipoprotein composition in clinical studies using small volumes of archived frozen samples. PMID:29634782
NASA Astrophysics Data System (ADS)
Lauth, R.; Norcross, B.; Kotwicki, S.; Britt, L.
2016-02-01
Long-term monitoring of the high-Arctic marine biota is needed to understand how the ecosystem is changing in response to climate change, diminishing sea ice, and increasing anthropogenic activity. Since 1959, bottom trawls (BT) have been a primary research tool for investigating fishes, crabs, and other demersal macrofauna in the high-Arctic. However, the sampling gears, methodologies, and overall survey designs used have generally lacked consistency and/or have had limited spatial coverage. This has restricted the ability of scientists and managers to effectively use existing BT survey data for investigating historical trends and zoogeographic changes in high-Arctic marine populations. Two different BTs currently used for surveying the high-Arctic are: 1) a small-mesh 3-m plumb-staff beam trawl (PSBT), and 2) a large-mesh 83-112 Eastern bottom trawl (EBT). A paired comparison study was conducted in 2012 to compare the catch composition and sampling characteristics of the two trawl gears, and a size selectivity ratio statistic was used to investigate how the probability of fish and crab retention differs between the EBT and PSBT. Obvious contrasting characteristics of the PSBT and EBT were mesh size, area swept, tow speed, and vertical opening. The finer mesh and harder bottom-tending characteristics of the PSBT retained juvenile fishes and other smaller macroinvertebrates, and it was also more efficient at catching benthic infauna just below the surface. The EBT had a larger net opening and greater tow duration at a higher speed, covering a potentially wider range of benthic habitats during a single tow, and it was more efficient at capturing larger and more mobile organisms, as well as organisms further off bottom. The ratio statistic indicated large differences in size selectivity between the two gears for both fish and crab. Results from this investigation will provide a framework for scientists and managers to better understand how to interpret and compare data from existing PSBT and EBT surveys in the high-Arctic, and they provide information on factors worth considering in choosing which BT gear to use for a standardized long-term BT sampling program to monitor fishes, crabs, and other demersal macrofauna in the high-Arctic.
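A minimal sketch of a size selectivity ratio in the spirit described (the study's exact estimator is not reproduced here); the illustrative form below is the EBT share of the combined catch in each length class:

    import numpy as np

    def ebt_share(catch_ebt, catch_psbt):
        # catch_*: counts per length class (hypothetical arrays);
        # 0.5 means equal retention at that size
        ebt = np.asarray(catch_ebt, dtype=float)
        psbt = np.asarray(catch_psbt, dtype=float)
        return ebt / (ebt + psbt)

A share rising with length class would indicate that the EBT retains larger, more mobile animals, consistent with the gear contrasts reported above.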
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.
Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field-sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white noise floor flux sensitivity of ≈2 μΦ0/Hz^(1/2). This high spatial resolution is accomplished by deep sub-micron feature sizes, well shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model that assumes a reduction in the seroconversion rate (SCR) at a given change point before sampling. The calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population assuming a known or unknown change point. Overall, the data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision in estimates of the current SCR; in this situation, sample size is better determined by controlling the precision of the SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision while reducing potential estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter may reduce both the sample size and the chance of generating biased SCR estimates.
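A minimal sketch of simulating cross-sectional serology from a reverse catalytic model (parameter values illustrative; the paper's calculator additionally fits logistic approximations to the resulting power curves):

    import numpy as np

    rng = np.random.default_rng(0)

    def p_seropositive(age, scr, srr):
        # equilibrium reverse catalytic model with seroconversion rate scr
        # and seroreversion rate srr
        return scr / (scr + srr) * (1.0 - np.exp(-(scr + srr) * age))

    ages = rng.integers(1, 60, size=500)              # hypothetical survey ages
    sero = rng.random(500) < p_seropositive(ages, scr=0.05, srr=0.01)

    # A change point tau years before sampling would apply a reduced scr to
    # the last tau years of exposure; power is estimated by refitting many
    # simulated surveys under the stable-SCR and reduced-SCR hypotheses.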
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
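A minimal sketch of the subsampling experiment: fit the allometric model log(y) = log(a) + b·log(x) on random subsamples and count how often the slope test misses allometry (Type II error). skull_x and skull_y stand for hypothetical measurement arrays:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def type2_rate(x, y, n, trials=1000, alpha=0.05):
        misses = 0
        for _ in range(trials):
            idx = rng.choice(len(x), size=n, replace=False)
            fit = stats.linregress(np.log(x[idx]), np.log(y[idx]))
            t = (fit.slope - 1.0) / fit.stderr   # H0: b = 1 (isometry)
            p = 2 * stats.t.sf(abs(t), df=n - 2)
            misses += (p >= alpha)               # failed to detect allometry
        return misses / trials

    # e.g. comparing type2_rate(skull_x, skull_y, n=8) with n=40 exposes the
    # small-sample risk of missing real allometric growth.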
Apparatus and method for measuring minority carrier lifetimes in semiconductor materials
Ahrenkiel, Richard K.; Johnston, Steven W.
2001-01-01
An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as the sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that it maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample.
Apparatus for measuring minority carrier lifetimes in semiconductor materials
Ahrenkiel, R.K.
1999-07-27
An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as the sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that the apparatus maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample. 17 figs.
Apparatus for measuring minority carrier lifetimes in semiconductor materials
Ahrenkiel, Richard K.
1999-01-01
An apparatus for determining the minority carrier lifetime of a semiconductor sample includes a positioner for moving the sample relative to a coil. The coil is connected to a bridge circuit such that the impedance of one arm of the bridge circuit is varied as the sample is positioned relative to the coil. The sample is positioned relative to the coil such that any change in the photoconductance of the sample created by illumination of the sample creates a linearly related change in the input impedance of the bridge circuit. In addition, the apparatus is calibrated to work at a fixed frequency so that the apparatus maintains a consistently high sensitivity and high linearity for samples of different sizes, shapes, and material properties. When a light source illuminates the sample, the impedance of the bridge circuit is altered as excess carriers are generated in the sample, thereby producing a measurable signal indicative of the minority carrier lifetimes or recombination rates of the sample.
Key to enhance thermoelectric performance by controlling crystal size of strontium titanate
NASA Astrophysics Data System (ADS)
Wang, Jun; Ye, Xinxin; Yaer, Xinba; Wu, Yin; Zhang, Boyu; Miao, Lei
2015-09-01
A one-step molten salt synthesis process was introduced to fabricate nanometre- to micrometre-sized SrTiO3 powders, and the effects of synthesis temperature, oxide-to-flux ratio and raw materials on the formation of the SrTiO3 powders were examined. Pure SrTiO3 particles of 100 nm and above were obtained at the relatively low temperature of 900 °C. Micro-sized rhombohedral crystals with a maximum size of approximately 12 μm were obtained from SrCO3 or Sr(NO3)2 strontium sources at a 1:1 oxide-to-flux ratio. Nb-doped SrTiO3 particles of controlled crystal size and morphology were prepared by this method to assess the effect on thermoelectric performance. The Seebeck coefficient obtained is significantly higher than reported values, and the high proportion of nanoparticles in the sample also contributes to the increase in Seebeck coefficient, likely due to the energy filtering effect at the large number of grain boundaries arising from the broadly distributed grain-size structure.
Schollenberger, Martin; Radke, Wolfgang
2011-10-28
A gradient ranging from methanol to tetrahydrofuran (THF) was applied to a series of poly(methyl methacrylate) (PMMA) standards, using the recently developed concept of SEC-gradients. In contrast to conventional gradients, the samples eluted before the solvent, i.e. within the elution range typical for separations by SEC; however, the high molar mass PMMAs were retarded compared to experiments on the same column using pure THF as the eluent. The molar mass dependence of retention volume showed complex behaviour, with nearly molar-mass-independent elution for high molar masses. This molar mass dependence was explained in terms of solubility and size exclusion effects. The solubility-based SEC-gradient was proven useful to separate PMMA and poly(n-butyl acrylate) (PnBuA) from a poly(t-butyl acrylate) (PtBuA) sample. These samples could be separated neither by SEC in THF, due to their very similar hydrodynamic volumes, nor by an SEC-gradient at adsorbing conditions, due to insufficient selectivity. The example shows that SEC-gradients can be applied not only in adsorption/desorption mode, but also in precipitation/dissolution mode, without risking blocked capillaries or breakthrough peaks. Thus, the new approach is a valuable alternative to conventional gradient chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
Reduction of the capillary water absorption of foamed concrete by using the porous aggregate
NASA Astrophysics Data System (ADS)
Namsone, E.; Sahmenko, G.; Namsone, E.; Korjakins, A.
2017-10-01
The article reports on research into reducing the capillary water absorption of foamed concrete (FC) by using porous aggregates such as granules of expanded glass (EG) and cenospheres (CS). The EG granular aggregate is produced from recycled glass and blowing agents, melted down at high temperature. The resulting structure of the EG granules is unique in that air is kept sealed inside the pellet. Using porous aggregate in the preparation of the FC samples makes it possible to improve some physical and mechanical properties of the FC, qualifying it as a high-performance product. In this research, the FC samples were produced by adding the EG granules and the CS. The capillary water absorption of the hardened samples was verified, and the pore size distribution was determined by microscopy. This is a very important characteristic, especially in cold-climate territories, where the temperature often falls below zero. It is necessary to prevent the formation of micro-sized pores in the final structure of the material, as this reduces its water absorption capacity. In addition, at sub-zero temperatures, water inside these micro-sized pores can enlarge them by exerting stress on their walls during freezing. Research into capillary water absorption kinetics can thus be useful for predicting FC durability.
Berk, Lotte; van Boxtel, Martin; van Os, Jim
2017-11-01
An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs), such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsycINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with a low risk of bias, a large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to the limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigating MBIs as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life, starting already in younger target populations.
NASA Astrophysics Data System (ADS)
Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.
2006-05-01
High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water-column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water-column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch size of each variable depended on whether the data were detrended and on the detrending technique applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.
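A minimal sketch of the patch-size estimation described above: detrend an along-axis transect with a quadratic fit, then take the lag at which the residuals' autocorrelation first crosses zero as the patch size. The synthetic transect, trend, and patch scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
lat = np.linspace(37.0, 39.5, 300)                  # along-bay positions (deg N)
trend = 15 - 4 * (lat - 37.0)                       # e.g. salinity trend with latitude
patches = np.convolve(rng.normal(0, 1, lat.size), np.ones(20) / 20, mode="same")
signal = trend + 2 * patches                        # trend + patchy variability

resid = signal - np.polyval(np.polyfit(lat, signal, 2), lat)  # quadratic detrend
resid -= resid.mean()
acf = np.correlate(resid, resid, mode="full")[resid.size - 1:]
acf /= acf[0]                                       # normalized autocorrelation
first_zero = int(np.argmax(acf < 0))                # first zero-crossing lag
print(f"patch size ≈ {first_zero * (lat[1] - lat[0]):.2f} degrees of latitude")
```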
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probabilities of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that one seeks to detect and, rather than being a matter of statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles have long been known, sample size determination has historically been difficult because of relatively complex mathematical considerations and numerous different formulas. Of late, however, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determining sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample sizes and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
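As a worked example of the relationships described in this module, the sketch below computes the sample size per group for comparing two means with the standard normal-approximation formula n = 2σ²(z₁₋α/₂ + z₁₋β)²/δ²; the blood-pressure numbers are illustrative assumptions.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per group for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

# Detect a 5 mmHg difference in blood pressure, SD 12 mmHg, 80% power:
print(n_per_group(delta=5, sigma=12))     # about 91 per group
# Halving the detectable difference roughly quadruples the sample size:
print(n_per_group(delta=2.5, sigma=12))   # about 362 per group
```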
Apel, Charles T.; Layman, Lawrence R.; Gallimore, David L.
1988-01-01
A nebulizer for generating an aerosol having small droplet sizes and high efficiency at low sample introduction rates. The nebulizer has a cylindrical, gas-permeable active surface. A sleeve is disposed around the cylinder and gas is provided from the sleeve to the interior of the cylinder formed by the active surface. In operation, a liquid is provided to the inside of the gas-permeable surface. The gas contacts the wetted surface and forms small bubbles which burst to form an aerosol. Those bubbles which are large are carried by momentum to another part of the cylinder, where they are renebulized. This process continues until the entire sample is nebulized into aerosol-sized droplets.
Micro Electron MicroProbe and Sample Analyzer
NASA Technical Reports Server (NTRS)
Manohara, Harish; Bearman, Gregory; Douglas, Susanne; Bronikowski, Michael; Urgiles, Eduardo; Kowalczyk, Robert; Bryson, Charles
2009-01-01
A proposed, low-power, backpack-sized instrument, denoted the micro electron microprobe and sample analyzer (MEMSA), would serve as a means of rapidly performing high-resolution microscopy and energy-dispersive x-ray spectroscopy (EDX) of soil, dust, and rock particles in the field. The MEMSA would be similar to an environmental scanning electron microscope (ESEM) but would be much smaller and designed specifically for field use in studying effects of geological alteration at the micrometer scale. Like an ESEM, the MEMSA could be used to examine uncoated, electrically nonconductive specimens. In addition to the difference in size, other significant differences between the MEMSA and an ESEM lie in the mode of scanning and the nature of the electron source.
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, in microdevice engineering small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit a biosignal representation in the topological domain that reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there are no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating the assay process using ACEK.
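The paper's topological features and GDT are not reproduced here; as a loose illustration of classifying with only two samples per class, the sketch below scores a test point by a per-class Gaussian likelihood fitted from two training samples. The synthetic features are assumptions, and this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two training samples per class, three features each (synthetic assumption)
train = {c: rng.normal(loc=c, scale=0.5, size=(2, 3)) for c in range(5)}

def predict(x):
    """Pick the class maximizing a per-feature Gaussian log-likelihood."""
    scores = {}
    for c, samples in train.items():
        mu = samples.mean(axis=0)
        sd = samples.std(axis=0, ddof=1) + 1e-6       # guard against zero spread
        # twice the Gaussian log-likelihood, up to an additive constant
        scores[c] = -np.sum(((x - mu) / sd) ** 2 + 2 * np.log(sd))
    return max(scores, key=scores.get)

test = rng.normal(loc=3, scale=0.5, size=3)           # drawn near class 3
print("predicted class:", predict(test))
```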
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.
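One common formulation reviewed in this literature bases the calculation on incremental net monetary benefit (NMB): with willingness-to-pay λ, the trial is powered on λΔE − ΔC, whose variance combines cost and effect variances and their correlation. The sketch below implements that formula; all numeric inputs are illustrative assumptions.

```python
import math
from scipy.stats import norm

def n_per_arm(wtp, d_effect, d_cost, sd_effect, sd_cost, rho,
              alpha=0.05, power=0.80):
    """Sample size per arm based on incremental net monetary benefit."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_nmb = (wtp ** 2 * sd_effect ** 2 + sd_cost ** 2
               - 2 * wtp * rho * sd_effect * sd_cost)
    delta_nmb = wtp * d_effect - d_cost      # mean incremental net benefit
    return math.ceil(2 * z ** 2 * var_nmb / delta_nmb ** 2)

# e.g. WTP $50,000/QALY, +0.05 QALYs, +$1,000 cost, assumed SDs and correlation
print(n_per_arm(wtp=50_000, d_effect=0.05, d_cost=1_000,
                sd_effect=0.2, sd_cost=5_000, rho=0.1))
```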
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests on data from a one-factor, two-level design, including sample size estimation formulas and their realization, both directly from the formulas and via the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents worked examples, which should guide researchers in implementing the repetition principle during the research design phase.
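For readers without SAS, a rough Python analogue of the kind of calculation the article performs with PROC POWER is sketched below (assuming statsmodels is available), covering a quantitative outcome (two-sample t-test) and a qualitative outcome (two proportions); the input values are illustrative assumptions.

```python
from statsmodels.stats.power import TTestIndPower, NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Quantitative outcome: two-sample t-test, standardized difference of 0.5 SD
n_quant = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"t-test: about {n_quant:.0f} per group")

# Qualitative outcome: two proportions, 30% vs 45% event rates (Cohen's h)
h = proportion_effectsize(0.45, 0.30)
n_qual = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80)
print(f"two proportions: about {n_qual:.0f} per group")
```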
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer-analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resulting predictors of the field size estimates are discussed.
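A minimal sketch of the normalization step described above: log-transform a skewed field-size sample, take the near-Gaussian moments on the log scale, and back-transform to estimate the mean field size via E[X] = exp(μ + σ²/2). The synthetic areas stand in for the digitized LANDSAT measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
areas = rng.lognormal(mean=3.0, sigma=0.8, size=400)   # assumed field areas (ha)

log_areas = np.log(areas)
mu, sigma = log_areas.mean(), log_areas.std(ddof=1)    # near-Gaussian moments

est_mean = np.exp(mu + sigma ** 2 / 2)                 # back-transformed mean
print(f"log-scale moments: mu = {mu:.2f}, sigma = {sigma:.2f}")
print(f"estimated mean field size: {est_mean:.1f} ha "
      f"(sample mean {areas.mean():.1f} ha)")
```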
NASA Astrophysics Data System (ADS)
El-Sayed, Karimat; Mohamed, Mohamed Bakr; Hamdy, Sh.; Ata-Allah, S. S.
2017-02-01
Nano-crystalline NiFe2O4 was synthesized by citrate and sol-gel methods at different annealing temperatures, and the results were compared with a bulk sample prepared by the ceramic method. The effects of the preparation method and of different annealing temperatures on the crystallite size, strain, bond lengths, bond angles, cation distribution and degree of inversion were investigated by X-ray powder diffraction, high-resolution transmission electron microscopy, Mössbauer effect spectrometry and vibrating sample magnetometry. The cation distributions at both octahedral and tetrahedral sites were determined using both Mössbauer effect spectroscopy and a modified Bertaut method within Rietveld refinement. The Mössbauer effect spectra showed a regular decrease in the hyperfine field with decreasing particle size. Saturation magnetization and coercivity are found to be affected by the particle size and the cation distribution.
Modeling and Simulation of A Microchannel Cooling System for Vitrification of Cells and Tissues.
Wang, Y; Zhou, X M; Jiang, C J; Yu, Y T
The microchannel heat exchange system has several advantages and can be used to enhance heat transfer for vitrification. To evaluate the microchannel cooling method and to analyze the effects of key parameters such as channel structure, flow rate and sample size, a computational fluid dynamics model is applied to study the two-phase flow in microchannels and the related heat transfer process. The fluid-solid coupling problem is solved with a whole-field solution method (i.e., the flow profile in the channels and the temperature distribution in the system are simulated simultaneously). Simulation indicates that a cooling rate >10^4 °C/min is easily achievable using the microchannel method at high flow rates for a broad range of sample sizes. Channel size and the material used have a significant impact on cooling performance. Computational fluid dynamics is useful for optimizing the design and operation of the microchannel system.
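The paper's CFD model is not reproducible from the abstract, but a crude lumped-capacitance estimate (an assumption-laden simplification) shows why the cooling rate rises steeply as sample size shrinks: the instantaneous rate scales as hA(T − T_c)/(ρc_pV), and the surface-to-volume ratio A/V grows as the sample thins. All property values below are assumptions.

```python
# All values are assumptions for illustration, not the paper's parameters.
H = 1.0e4                # convective coefficient, two-phase flow (W/m^2/K)
RHO, CP = 1.0e3, 2.0e3   # sample density (kg/m^3) and heat capacity (J/kg/K)
DT = 150.0               # initial sample-to-coolant temperature difference (K)

for thickness in (2e-3, 1e-3, 0.5e-3):      # slab samples cooled from both faces
    a_over_v = 2.0 / thickness              # surface-to-volume ratio (1/m)
    rate = H * a_over_v * DT / (RHO * CP)   # instantaneous cooling rate (K/s)
    print(f"{thickness * 1e3:.1f} mm slab: ~{rate * 60:,.0f} degC/min")
```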
Closed percutaneous pleural biopsy. A lost art in the new era.
Khadadah, Mousa E; Muqim, Abdulaziz T; Al-Mutairi, Abdulla D; Nahar, Ibrahim K; Sharma, Prem N; Behbehani, Nasser H; El-Maradni, Nabeel M
2009-06-01
To assess the association between the size and number of biopsy specimens obtained by percutaneous closed pleural biopsy and the overall diagnostic yield in general, and histopathological evidence of tuberculous pleurisy in particular. One hundred and forty-three patients with a high clinical index of suspicion for tuberculous pleurisy were referred to the respiratory division of Mubarak Al-Kabeer Hospital in Kuwait during a 9-year period (January 1999 to December 2007). All subjects with an exudative, lymphocyte-predominant effusion underwent percutaneous closed pleural biopsy, looking for tuberculous granulomas. The clinical diagnosis and pathological characteristics (number and size of biopsy samples) were analyzed. The overall diagnostic yield of percutaneous closed pleural biopsy in all cases was 52%. A larger biopsy sample size (3 mm or more) and a higher number of specimens (≥4) were significantly associated with an increased diagnostic yield for tuberculous pleurisy (p=0.007 and p=0.047, respectively). Obtaining 4 or more biopsy samples, each 3 mm or larger, for histopathological evaluation through percutaneous pleural biopsy results in a better diagnostic yield for tuberculous pleurisy.
Aphesteguy, Juan Carlos; Jacobo, Silvia E; Lezama, Luis; Kurlyandskaya, Galina V; Schegoleva, Nina N
2014-06-19
Fe3O4 and ZnxFe3-xO4 pure and doped magnetite magnetic nanoparticles (NPs) were prepared in aqueous solution (Series A) or in a water-ethyl alcohol mixture (Series B) by the co-precipitation method. Only one ferromagnetic resonance line was observed in all cases under consideration, indicating that the materials are magnetically uniform. The shortfall of the resonance fields below the 3.27 kOe expected for spheres (at a frequency of 9.5 GHz) can be understood by taking into account dipolar forces, magnetoelasticity, or magnetocrystalline anisotropy. All samples show non-zero low-field absorption. For Series A samples the grain size decreases with an increase of the Zn content; in this case, zero-field absorption does not correlate with the changes in grain size. For Series B samples the grain size and zero-field absorption behavior correlate with each other. The highest zero-field absorption corresponded to a zinc concentration of 0.2 in both the A and B series. The high zero-field absorption of Fe3O4 ferrite magnetic NPs can be of interest for biomedical applications.
Effect sizes and cut-off points: a meta-analytical review of burnout in latin American countries.
García-Arroyo, Jose; Osca Segovia, Amparo
2018-05-02
Burnout is a highly prevalent, globalized health issue that causes significant physical and psychological health problems. In Latin America, research on this topic has increased in recent years; however, there are no studies comparing results across countries, nor normative reference cut-offs. The present meta-analysis examines the intensity of burnout (emotional exhaustion, cynicism and personal accomplishment) in 58 adult nonclinical samples from 8 countries (Argentina, Brazil, Chile, Colombia, Ecuador, Mexico, Peru and Venezuela). We found low intensity of burnout overall, but there are significant differences between countries in emotional exhaustion, explained by occupation and language. Social and human service professionals (police officers, social workers, public administration staff) are more exhausted than health professionals (physicians, nurses) or teachers. The Portuguese-language samples score higher in emotional exhaustion than the Spanish-language ones, supporting the theory of cultural relativism. Demographics (sex, age) and study variables (sample size, instrument) were not significant predictors of burnout. The effect sizes and confidence intervals found are proposed as a useful baseline for research and the medical diagnosis of burnout in Latin American countries.
The effect of exit beam phase aberrations on parallel beam coherent x-ray reconstructions
NASA Astrophysics Data System (ADS)
Hruszkewycz, S. O.; Harder, R.; Xiao, X.; Fuoss, P. H.
2010-12-01
Diffraction artifacts from imperfect x-ray windows near the sample are an important consideration in the design of coherent x-ray diffraction measurements. In this study, we used simulated and experimental diffraction patterns in two and three dimensions to explore the effect of phase imperfections in a beryllium window (such as a void or inclusion) on the convergence behavior of phasing algorithms and on the ultimate reconstruction. A predictive relationship between beam wavelength, sample size, and window position was derived to explain the dependence of reconstruction quality on beryllium defect size. Defects corresponding to this prediction cause the most damage to the sample exit wave and induce signature error oscillations during phasing that can be used as a fingerprint of experimental x-ray window artifacts. The relationship between x-ray window imperfection size and coherent x-ray diffractive imaging reconstruction quality explored in this work can play an important role in designing high-resolution in situ coherent imaging instrumentation and will help interpret the phasing behavior of coherent diffraction measured in these in situ environments.
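As background to the phasing behaviour discussed above, the sketch below implements a minimal error-reduction loop in 1-D: impose the measured Fourier moduli, then impose support and positivity in real space. The object, support size, and iteration count are illustrative assumptions; the study itself used 2-D and 3-D data and more sophisticated algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)
N, S = 256, 32
obj = np.zeros(N)
obj[:S] = rng.random(S)                      # true object, confined to a support
amplitude = np.abs(np.fft.fft(obj))          # "measured" diffraction moduli

guess = rng.random(N)                        # random starting estimate
for _ in range(500):
    F = np.fft.fft(guess)
    F = amplitude * np.exp(1j * np.angle(F)) # impose measured Fourier moduli
    guess = np.fft.ifft(F).real
    guess[S:] = 0.0                          # impose the support constraint
    np.clip(guess, 0.0, None, out=guess)     # impose positivity

err = np.linalg.norm(np.abs(np.fft.fft(guess)) - amplitude) / np.linalg.norm(amplitude)
print(f"final Fourier-space error: {err:.3e}")
```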