Sample records for analysis sample size

  1. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining a sufficient sample size for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. How to use the tables is also discussed. PMID:27891446
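
    The tables in this review were generated with PASS, but the underlying calculation is the widely used normal-approximation formula for the precision of a sensitivity (or specificity) estimate, scaled by disease prevalence. A minimal sketch in Python, assuming a two-sided interval of half-width d; the function name and example numbers are illustrative, not taken from the paper:

```python
import math
from scipy.stats import norm

def n_for_sensitivity(sens, d, prevalence, alpha=0.05):
    """Total subjects needed so the (1 - alpha) CI around an expected
    sensitivity has half-width d (normal approximation), allowing for
    the fact that only a fraction `prevalence` of subjects are diseased."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * sens * (1 - sens) / d**2
    return math.ceil(n_diseased / prevalence)

# Illustrative: expected sensitivity 0.90, precision +/-0.05, prevalence 0.20
print(n_for_sensitivity(0.90, 0.05, 0.20))  # -> 692
```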

  2. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.
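
    Such formulae are commonly expressed via the incremental net monetary benefit, NMB = λ·ΔE − ΔC, to which the standard two-sample normal formula applies. A sketch under that assumption; the parameter values are illustrative and not from the paper:

```python
import math
from scipy.stats import norm

def n_per_arm_nmb(lam, dE, dC, sd_e, sd_c, rho, alpha=0.05, power=0.80):
    """Per-arm n to detect a positive incremental net monetary benefit
    NMB = lam*dE - dC, via the standard two-sample normal formula."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_nmb = lam**2 * sd_e**2 + sd_c**2 - 2 * lam * rho * sd_e * sd_c
    return math.ceil(2 * z**2 * var_nmb / (lam * dE - dC) ** 2)

# Illustrative: lambda = $50,000/QALY, effect gain 0.1 QALY (SD 0.3),
# extra cost $2,000 (SD $6,000), cost-effect correlation 0.1
print(n_per_arm_nmb(50000, 0.1, 2000, 0.3, 6000, 0.1))  # -> 424
```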

  3. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  4. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit is underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
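
    Bergh's exact adjustment function is not reproduced in the abstract; the sketch below illustrates the general idea under the assumption of a simple linear rescaling, which follows from the fact that the maximum-likelihood fit statistic is proportional to N − 1, alongside the random-sample alternative. All names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def adjusted_chi2(chi2, n_orig, n_target):
    """Rescale a chi-square from n_orig to a notional n_target, using
    the fact that the ML fit statistic T = (N - 1) * F_min scales with N - 1.
    This is an assumed simple adjustment, not necessarily Bergh's function."""
    return chi2 * (n_target - 1) / (n_orig - 1)

def chi2_binary(x, p0=0.5):
    """Toy 1-df Pearson chi-square for H0: p = p0 on binary data."""
    n, k = len(x), x.sum()
    obs = np.array([k, n - k])
    exp = np.array([n * p0, n * (1 - p0)])
    return ((obs - exp) ** 2 / exp).sum()

x = rng.random(21000) < 0.51                   # data with slight misfit
print(adjusted_chi2(chi2_binary(x), 21000, 5000))  # adjustment approach
print(chi2_binary(rng.permutation(x)[:5000]))      # random-sample approach
```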

  5. Grays Harbor and Chehalis River Improvements to Navigation Environmental Studies. Grays Harbor Ocean Disposal Study. Literature Review and Preliminary Benthic Sampling,

    DTIC Science & Technology

    1980-05-01

    transects extending approximately 16 kilometers from the mouth of Grays Harbor. Sub-samples were taken for grain size analysis and wood content. The ... samples were then washed on a 1.0 mm screen to separate benthic organisms from non-living materials. Consideration of the grain size analysis ...

  6. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

    Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared (FTIR) spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single phasic without any impurity. Particle size calculation shows that the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  7. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue now widely recognized in pairwise meta-analysis is that of sample size and statistical power. This issue, however, has so far received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
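
    For an indirect comparison of A versus B through a common comparator C, an 'effective sample size' of this kind reduces, in its simplest form, to a harmonic-style combination of the two direct sample sizes; a minimal sketch under that assumption (the function name and numbers are illustrative):

```python
def effective_sample_size(n_ac, n_bc):
    """Effective n for the indirect A-B comparison formed from
    direct A-C and B-C evidence (harmonic-style combination)."""
    return n_ac * n_bc / (n_ac + n_bc)

# e.g. 2,000 patients in A-C trials and 1,000 in B-C trials
print(effective_sample_size(2000, 1000))  # ~667 'effective' patients
```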

  8. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
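
    A minimal sketch of the blinded re-estimation step itself, assuming equal allocation and the usual normal-approximation sample-size rule; the paper's exact distributional results and simulation algorithm are more involved, and all names and numbers below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def blinded_reestimated_n(pooled_data, delta, alpha=0.05, power=0.90):
    """Blinded sample size re-estimation: the one-sample variance of the
    pooled (treatment-blinded) interim data replaces the planning variance."""
    s2 = np.var(pooled_data, ddof=1)          # one-sample variance estimator
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * z**2 * s2 / delta**2))  # per arm

rng = np.random.default_rng(0)
interim = rng.normal(0, 12, size=60)          # blinded interim data
print(blinded_reestimated_n(interim, delta=5))
```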

  9. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  10. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  11. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
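
    The reported numbers can be reproduced with an iterative one-sample, two-tailed t-based calculation in which the allowable error E and the COV are both expressed as fractions of the mean ED₅₀; a sketch under those assumptions (the function name is hypothetical):

```python
from scipy.stats import t

def n_for_ed50(cov=0.25, error=0.15, alpha=0.05, power=0.80):
    """Iterative one-sample two-tailed t-test sample size: the smallest n
    with (t_{a/2,n-1} + t_{beta,n-1})^2 * (cov/error)^2 <= n."""
    n = 2
    while True:
        crit = t.ppf(1 - alpha / 2, n - 1) + t.ppf(power, n - 1)
        if (crit * cov / error) ** 2 <= n:
            return n
        n += 1

print(n_for_ed50(error=0.15))  # -> 24, matching the reported n
print(n_for_ed50(error=0.12))  # -> 37, matching the reported n
```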

  12. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution-of-the-product and bootstrapping methods outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination with the product method in longitudinal mediation study design.
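
    Sobel's method, the least powerful of the three tests compared, illustrates what is being tested: a z-statistic for the product of the two path coefficients a (X→M) and b (M→Y) via the first-order delta method. A minimal sketch (the inputs are illustrative):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel's z for the mediated effect a*b (first-order delta method)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# e.g. a = 0.4 (SE 0.1), b = 0.3 (SE 0.1): z = 2.4, p < .05
print(sobel_z(0.4, 0.1, 0.3, 0.1))
```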

  13. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met the requirement suggested by the simulations. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.

  14. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
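
    For the fixed-effect case, the two weighting schemes being compared look as follows; under a random-effects model the inverse-variance weights would instead be 1/(v_i + τ²), with τ² the between-studies variance. A sketch with made-up study data:

```python
import numpy as np

d = np.array([0.30, 0.50, 0.20])     # study effect sizes (illustrative)
v = np.array([0.012, 0.030, 0.006])  # their estimated variances
n = np.array([200, 50, 400])         # study sample sizes

def weighted_mean(effects, weights):
    """Weighted average effect size."""
    return (weights * effects).sum() / weights.sum()

print(weighted_mean(d, 1 / v))  # inverse-variance weighting
print(weighted_mean(d, n))      # sample-size weighting
```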

  15. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  16. The Precision Efficacy Analysis for Regression Sample Size Method.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for prediction. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…

  17. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  18. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637

  19. Laboratory theory and methods for sediment analysis

    USGS Publications Warehouse

    Guy, Harold P.

    1969-01-01

    The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube, depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help ensure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.

  20. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as the purpose of the research (descriptive or comparative), the type of samples (one or more groups), and the data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size, with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For ease of implementation, several examples are also illustrated via user-friendly free statistical software.
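
    As one concrete instance of the methods surveyed, the required sample size for detecting a nonzero correlation is commonly obtained through Fisher's z-transformation; a sketch (the example values are illustrative, not from the article):

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """n to detect a correlation r vs. 0 via Fisher's z-transform."""
    z_r = 0.5 * math.log((1 + r) / (1 - r))   # Fisher's z of r
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z / z_r) ** 2 + 3)

print(n_for_correlation(0.3))  # -> 85 subjects
```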

  1. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculation can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ had an increasing effect on the textural properties of ZIF-8 samples.

  3. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because only a limited amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.

  4. Analysis of ²³⁹Pu and ²⁴¹Am in NAEG large-sized bovine samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Major, W.J.; Lee, K.D.; Wessman, R.A.

    Methods are described for the analysis of environmental levels of ²³⁹Pu and ²⁴¹Am in large-sized bovine samples. Special procedure modifications to overcome the complexities of sample preparation and analysis, and special techniques employed to prepare and analyze different types of bovine samples, such as muscle, blood, liver, and bone, are discussed. (CH)

  5. Analysis of YBCO high temperature superconductor doped with silver nanoparticles and carbon nanotubes using Williamson-Hall and size-strain plot

    NASA Astrophysics Data System (ADS)

    Dadras, Sedigheh; Davoudiniya, Masoumeh

    2018-05-01

    This paper sets out to investigate and compare the effects of Ag nanoparticle and carbon nanotube (CNT) doping on the mechanical properties of the Y₁Ba₂Cu₃O₇₋δ (YBCO) high-temperature superconductor. For this purpose, pure and doped YBCO samples were synthesized by the sol-gel method. Microstructural analysis of the samples was performed using X-ray diffraction (XRD). The crystallite size, lattice strain and stress of the pure and doped YBCO samples were estimated by modified forms of Williamson-Hall (W-H) analysis, namely the uniform deformation model (UDM) and the uniform deformation stress model (UDSM), and by the size-strain plot method (SSP). The results show that the crystallite size, lattice strain and stress of the YBCO samples decline with Ag nanoparticle and CNT doping.

  6. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  7. An Integrated Tool for System Analysis of Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.

    2012-01-01

    The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.

  8. Sampling surface and subsurface particle-size distributions in wadable gravel- and cobble-bed streams for analyses in sediment transport, hydraulics, and streambed monitoring

    Treesearch

    Kristin Bunte; Steven R. Abt

    2001-01-01

    This document provides guidance for sampling surface and subsurface sediment from wadable gravel- and cobble-bed streams. After a short introduction to stream types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...

  9. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Any, not limited to healthcare settings. Any taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  11. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.

  12. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of effect size are explained. Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used software for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
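
    For the two-sample t-test case, a sketch of computing Cohen's d from raw data and converting it to a per-group sample size with the normal-approximation formula; G*Power's exact t-based result is slightly larger, and all data below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation per-group n for a two-sample t-test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * z**2 / d**2))

rng = np.random.default_rng(2)
x, y = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)
print(round(cohens_d(x, y), 2))
print(n_per_group(0.5))  # -> 63 per group for a medium effect
```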

  13. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  14. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated according to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. Meanwhile, a guideline that uses a p-value less than 0.05 is widely relied on as inferential evidence. This study therefore audited results analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. Results showed that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  15. Analysis of variograms with various sample sizes from a multispectral image

    USDA-ARS?s Scientific Manuscript database

    Variogram plays a crucial role in remote sensing application and geostatistics. It is very important to estimate variogram reliably from sufficient data. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100x100-pixel subset was chosen from ...

  16. Analysis of variograms with various sample sizes from a multispectral image

    USDA-ARS?s Scientific Manuscript database

    Variograms play a crucial role in remote sensing application and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...

  17. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
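
    The simplest approach described above, inflating an individually randomized sample size by the design effect 1 + (m − 1)·ICC for clusters of size m, looks as follows (the values are illustrative):

```python
import math

def crt_total_n(n_individual, cluster_size, icc):
    """Inflate the sample size required under individual randomization
    by the design effect 1 + (m - 1) * ICC for cluster randomization."""
    return math.ceil(n_individual * (1 + (cluster_size - 1) * icc))

# e.g. 300 participants needed under individual randomization,
# clusters of 20, ICC = 0.05  ->  585 participants in total
print(crt_total_n(300, 20, 0.05))
```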

  18. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with data from a one-factor, two-level design, including sample size estimation formulas and their realization, based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents examples for analysis, which will play a leading role in helping researchers implement the repetition principle during the research design phase.

  19. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.

  20. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.

  1. Heavy metals relationship with water and size-fractionated sediments in rivers using canonical correlation analysis (CCA) case study, rivers of south western Caspian Sea.

    PubMed

    Vosoogh, Ali; Saeedi, Mohsen; Lak, Raziyeh

    2016-11-01

    Some pollutants can qualitatively affect fresh waters such as rivers, and heavy metals are among the most important pollutants in aquatic fresh waters. Heavy metals can be found dissolved in these waters or in compounds with suspended particles and surface sediments; heavy metals can thus be said to be in equilibrium between water and sediment. In this study, the amounts of heavy metals were determined in water and in different size fractions of sediment. To obtain the relationship between heavy metals in water and in size-fractionated sediments, canonical correlation analysis (CCA) was utilized for rivers of the southwestern Caspian Sea. A case study was carried out on 18 sampling stations in nine rivers. In the first step, the concentrations of heavy metals (Cu, Zn, Cr, Fe, Mn, Pb, Ni, and Cd) were determined in water and size-fractionated sediment samples. Water sampling sites were classified by hierarchical cluster analysis (HCA) utilizing squared Euclidean distance with Ward's method. In addition, canonical correlation analysis (CCA) was utilized to interpret the results and the relationships between the concentrations of heavy metals in the river water and sediment samples. The rivers were grouped into two classes (no pollution and low pollution) based on the HCA results for the river water samples. CCA results revealed numerous relationships between the rivers of Iran's Guilan province and their size-fractionated sediment samples. The heavy metals of sediments 0.038 to 0.125 mm in diameter are slightly correlated with those of the water samples.

  2. Qualitative Meta-Analysis on the Hospital Task: Implications for Research

    ERIC Educational Resources Information Center

    Noll, Jennifer; Sharma, Sashi

    2014-01-01

    The "law of large numbers" indicates that as sample size increases, sample statistics become less variable and more closely estimate their corresponding population parameters. Different research studies investigating how people consider sample size when evaluating the reliability of a sample statistic have found a wide range of…

  3. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  4. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre- and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
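
    The quoted reductions follow from the residual variance factor for ANCOVA on the mean of k compound-symmetric follow-up measures adjusted for baseline. A sketch, under the assumption that this factor is (1 + (k − 1)ρ − kρ²)/k relative to a single unadjusted measurement; maximizing over ρ reproduces the 44%, 56%, and 61% figures at the most conservative correlation:

```python
from scipy.optimize import minimize_scalar

def variance_factor(k, rho):
    """Residual variance factor, relative to a single follow-up measure,
    for ANCOVA on the mean of k follow-ups adjusted for baseline, under
    compound symmetry with common correlation rho (assumed form)."""
    return (1 + (k - 1) * rho - k * rho**2) / k

for k in (2, 3, 4):
    # most conservative rho = the one maximizing the variance factor
    worst = minimize_scalar(lambda r: -variance_factor(k, r),
                            bounds=(0, 1), method='bounded')
    f = variance_factor(k, worst.x)
    print(k, f"reduction >= {1 - f:.0%}")  # 44%, 56%, 61%
```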

  5. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  6. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040

  7. Determining the Population Size of Pond Phytoplankton.

    ERIC Educational Resources Information Center

    Hummer, Paul J.

    1980-01-01

    Discusses methods for determining the population size of pond phytoplankton, including water sampling techniques, laboratory analysis of samples, and additional studies worthy of investigation in class or as individual projects. (CS)

  8. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us determine whether the published results in medical papers had a suitable design and proper conclusions according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula should be used, we must define the kind of study at hand: a prevalence study, a study of mean values, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.

  9. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    PubMed

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSDsampling in the same region as the RSDanalysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  12. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  13. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
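
    The clustering inflation described above is easy to illustrate for the simplest case. The sketch below (Python; the numbers are hypothetical) applies the standard cross-sectional design effect 1 + (m − 1)ρ to an individually randomised sample size; the paper's formulae additionally involve the cluster and individual autocorrelations, which this simplification omits.

        import math

        def cluster_rct_n(n_ind, m, icc):
            """Inflate an individually randomised sample size for clustering.

            n_ind : n from an individually randomised calculation
            m     : participants measured per cluster
            icc   : intracluster correlation
            """
            deff = 1 + (m - 1) * icc          # cross-sectional design effect
            return math.ceil(n_ind * deff)

        # hypothetical: 128 participants per arm, 20 per cluster, ICC = 0.05
        print(cluster_rct_n(128, 20, 0.05))   # -> 250 per arm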

  14. Only pick the right grains: Modelling the bias due to subjective grain-size interval selection for chronometric and fingerprinting approaches.

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2016-04-01

    Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", whose definitions partly differ between methods. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias into the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport-process-related distributions and their contributions to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.

  15. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus the Mack-Skillings test. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to produce contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joined rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were analyzed separately by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results for the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes, hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
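
    For the single-replication layouts above, the Friedman test is available in SciPy; the Mack-Skillings test for duplicated rankings has no SciPy implementation, so only the Friedman side is sketched here, on invented rank data (a minimal illustration, not the study's dataset).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_panelists = 30

        # hypothetical data: each row is one panelist's ranks of 3 juices
        ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(n_panelists)])

        # Friedman test: one argument per juice, values across panelists
        stat, p = stats.friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
        print(f"chi-squared = {stat:.2f}, p = {p:.3f}")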

  16. Schematic of Sample Analysis at Mars SAM Instrument

    NASA Image and Video Library

    2011-01-18

    This schematic illustration of NASA's Mars Science Laboratory Sample Analysis at Mars (SAM) instrument shows major components of the microwave-oven-sized instrument, which will examine samples of Martian rocks, soil and atmosphere.

  17. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters), and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
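
    The bootstrap convergence check described above can be reproduced in miniature. The sketch below (Python; the toy model, coefficients and sample sizes are all invented, and standardized regression coefficients stand in for the paper's sensitivity methods) bootstraps an index estimate and reports both the confidence-interval width (value convergence) and how often the factor ranking is preserved (ranking convergence).

        import numpy as np

        rng = np.random.default_rng(0)

        def src(x, y):
            # standardized regression coefficients as simple sensitivity indices
            beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(y)), x], y, rcond=None)
            return np.abs(beta[1:]) * x.std(axis=0) / y.std()

        for n in (50, 200, 1000):
            x = rng.uniform(-1, 1, size=(n, 3))
            y = 3 * x[:, 0] + x[:, 1] + 0.1 * x[:, 2] + rng.normal(0, 0.5, n)
            boot = np.array([src(x[idx], y[idx])
                             for idx in (rng.integers(0, n, n) for _ in range(500))])
            width = np.ptp(np.percentile(boot, [2.5, 97.5], axis=0), axis=0)
            stable = np.mean([tuple(np.argsort(-b)) == (0, 1, 2) for b in boot])
            print(n, np.round(width, 3), f"ranking preserved: {stable:.0%}")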

  18. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
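
    The "worst case" search is straightforward to set up numerically for the simplest setting: one treatment versus control, balanced arms, and adaptation of the overall second-stage size only (no arm selection or allocation change, unlike the paper). The sketch below (Python; the stage sizes and grids are hypothetical) maximises the conditional type 1 error of the naive fixed-sample test over the second-stage size at each interim outcome, then integrates over the interim distribution under the null.

        import numpy as np
        from scipy import stats

        n1 = 50                        # first-stage size per arm (hypothetical)
        alpha = 0.025                  # one-sided level of the naive final test
        zcrit = stats.norm.ppf(1 - alpha)
        n2 = np.arange(1, 1001)        # candidate second-stage sizes per arm

        z1 = np.linspace(-4, 4, 2001)  # interim z-scores under the null
        dens = stats.norm.pdf(z1)

        # conditional rejection probability of the naive test for each (z1, n2):
        # final Z = (sqrt(n1)*z1 + sqrt(n2)*Z2) / sqrt(n1 + n2), Z2 ~ N(0, 1)
        thresh = (zcrit * np.sqrt(n1 + n2)[None, :]
                  - np.sqrt(n1) * z1[:, None]) / np.sqrt(n2)[None, :]
        cond = stats.norm.sf(thresh)

        worst = cond.max(axis=1)                     # adversarial n2 choice
        inflated = np.sum(worst * dens) * (z1[1] - z1[0])
        print(f"maximum type 1 error ≈ {inflated:.4f} vs nominal {alpha}")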

  19. Adequacy of laser diffraction for soil particle size analysis

    PubMed Central

    Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash

    2017-01-01

    Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years, laser diffraction has begun to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed, capable of obtaining representative samples within the recommended obscuration range for laser diffraction. It was found that repeatable results were obtained even if measurements were made at the extreme ends of the manufacturer's recommended obscuration range. Results from statistical analysis suggested that the use of sample pretreatment to remove soil organic carbon (and possible traces of calcium carbonate) made minor differences to the laser diffraction particle size distributions compared to no pretreatment. These differences were found to be marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well-known reasons why sedimentation methods may be considered to 'overestimate' plate-like clay particles, while laser diffraction will 'underestimate' the proportion of clay particles. In this study we used Lin's concordance correlation coefficient to determine the equivalence of laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction equivalent thresholds corresponding to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm were < 9 μm, < 26 μm, and < 275 μm, respectively. The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043
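
    Lin's concordance correlation coefficient, used above to compare the two methods, measures agreement with the 45° line rather than mere correlation. A minimal sketch (Python; the clay-percentage values are invented for illustration):

        import numpy as np

        def lins_ccc(x, y):
            """Lin's concordance correlation coefficient between two methods."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.mean((x - x.mean()) * (y - y.mean()))
            return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        # invented clay percentages from the two particle-size methods
        plummet = [12.0, 25.5, 33.1, 8.4, 41.0]
        laser = [10.5, 24.0, 30.8, 7.9, 38.2]
        print(f"CCC = {lins_ccc(plummet, laser):.3f}")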

  20. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  1. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
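
    One common closed form for the asymptotic unconditional McNemar sample size is sketched below (Python, under the usual normal approximation; the discordant proportions are hypothetical, and readers should verify the formula against their preferred reference before use).

        from math import ceil, sqrt
        from scipy.stats import norm

        def mcnemar_n(p10, p01, alpha=0.05, power=0.80):
            """Pairs for the asymptotic unconditional McNemar test.

            p10, p01 : hypothesised discordant-cell proportions (2 x 2 table)
            """
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            psi, delta = p10 + p01, p10 - p01
            return ceil((za * sqrt(psi) + zb * sqrt(psi - delta ** 2)) ** 2
                        / delta ** 2)

        print(mcnemar_n(0.15, 0.05))   # -> 155 pairs under these assumptions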

  2. A pretreatment method for grain size analysis of red mudstones

    NASA Astrophysics Data System (ADS)

    Jiang, Zaixing; Liu, Li'an

    2011-11-01

    Traditional sediment disaggregation methods work well for loose mud sediments, but not for mudstones tightly cemented by ferric oxide minerals. In this paper, a new pretreatment method for analyzing the grain size of red mudstones is presented. The experimental samples are Eocene red mudstones from the Dongying Depression, Bohai Bay Basin. The red mudstones are composed mainly of clay minerals, clastic sediments and ferric oxides, which make the mudstones red and tightly compacted. The procedure of the method is as follows. Firstly, samples of the red mudstones were crushed into fragments 0.6-0.8 mm in diameter; secondly, the CBD (citrate-bicarbonate-dithionite) treatment was used to remove ferric oxides so that the cementation of intra-aggregates and inter-aggregates became weakened, and then 5% dilute hydrochloric acid was added to further remove the cements; thirdly, the fragments were further ground with a rubber pestle; lastly, an ultrasonicator was used to disaggregate the samples. After the treatment, the samples could then be used for grain size analysis or for other geological analyses of sedimentary grains. Compared with other pretreatment methods for size analysis of mudstones, this proposed method is more effective and has higher repeatability.

  3. Robust gene selection methods using weighting schemes for microarray data analysis.

    PubMed

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes is essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques by applying a simple modification to significance analysis of microarrays (SAM). To demonstrate the effectiveness of the proposed methods, we considered a series of synthetic datasets with different noise levels and sample sizes, along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  4. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
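
    The "powered to detect" bands above follow from the standard two-group normal-approximation formula, n per group = 2(z₁₋α/₂ + z₁₋β)²/d². A minimal sketch (Python; 80% power and a 5% two-sided α are assumptions, and the exact t-test needs slightly more):

        from math import ceil
        from scipy.stats import norm

        def n_per_group(d, alpha=0.05, power=0.80):
            """Per-group n to detect standardized mean difference d."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * z ** 2 / d ** 2)

        for d in (0.5, 0.3):
            print(d, n_per_group(d))   # 0.5 -> 63 per group, 0.3 -> 175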

  5. Meta-analysis of genome-wide association from genomic prediction models

    USDA-ARS?s Scientific Manuscript database

    A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...

  6. Landsat image and sample design for water reservoirs (Rapel dam Central Chile).

    PubMed

    Lavanderos, L; Pozo, M E; Pattillo, C; Miranda, H

    1990-01-01

    Spatial heterogeneity of the Rapel reservoir surface waters is analyzed through Landsat images. The image digital counts are used with the aim of developing an a priori quantitative sample design. Natural horizontal stratification of the Rapel Reservoir (Central Chile) is produced mainly by suspended solids. The spatial heterogeneity conditions of the reservoir for the Spring 86-Summer 87 period were determined by qualitative analysis and image processing of Landsat MSS bands 1 and 3. The space-time variations of the different observed strata were obtained with multitemporal image analysis. A random stratified sample design (r.s.s.d.) was developed, based on the statistical analysis of the digital counts. Strata population sizes, as well as the average, variance and sampling size of the digital counts, were obtained by the r.s.s.d. method. The stratification determined by analysis of satellite images was later correlated with ground data. Though the stratification of the reservoir is constant over time, the shape and size of the strata vary.
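
    For a stratified random design of this kind, a standard way to split a fixed number of samples across image-derived strata is Neyman allocation, n_h ∝ N_h·S_h. This is an assumption here, since the abstract does not state which allocation rule was used, and all numbers below are invented.

        import numpy as np

        def neyman_allocation(n_total, stratum_sizes, stratum_sds):
            """Allocate a fixed sample across strata in proportion to N_h * S_h."""
            w = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
            n_h = np.round(n_total * w / w.sum()).astype(int)
            return n_h                       # rounding may shift the total by 1

        pixels = [12000, 8000, 3000]         # stratum sizes N_h (invented)
        sds = [4.1, 9.7, 15.2]               # digital-count std devs S_h (invented)
        print(neyman_allocation(300, pixels, sds))   # -> [86, 135, 79]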

  7. Chance-corrected classification for use in discriminant analysis: Ecological applications

    USGS Publications Warehouse

    Titus, K.; Mosher, J.A.; Williams, B.K.

    1984-01-01

    A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
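
    A minimal computation of kappa from a discriminant-analysis classification table (Python; the counts are invented for illustration). Observed agreement is compared with the agreement expected by chance from the row and column margins.

        import numpy as np

        def cohens_kappa(table):
            """Chance-corrected agreement from a square classification table."""
            t = np.asarray(table, float)
            n = t.sum()
            po = np.trace(t) / n                        # observed agreement
            pe = (t.sum(0) * t.sum(1)).sum() / n ** 2   # chance agreement
            return (po - pe) / (1 - pe)

        # invented 2-group discriminant classification table (rows = truth)
        print(round(cohens_kappa([[35, 5],
                                  [10, 20]]), 3))       # -> 0.553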

  8. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed-sample-size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified at an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing the sample size to increase only in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Grain size statistics and depositional pattern of the Ecca Group sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

    NASA Astrophysics Data System (ADS)

    Baiyegunhi, Christopher; Liu, Kuiwu; Gwavava, Oswald

    2017-11-01

    Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions, and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low-energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are lacustrine or deltaic deposits. The C-M plots indicated that the sediments were deposited mainly by suspension and saltation, and by graded suspension. Visher diagrams show that saltation is the major process of transportation, followed by suspension.
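
    The graphic statistics quoted above (mean, sorting, skewness, kurtosis) are conventionally the Folk and Ward (1957) percentile measures. A sketch in Python (the sieve data are invented; percentiles are read off the cumulative curve in phi units):

        import numpy as np

        def folk_ward(phi, cum_wt):
            """Folk and Ward (1957) graphic grain-size statistics.

            phi    : grain sizes in phi units, ascending
            cum_wt : cumulative weight percent at each phi value
            """
            p = lambda q: np.interp(q, cum_wt, phi)     # phi at percentile q
            p5, p16, p25, p50, p75, p84, p95 = map(p, (5, 16, 25, 50, 75, 84, 95))
            mean = (p16 + p50 + p84) / 3
            sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6
            skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                    + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
            kurt = (p95 - p5) / (2.44 * (p75 - p25))
            return mean, sorting, skew, kurt

        # invented sieve data: phi classes and cumulative weight percent
        phi = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
        cum = [2, 10, 30, 55, 80, 95, 100]
        print([round(v, 2) for v in folk_ward(phi, cum)])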

  10. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test for differences between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on a pilot study, in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable compared with some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
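
    Absent the paper's pilot-based formulae, a plain Monte Carlo estimate of Kruskal-Wallis power is often good enough for planning. A sketch (Python; the normal location-shift alternative and the shift sizes are assumptions):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        def kw_power(n_per_group, shifts, n_sim=2000, alpha=0.05):
            """Monte Carlo power of the Kruskal-Wallis test, location shifts."""
            hits = 0
            for _ in range(n_sim):
                groups = [rng.standard_normal(n_per_group) + s for s in shifts]
                hits += stats.kruskal(*groups).pvalue < alpha
            return hits / n_sim

        for n in (10, 20, 40):                 # candidate per-group sizes
            print(n, kw_power(n, shifts=(0.0, 0.5, 1.0)))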

  11. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed-precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
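
    The fixed-precision plan above follows from Taylor's power law: if variance = a·m^b, the number of quadrats needed for precision D = SE/mean is n = a·m^(b−2)/D². A sketch (Python; the coefficients and precision level are hypothetical, not this study's fitted values):

        import math

        def taylor_n(mean, a, b, precision=0.25):
            """Quadrats for fixed precision D = SE/mean under Taylor's law."""
            return math.ceil(a * mean ** (b - 2) / precision ** 2)

        a, b = 2.5, 1.4                        # hypothetical Taylor coefficients
        for m in (0.05, 0.2, 1.0):             # ticks per 10 m^2 quadrat
            print(m, taylor_n(m, a, b))        # required n falls as abundance rises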

  12. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small-sample-size bias in such trials. We carry out a simulation study for a design in which biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small-sample-size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small-sample-size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. PIXE Analysis of Aerosol and Soil Samples Collected in the Adirondack Mountains

    NASA Astrophysics Data System (ADS)

    Yoskowitz, Joshua; Ali, Salina; Nadareski, Benjamin; Labrake, Scott; Vineyard, Michael

    2014-09-01

    We have performed an elemental analysis of aerosol and soil samples collected at Piseco Lake in Upstate New York using proton-induced X-ray emission spectroscopy (PIXE). This work is part of a systematic study of airborne pollution in the Adirondack Mountains. Of particular interest is the sulfur content that can contribute to acid rain, a well-documented problem in the Adirondacks. We used a nine-stage cascade impactor to collect the aerosol samples near Piseco Lake and distribute the particulate matter onto Kapton foils by particle size. The soil samples were also collected at Piseco Lake and pressed into cylindrical pellets for experimentation. PIXE analysis of the aerosol and soil samples was performed with 2.2-MeV proton beams from the 1.1-MV Pelletron accelerator in the Union College Ion-Beam Analysis Laboratory. There are higher concentrations of sulfur at smaller particle sizes (0.25-1 μm), suggesting that the sulfur could be suspended in the air for days and originate from sources very far away. Other elements with significant concentrations peak at larger particle sizes (1-4 μm) and are found in the soil samples, suggesting that these elements could originate in the soil. The PIXE analysis will be described and the resulting data will be presented.

  14. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  15. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  16. Sediment Grain Size Measurements: Is There a Difference Between Digested and Un-digested Samples? And Does the Organic Carbon of the Sample Play a Role

    EPA Science Inventory

    Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...

  17. Efficiency of Sampling and Analysis of Asbestos Fibers on Filter Media: Implications for Exposure Assessment

    EPA Science Inventory

    To measure airborne asbestos and other fibers, an air sample must represent the actual number and size of fibers. Typically, mixed cellulose ester (MCE, 0.45 or 0.8 µm pore size) and to a much lesser extent, capillary-pore polycarbonate (PC, 0.4 µm pore size) membrane filters are...

  18. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913

  19. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with the size of the grain, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments from the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.

  20. Selective counting and sizing of single virus particles using fluorescent aptamer-based nanoparticle tracking analysis.

    PubMed

    Szakács, Zoltán; Mészáros, Tamás; de Jonge, Marien I; Gyurcsányi, Róbert E

    2018-05-30

    Detection and counting of single virus particles in liquid samples have largely been limited to viruses with narrow size distributions and to purified formulations. To address these limitations, here we propose a calibration-free method that concurrently enables the selective recognition, counting and sizing of virus particles, as demonstrated through the detection of human respiratory syncytial virus (RSV), an enveloped virus with a broad size distribution, in throat swab samples. RSV viruses were selectively labeled through their attachment glycoproteins (G) with fluorescent aptamers, which further enabled their identification, sizing and counting at the single-particle level by fluorescent nanoparticle tracking analysis. The proposed approach seems to be generally applicable to virus detection and quantification. Moreover, it could be successfully applied to detect single RSV particles in swab samples of diagnostic relevance. Since the selective recognition is associated with the sizing of each detected particle, this method makes it possible to discriminate viral elements linked to the virus as well as various virus forms and associations.

  1. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under the conditions of the proposed study.
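
    The rule of thumb reduces to one line of arithmetic: inflate by the expected number of dropouts rather than by the usual n/(1 − p) factor. A sketch (Python; the planned size and dropout rate are invented):

        import math

        def adjust_for_dropouts(n_complete, dropout_rate):
            """Add the dropouts expected from a sample of the planned size."""
            return n_complete + math.ceil(n_complete * dropout_rate)

        print(adjust_for_dropouts(60, 0.20))   # 60 -> 72 subjects per group

    For comparison, the common inflation n/(1 − p) would give 75 here; the article's empirical results support the smaller additive adjustment.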

  2. Sample size requirements for indirect association studies of gene-environment interactions (G x E).

    PubMed

    Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny

    2008-04-01

    Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.

  3. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in the EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  4. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
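
    The general relationship behind such customized sample sizes is the textbook one for estimating a mean within a given relative error: n = (z·CV/RE)². A sketch (Python; the coefficient of variation is invented, and this is not the Cells Analyzer algorithm, which is not described here in enough detail to reproduce):

        import math
        from scipy.stats import norm

        def cells_needed(cv, rel_error=0.05, reliability=0.95):
            """Cells to estimate mean density within a given relative error."""
            z = norm.ppf(1 - (1 - reliability) / 2)
            return math.ceil((z * cv / rel_error) ** 2)

        # invented coefficient of variation of endothelial cell area
        print(cells_needed(cv=0.30))           # -> 139 cells at RE 0.05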

  5. A Note on Maximized Posttest Contrasts.

    ERIC Educational Resources Information Center

    Williams, John D.

    1979-01-01

    Hollingsworth recently showed a posttest contrast for analysis of variance situations that, for equal sample sizes, had several favorable qualities. However, for unequal sample sizes, the contrast fails to achieve status as a maximized contrast; thus, separate testing of the contrast is required. (Author/GSK)

  6. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected in biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is achieved by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
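
    Simulated experiments of this kind are easy to reproduce in outline. The sketch below (Python; the effect size, α level and normal model are assumptions, not the authors' settings) estimates the Type I and Type II error of the two-sample t-test as the per-group size grows:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        def error_rates(n, effect, n_sim=5000, alpha=0.05):
            """Monte Carlo type I and type II error of the two-sample t-test."""
            t1 = t2 = 0
            for _ in range(n_sim):
                a, b = rng.standard_normal(n), rng.standard_normal(n)
                t1 += stats.ttest_ind(a, b).pvalue < alpha            # false positive
                t2 += stats.ttest_ind(a, b + effect).pvalue >= alpha  # missed effect
            return t1 / n_sim, t2 / n_sim

        for n in (3, 6, 9):
            print(n, error_rates(n, effect=1.5))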

  7. Meta-analysis of multiple outcomes: a multilevel approach.

    PubMed

    Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio

    2015-12-01

    In meta-analysis, dependent effect sizes are very common. An example is where, in one or more studies, the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca (Behavior Research Methods, 45, 576-594, 2013) by allowing for variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample size of the studies. At the same time, we explore the performance of the approach if the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.

  8. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.

  9. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    de Craen, Saskia; Commandeur, Jacques J. F.; Frank, Laurence E.; Heiser, Willem J.

    2006-01-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted, in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in the samples taken from these…

  10. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV(BI)) and among populations (ITV(POP)), relatively few studies have analyzed intraspecific variability within individuals (ITV(WI)). Here, we provide an analysis of ITV(WI) of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV(WI) level of variation between the two traits and provided the minimum and optimal sampling sizes in order to take ITV(WI) into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect traits' variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.

  11. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  12. What is an adequate sample size? Operationalising data saturation for theory-based interview studies.

    PubMed

    Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M

    2010-12-01

    In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and a stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for the other belief categories or for studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about undergoing genetic testing. Studywise data saturation was achieved at interview 17. We propose that these principles be specified when reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
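
    The two principles above amount to a simple stopping rule. Below is a minimal sketch (Python; the per-interview sets of coded beliefs are hypothetical inputs produced by the analyst, not data from the original studies), using an initial analysis sample of 10 and a stopping criterion of 3:

      def saturation_point(beliefs_per_interview, initial_sample=10, stopping_criterion=3):
          """Return the interview number at which saturation is declared, or None.

          beliefs_per_interview: list of sets, one set of coded beliefs per
          interview, in interview order (hypothetical analyst-coded input).
          """
          seen = set()
          run_without_new = 0
          for i, beliefs in enumerate(beliefs_per_interview, start=1):
              new_ideas = beliefs - seen
              seen |= beliefs
              if i <= initial_sample:
                  continue  # analyse the initial sample before applying the criterion
              run_without_new = 0 if new_ideas else run_without_new + 1
              if run_without_new >= stopping_criterion:
                  return i
          return None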

  13. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The main outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
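
    The core computation, a sample-size-weighted mean correlation with a confidence interval, can be sketched as follows (Python/NumPy; the per-study correlations and sample sizes are hypothetical placeholders, and the Hunter-Schmidt-style variance is one common convention, not necessarily the authors' exact procedure):

      import numpy as np

      # Hypothetical per-study effect sizes (Pearson r) and sample sizes.
      r = np.array([0.05, 0.09, 0.07, 0.04])
      n = np.array([1200, 800, 15000, 2500])

      r_bar = np.sum(n * r) / np.sum(n)                   # sample-weighted mean effect size
      var_r = np.sum(n * (r - r_bar) ** 2) / np.sum(n)    # weighted variance of r
      se = np.sqrt(var_r / len(r))                        # rough SE of the weighted mean
      print(f"r = {r_bar:.3f}, 95% CI = ({r_bar - 1.96 * se:.3f}, {r_bar + 1.96 * se:.3f})")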

  14. A LDR-PCR approach for multiplex polymorphisms genotyping of severely degraded DNA with fragment sizes <100 bp.

    PubMed

    Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng

    2009-11-01

    Reducing amplicon sizes has become a major strategy for analyzing degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat-polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. In this study, we present a multiplex typing method that couples the ligase detection reaction with PCR and can be used to identify single nucleotide polymorphisms and small-scale insertions/deletions in a sample of severely fragmented DNA. This method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. Four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded sample of DNA with fragment sizes <100 bp. Our results showed clear allelic discrimination of single or multiple loci, suggesting that this method might aid in the analysis of extremely degraded samples in which allelic drop-out of larger fragments is observed.

  15. Visual accumulation tube for size analysis of sands

    USGS Publications Warehouse

    Colby, B.C.; Christensen, R.P.

    1956-01-01

    The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.

  16. Analysis of particulates on tape lift samples

    NASA Astrophysics Data System (ADS)

    Moision, Robert M.; Chaney, John A.; Panetta, Chris J.; Liu, De-Ling

    2014-09-01

    Particle counts on tape lift samples taken from a hardware surface exceeded threshold requirements in six successive tests despite repeated cleaning of the surface. Subsequent analysis of the particle size distributions of the failed tests revealed that the handling and processing of the tape lift samples may have played a role in the test failures. In order to explore plausible causes for the observed size distribution anomalies, scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and time-of-flight secondary ion mass spectrometry (ToF-SIMS) were employed to perform chemical analysis on collected particulates. SEM/EDX identified Na and S containing particles on the hardware samples in a size range identified as being responsible for the test failures. ToF-SIMS was employed to further examine the Na and S containing particulates and identified the molecular signature of sodium alkylbenzene sulfonates, a common surfactant used in industrial detergent. The root cause investigation suggests that the tape lift test failures originated from detergent residue left behind on the glass slides used to mount and transport the tape following sampling and not from the hardware surface.

  17. Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2007-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the laboratory for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  18. Underwater Microscope for Measuring Spatial and Temporal Changes in Bed-Sediment Grain Size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2006-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the lab for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in-situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  19. Methods for Determining Particle Size Distributions from Nuclear Detonations.

    DTIC Science & Technology

    1987-03-01

    [Abstract not available; the record text consists of table-of-contents fragments: Summary of Sample Preparation Method; Set Parameters for PCS; Analysis by Vendors; Results from Brookhaven Analysis Using the Method of Cumulants; Results from Brookhaven Analysis of Samples R-3 and R-8 Using the Histogram Method; TEM Particle ...]

  20. Device for high spatial resolution chemical analysis of a sample and method of high spatial resolution chemical analysis

    DOEpatents

    Van Berkel, Gary J.

    2015-10-06

    A system and method for analyzing a chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining a chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, with a spatial resolution limited by the size of the pins used to obtain tissue samples rather than by the size of the sampling device used to solubilize the samples coupled to the pins.

  1. Major and trace element chemistry of Luna 24 samples from Mare Crisium

    NASA Technical Reports Server (NTRS)

    Blanchard, D. P.; Brannon, J. C.; Aaboe, E.; Budahn, J. R.

    1978-01-01

    Atomic absorption spectrometry and instrumental neutron activation analysis were employed to analyze six Luna 24 soils for major and trace elements. The analysis revealed well-mixed soils, though size fractions of each of the soils showed quite dissimilar compositions. Thus the regolith apparently has not been extensively reworked. Noritic breccia admixed preferentially to the finest size fractions and differential comminution of one or more other soil components accounted for the observed elemental distributions as a function of grain size. The ferrobasalt composition and one or more components with higher MgO contents have been identified in the samples.

  2. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  3. The impact of multiple endpoint dependency on Q and I² in meta-analysis.

    PubMed

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-09-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on the homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
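
    The GLS estimator described above weights the stacked effect sizes by the inverse of their block-diagonal variance-covariance matrix. A minimal sketch (Python/SciPy; the common between-outcomes correlation rho and the toy data are illustrative assumptions):

      import numpy as np
      from scipy.linalg import block_diag

      def gls_common_effect(effects, variances, rho, outcomes_per_study):
          """Fixed-effects GLS estimate of a common mean effect when each study
          contributes several correlated outcome effect sizes."""
          blocks, start = [], 0
          for m in outcomes_per_study:
              v = variances[start:start + m]
              block = rho * np.outer(np.sqrt(v), np.sqrt(v))  # within-study covariances
              np.fill_diagonal(block, v)                      # sampling variances
              blocks.append(block)
              start += m
          w = np.linalg.inv(block_diag(*blocks))              # inverse-covariance weights
          x = np.ones(len(effects))
          mu = (x @ w @ effects) / (x @ w @ x)
          return mu, 1.0 / (x @ w @ x)                        # estimate and its variance

      # Two hypothetical studies, two endpoints each, correlated at rho = 0.6.
      mu, var_mu = gls_common_effect(np.array([0.30, 0.25, 0.40, 0.35]),
                                     np.array([0.02, 0.02, 0.03, 0.03]),
                                     rho=0.6, outcomes_per_study=[2, 2])
      print(mu, var_mu)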

  4. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

    Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to its nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  5. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
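
    The simulation approach described for the first issue can be sketched as follows (Python; the treatment risk ratio, pooled z-test, and Beta parameterization derived from the stated mean of 0.3 and coefficient of variation of 0.3 are illustrative assumptions, not the trial's actual protocol):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def beta_params(mean, cv):
          # Convert a mean and coefficient of variation into Beta(a, b) parameters.
          var = (cv * mean) ** 2
          s = mean * (1 - mean) / var - 1
          return mean * s, (1 - mean) * s

      def simulated_power(n_per_arm, n_centers, risk_ratio=0.67, n_sims=2000, alpha=0.05):
          # Control-arm preterm-birth probability varies by center: Beta with
          # mean 0.3 and CV 0.3 as in the record; risk_ratio is an assumed effect.
          a, b = beta_params(0.30, 0.30)
          m = n_per_arm // n_centers              # patients per arm per center
          n = m * n_centers
          rejections = 0
          for _ in range(n_sims):
              p_ctrl = rng.beta(a, b, size=n_centers)
              p_trt = np.clip(p_ctrl * risk_ratio, 0.0, 1.0)
              x_ctrl = rng.binomial(m, p_ctrl).sum()
              x_trt = rng.binomial(m, p_trt).sum()
              p_pool = (x_ctrl + x_trt) / (2 * n)
              se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
              z = (x_ctrl - x_trt) / n / se       # two-proportion z-test
              rejections += 2 * stats.norm.sf(abs(z)) < alpha
          return rejections / n_sims

      print(simulated_power(n_per_arm=700, n_centers=20))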

  6. QESA: Quarantine Extraterrestrial Sample Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Simionovici, A.; Lemelle, L.; Beck, P.; Fihman, F.; Tucoulou, R.; Kiryukhina, K.; Courtade, F.; Viso, M.

    2018-04-01

    Our nondestructive, nm-scale, hyperspectral analysis methodology, which combines X-ray, Raman, and IR probes under BSL4 quarantine, renders our patented mini-sample holder ideal for detecting extraterrestrial life. Our Stardust and Archean results validate the approach.

  7. Image analysis of representative food structures: application of the bootstrap method.

    PubMed

    Ramírez, Cristian; Germain, Juan C; Aguilera, José M

    2009-08-01

    Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between the CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
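
    A minimal sketch of the procedure (Python/NumPy; the binary test image and sub-area sizes are placeholders, and taking SE(Bn) as the standard deviation of the bootstrap replicates is one common convention, not necessarily the authors' exact definition):

      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_subareas(image, size, n_boot=500):
          """CV(Bn) and SE(Bn) of a feature statistic (here, the element-area
          fraction) over randomly placed square sub-areas of a binary image."""
          h, w = image.shape
          stats = np.empty(n_boot)
          for b in range(n_boot):
              r = rng.integers(0, h - size + 1)
              c = rng.integers(0, w - size + 1)
              stats[b] = image[r:r + size, c:c + size].mean()
          se = stats.std(ddof=1)
          return se / stats.mean(), se          # CV(Bn), SE(Bn)

      # Hypothetical 1200 x 1200 binary image with ~20% "element" pixels.
      img = (rng.random((1200, 1200)) < 0.2).astype(float)
      for side in (400, 1000):
          print(side, bootstrap_subareas(img, side))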

  8. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

    Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsule benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.

  9. Estimating the quadratic mean diameter of fine woody debris for forest type groups of the United States

    Treesearch

    Christopher W. Woodall; Vicente J. Monleon

    2009-01-01

    The Forest Inventory and Analysis program of the Forest Service, U.S. Department of Agriculture conducts a national inventory of fine woody debris (FWD); however, the sampling protocols involve tallying only the number of FWD pieces by size class that intersect a sampling transect with no measure of actual size. The line intersect estimator used with those samples...

  10. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.

  11. Fish assemblages

    USGS Publications Warehouse

    McGarvey, Daniel J.; Falke, Jeffrey A.; Li, Hiram W.; Li, Judith; Hauer, F. Richard; Lamberti, G.A.

    2017-01-01

    Methods to sample fishes in stream ecosystems and to analyze the raw data, focusing primarily on assemblage-level (all fish species combined) analyses, are presented in this chapter. We begin with guidance on sample site selection, permitting for fish collection, and information-gathering steps to be completed prior to conducting fieldwork. Basic sampling methods (visual surveying, electrofishing, and seining) are presented with specific instructions for estimating population sizes via visual, capture-recapture, and depletion surveys, in addition to new guidance on environmental DNA (eDNA) methods. Steps to process fish specimens in the field including the use of anesthesia and preservation of whole specimens or tissue samples (for genetic or stable isotope analysis) are also presented. Data analysis methods include characterization of size-structure within populations, estimation of species richness and diversity, and application of fish functional traits. We conclude with three advanced topics in assemblage-level analysis: multidimensional scaling (MDS), ecological networks, and loop analysis.

  12. Structure of Nano-sized CeO 2 Materials: Combined Scattering and Spectroscopic Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.

    Here, the nature of nano-sized ceria (CeO2) systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancies of both Ce4+ and O2- are very close to the ideal stoichiometry, analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis revealed that the main differences observed in the pair correlations from the various X-ray and neutron diffraction techniques were attributable to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data supports this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.

  13. Structure of Nano-sized CeO 2 Materials: Combined Scattering and Spectroscopic Investigations

    DOE PAGES

    Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.; ...

    2016-08-29

    Here, the nature of nano-sized ceria (CeO2) systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancies of both Ce4+ and O2- are very close to the ideal stoichiometry, analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis revealed that the main differences observed in the pair correlations from the various X-ray and neutron diffraction techniques were attributable to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data supports this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.

  14. Nanometer-sized alumina packed microcolumn solid-phase extraction combined with field-amplified sample stacking-capillary electrophoresis for the speciation analysis of inorganic selenium in environmental water samples.

    PubMed

    Duan, Jiankuan; Hu, Bin; He, Man

    2012-10-01

    In this paper, a new method of nanometer-sized alumina packed microcolumn SPE combined with field-amplified sample stacking (FASS)-CE-UV detection was developed for the speciation analysis of inorganic selenium in environmental water samples. Self-synthesized nanometer-sized alumina was packed in a microcolumn as the SPE adsorbent to retain Se(IV) and Se(VI) simultaneously at pH 6 and the retained inorganic selenium was eluted by concentrated ammonia. The eluent was used for FASS-CE-UV analysis after NH₃ evaporation. The factors affecting the preconcentration of both Se(IV) and Se(VI) by SPE and FASS were studied and the optimal CE separation conditions for Se(IV) and Se(VI) were obtained. Under the optimal conditions, the LODs of 57 ng L⁻¹ (Se(IV)) and 71 ng L⁻¹ (Se(VI)) were obtained, respectively. The developed method was validated by the analysis of a certified reference material of GBW(E)080395 environmental water and the determined value was in a good agreement with the certified value. It was also successfully applied to the speciation analysis of inorganic selenium in environmental water samples, including Yangtze River water, spring water, and tap water. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. EXAFS analysis of cations distribution in structure of Co1-xNixFe2O4 nanoparticles obtained by hydrothermal method in aloe vera extract solution

    NASA Astrophysics Data System (ADS)

    Wongpratat, Unchista; Maensiri, Santi; Swatsitang, Ekaphan

    2016-09-01

    The effect of cation distribution, determined by EXAFS analysis, on the magnetic properties of Co1-xNixFe2O4 (x = 0, 0.25, 0.50, 0.75 and 1.0) nanoparticles prepared by the hydrothermal method in aloe vera extract solution was studied. XRD analysis confirmed a pure cubic spinel ferrite phase in all samples. Changes in lattice parameter and particle size depended on the Ni content, with partial substitution and site distributions of Co2+ and Ni2+ ions of different ionic radii at both tetrahedral and octahedral sites in the crystal structure. Particle sizes estimated from TEM images were found to be in the range of 10.87-62.50 nm. The VSM results at room temperature indicated ferrimagnetic behavior in all samples. Superparamagnetic behavior was observed in the NiFe2O4 sample. The coercivity (Hc) and remanence (Mr) values were related to the particle sizes of the samples. The saturation magnetization (Ms) was increased by a factor of 1.4 to a value of 57.57 emu/g, whereas the coercivity (Hc) was decreased by a factor of 20 to a value of 63.15 Oe for the sample with x = 0.75. In addition to the cation distribution, the increase of aspect ratio (surface to volume ratio) due to the decrease of particle size could significantly affect the magnetic properties of the materials.

  16. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, covering two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items, and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
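
    For the large-lot case, the optimization described above, finding the smallest sample size n and acceptance number c whose operating characteristic satisfies both risk constraints, can be sketched as follows (Python/SciPy; the AQL, limiting quality level, and risk values are illustrative assumptions, not figures from the record):

      from scipy.stats import poisson

      def design_single_sampling_plan(aql, lq, producer_risk=0.05, consumer_risk=0.10,
                                      n_max=5000):
          """Smallest (n, c) with P(accept | p = aql) >= 1 - producer_risk and
          P(accept | p = lq) <= consumer_risk (Poisson model for large lots)."""
          for n in range(1, n_max + 1):
              for c in range(n + 1):
                  if poisson.cdf(c, n * lq) > consumer_risk:
                      break  # a larger c only raises acceptance of bad lots
                  if poisson.cdf(c, n * aql) >= 1 - producer_risk:
                      return n, c
          return None

      print(design_single_sampling_plan(aql=0.01, lq=0.05))  # roughly (134, 3) under these settings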

  17. Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.

    DTIC Science & Technology

    1986-04-01

    [Fragmented record text; recoverable content: for wear metal analysis of engine lubricants, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000 microliters or more), but it requires the use of hydrofluoric acid; sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more.]

  18. Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data

    PubMed Central

    Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.

    2015-01-01

    An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914

  19. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

    PubMed

    Oba, Yurika; Yamada, Toshihiro

    2017-05-01

    We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggests that the contribution of root uptake to 137Cs in new needles of this species may be minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that statistical analysis of such data should consider spatial autocorrelation when assessing the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error in the means of no more than 10%. This required sample size may be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
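
    The final calculation, the smallest n for which the confidence half-width stays within 10% of the mean, follows the standard relation n >= (t*CV/E)^2. A minimal sketch (Python/SciPy; the coefficient of variation below is a hypothetical placeholder, not the study's measured value):

      import math
      from scipy import stats

      def n_for_relative_error(cv, rel_error=0.10, conf=0.95):
          """Smallest n whose CI half-width for the mean is <= rel_error * mean,
          iterating because the t quantile depends on n."""
          n = 2
          while stats.t.ppf(1 - (1 - conf) / 2, df=n - 1) * cv / math.sqrt(n) > rel_error:
              n += 1
          return n

      print(n_for_relative_error(cv=0.13))  # hypothetical CV of 13% -> n = 9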

  20. Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

    ERIC Educational Resources Information Center

    Meade, Adam W.; Bauer, Daniel J.

    2007-01-01

    This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…

  1. Lowering sample size in comparative analyses can indicate a correlation where there is none: example from Rensch's rule in primates.

    PubMed

    Lindenfors, P; Tullberg, B S

    2006-07-01

    The fact that characters may co-vary in organism groups because of shared ancestry, and not always because of functional correlations, was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite of the expected effect of increasing sample size: normally, a larger sample size increases the chance of finding a correlation. The problem occurs when co-variation between the two continuous characters analysed is clumped in clades, e.g. when some phylogenetically conservative factor affects both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, relative to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove exactly the variation that diffuses the evolutionary correlation between the two characters - the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.

  2. Assessment of optimum threshold and particle shape parameter for the image analysis of aggregate size distribution of concrete sections

    NASA Astrophysics Data System (ADS)

    Ozen, Murat; Guler, Murat

    2014-02-01

    Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned to obtain cross-sectional images using a desktop flatbed scanner. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections. A procedure is also suggested to determine a suitable particle shape parameter to be used in the analysis of aggregate size distribution within each cross section. Results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may differ even between cross sections of an identical concrete sample. In addition, the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners evaluating concrete in terms of the degree of segregation and the bounds of the mixture's gradation achieved during manufacturing.

  3. Phase transformations in a Cu−Cr alloy induced by high pressure torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen

    2016-04-15

    Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy, pre-annealed at 550 °C and 1000 °C in order to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and in dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar from the standpoint of grain size, phase composition, texture analysis and hardness measurements. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.

  4. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850 nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seeds in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample sets, respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.

  5. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features

    PubMed Central

    McDonald, Linda S.; Panozzo, Joseph F.; Salisbury, Phillip A.; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850 nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seeds in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample sets, respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective. PMID:27176469
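
    A minimal sketch of the classification step described in these two records (Python/scikit-learn; the trait matrix below is randomly generated stand-in data, not the published seed measurements):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)

      # Hypothetical data: one row per seed sample; columns stand in for median
      # trait values (six colour intensities plus shape and size descriptors).
      grades = rng.integers(0, 4, size=200)              # four hypothetical market grades
      X = rng.normal(size=(200, 8)) + grades[:, None]    # class-shifted features

      X_cal, X_val, y_cal, y_val = train_test_split(X, grades, test_size=0.3,
                                                    random_state=0)
      lda = LinearDiscriminantAnalysis().fit(X_cal, y_cal)
      print(f"Validation accuracy: {lda.score(X_val, y_val):.2f}")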

  6. Effect of soil texture and chemical properties on laboratory-generated dust emissions from SW North America

    NASA Astrophysics Data System (ADS)

    Mockford, T.; Zobeck, T. M.; Lee, J. A.; Gill, T. E.; Dominguez, M. A.; Peinado, P.

    2012-12-01

    Understanding the controls on mineral dust emissions and their particle size distributions during wind-erosion events is critical, as dust particles have a significant impact on shaping the Earth's climate. It has been suggested that emission rates and particle size distributions are independent of soil chemistry and soil texture. In this study, 45 samples of wind-erodible surface soils from the Southern High Plains and Chihuahuan Desert regions of Texas, New Mexico, Colorado and Chihuahua were analyzed by the Lubbock Dust Generation, Analysis and Sampling System (LDGASS) and a Beckman-Coulter particle multisizer. The LDGASS created dust emissions in a controlled laboratory setting using a rotating arm which allows particle collisions. The emitted dust was transferred to a chamber where particulate matter concentration was recorded using a DataRam and MiniVol filter, and dust particle size distribution was recorded using a GRIMM particle analyzer. Particle size distributions were also determined from samples deposited on the MiniVol filters using a Beckman-Coulter particle multisizer. Soil textures of the source samples ranged from sands and sandy loams to clays and silts. Initial results suggest that total dust emissions increased with increasing soil clay and silt content and decreased with increasing sand content. Particle size distribution analysis showed a similar relationship: soils with high silt content produced the widest range of dust particle sizes and the smallest dust particles, while sand grains seem to produce the largest dust particles. Chemical control of dust emissions by calcium carbonate content will also be discussed.

  7. A Fracture Mechanics Approach to Thermal Shock Investigation in Alumina-Based Refractory

    NASA Astrophysics Data System (ADS)

    Volkov-Husović, T.; Heinemann, R. Jančić; Mitraković, D.

    2008-02-01

    The thermal shock behavior of large grain size, alumina-based refractories was investigated experimentally using a standard water quench test. A mathematical model was employed to simulate the thermal stability behavior. Behavior of the samples under repeated thermal shock was monitored using ultrasonic measurements of dynamic Young's modulus. Image analysis was used to observe the extent of surface degradation. Analysis of the obtained results for the behavior of large grain size samples under conditions of rapid temperature changes is given.

  8. Device and technique for in-process sampling and analysis of molten metals and other liquids presenting harsh sampling conditions

    DOEpatents

    Alvarez, J.L.; Watson, L.D.

    1988-01-21

    An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to an analysis apparatus. The gas and liquid are sheared into small particles of a size and uniformity that form a spray which can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas. 5 figs.

  9. Statistical theory and methodology for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1974-01-01

    A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem, wheat vs. non-wheat, since this simplifies the estimation considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme for acquiring sample data is suggested, and the problem of crop acreage estimation and its error analysis is discussed.
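
    In the two-crop formulation, the sample-size question reduces to estimating a single proportion to a stated precision. A minimal sketch (Python; the anticipated wheat proportion and half-width are assumptions for illustration, not values from the record):

      import math

      def n_for_proportion(p, half_width, z=1.96):
          """Sample size so a z-level CI for a proportion p has the given half-width."""
          return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

      # E.g., anticipated wheat proportion 0.30, desired half-width 0.02:
      print(n_for_proportion(0.30, 0.02))  # 2017 sampled units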

  10. Biomass Compositional Analysis Laboratory Procedures | Bioenergy | NREL

    Science.gov Websites

    [Fragmented web-page text; recoverable content: the laboratory procedures describe methods for sample drying and size reduction, obtaining samples, and determining the amount of solids or moisture present in a solid or slurry biomass sample; a further note concerns values reported by neutral detergent fiber (NDF) and acid detergent fiber (ADF) methods.]

  11. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain that would reveal the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.

  12. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, sample size estimation by the bootstrap procedure for comparing two parallel-design arms on continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the same data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the features of such data, the nonparametric Wilcoxon test was used to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are representative of the population to which the proposed trial will extrapolate.
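
    A minimal sketch of the bootstrap power-estimation loop described above (Python/SciPy; the pilot data are simulated placeholders): for a candidate per-arm n, resample with replacement from the pilot arms, test each bootstrap sample with the same statistic planned for the final analysis (here, the Wilcoxon rank-sum test), and take the rejection fraction as the power.

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(7)

      def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=1000, alpha=0.05):
          """Estimate power at n_per_arm by resampling the pilot arms with
          replacement and applying the Wilcoxon rank-sum (Mann-Whitney U) test."""
          rejections = 0
          for _ in range(n_boot):
              a = rng.choice(pilot_a, size=n_per_arm, replace=True)
              b = rng.choice(pilot_b, size=n_per_arm, replace=True)
              rejections += mannwhitneyu(a, b, alternative='two-sided').pvalue < alpha
          return rejections / n_boot

      # Hypothetical skewed (lognormal) pilot data, 30 subjects per arm.
      pilot_a = rng.lognormal(mean=0.0, sigma=0.8, size=30)
      pilot_b = rng.lognormal(mean=0.4, sigma=0.8, size=30)
      for n in (40, 60, 80):
          print(n, bootstrap_power(pilot_a, pilot_b, n))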

  13. Methods to increase reproducibility in differential gene expression via meta-analysis

    PubMed Central

    Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh

    2017-01-01

    Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis, and clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a ‘silver standard’ of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We showed relatively greater reproducibility when more-stringent effect size thresholds were combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets, even when controlling for sample size. PMID:27634930
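
    For readers unfamiliar with random-effects pooling, the Python sketch below implements the common DerSimonian-Laird estimator (the paper compares three random-effects models, not necessarily this one); inputs are per-study effect estimates and their variances.

        import numpy as np

        def dersimonian_laird(effects, variances):
            # Random-effects pooled estimate via the DerSimonian-Laird estimator.
            y = np.asarray(effects, float)
            v = np.asarray(variances, float)
            w = 1.0 / v                                # fixed-effect weights
            y_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)    # between-study variance
            w_re = 1.0 / (v + tau2)
            pooled = np.sum(w_re * y) / np.sum(w_re)
            return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2

        # Toy usage: five studies with made-up log fold-changes and variances.
        print(dersimonian_laird([0.8, 0.5, 1.1, 0.3, 0.7],
                                [0.04, 0.09, 0.05, 0.12, 0.06]))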

  14. Estimating the Size of a Large Network and its Communities from a Random Sample

    PubMed Central

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W.

    2017-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924

  15. Estimating the Size of a Large Network and its Communities from a Random Sample.

    PubMed

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
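
    PULSE itself is not reproduced in the abstract. Purely to illustrate the sampling design it operates on, here is a hedged Python sketch using networkx: an SBM population graph is sampled by inducing a subgraph on a random vertex subset, and a naive moment estimator (not PULSE) recovers N from the ratio of total to within-sample degrees; the block sizes and probabilities are made up.

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical SBM population graph with two blocks.
        G = nx.stochastic_block_model([600, 400], [[0.05, 0.01], [0.01, 0.06]], seed=1)

        # Sample: induced subgraph plus each sampled vertex's *total* degree.
        W = rng.choice(G.number_of_nodes(), size=150, replace=False)
        GW = G.subgraph(W)
        total_deg = np.array([G.degree(v) for v in W])
        within_deg = np.array([GW.degree(v) for v in W])

        # Moment estimator: E[within-sample degree] ≈ total degree * (n-1)/(N-1).
        n = len(W)
        N_hat = 1 + (n - 1) * total_deg.sum() / within_deg.sum()
        print(f"true N = {G.number_of_nodes()}, naive estimate = {N_hat:.0f}")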

  16. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    PubMed Central

    Irei, Satoshi

    2016-01-01

    Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Artificial sample extracts were analyzed directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall the methodology was valid to within an uncertainty of ~30%. Measurements of airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers were concentrated in PM smaller than 0.4 μm aerodynamic diameter, consistent with their expected sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511

  17. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    PubMed Central

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933

  18. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
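
    A minimal Python sketch of the MultSE quantity, assuming the construction indicated by the abstract (a pseudo variance V obtained from the sum of squared interpoint dissimilarities, with MultSE = sqrt(V/n)); the Bray-Curtis metric and the toy abundance matrix are illustrative choices, and the authors' R functions are the authoritative implementation.

        import numpy as np
        from scipy.spatial.distance import pdist

        def mult_se(X, metric="braycurtis"):
            # Pseudo multivariate standard error from squared dissimilarities.
            n = X.shape[0]
            ss = (pdist(X, metric=metric) ** 2).sum() / n   # total sum of squares
            v = ss / (n - 1)                                # pseudo variance
            return np.sqrt(v / n)

        # Toy community matrix: rows = sampling units, columns = species counts.
        rng = np.random.default_rng(2)
        for n in (5, 10, 20, 40):
            X = rng.poisson(3.0, size=(n, 8))
            print(n, round(mult_se(X), 3))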

  19. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  20. Using Sieving and Unknown Sand Samples for a Sedimentation-Stratigraphy Class Project with Linkage to Introductory Courses

    ERIC Educational Resources Information Center

    Videtich, Patricia E.; Neal, William J.

    2012-01-01

    Using sieving and sample "unknowns" for instructional grain-size analysis and interpretation of sands in undergraduate sedimentology courses has advantages over other techniques. Students (1) learn to calculate and use statistics; (2) visually observe differences in the grain-size fractions, thereby developing a sense of specific size…

  1. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention, mainly because of limitations in rock testing facilities and the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for analysis, with particular interest in sample-size dependency. Several important phenomena were observed: (a) the rate of transition from brittleness to ductility is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting lower friction coefficients than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as the result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  2. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
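
    The resampling design is easy to emulate. The Python sketch below draws bootstrap samples of several sizes from a synthetic data set with a small true effect and records how the effect size (R²) and its significance fluctuate; all numbers are stand-ins for the stroke cohort.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Synthetic stand-in: lesion load weakly predicts a deficit score.
        N = 360
        lesion_load = rng.uniform(0, 1, N)
        deficit = 0.5 * lesion_load + rng.normal(0, 1, N)

        for n in (30, 60, 90, 180, 360):
            r2, p = [], []
            for _ in range(2000):
                idx = rng.integers(0, N, n)                 # bootstrap resample
                res = stats.linregress(lesion_load[idx], deficit[idx])
                r2.append(res.rvalue ** 2)
                p.append(res.pvalue)
            lo, hi = np.percentile(r2, [5, 95])
            power = (np.array(p) < 0.05).mean()
            print(f"n={n:4d}  R2 5th-95th pctile {lo:.3f}-{hi:.3f}  share p<0.05: {power:.2f}")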

  3. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.

  4. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
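
    The following Python sketch mimics the parametric bootstrap power calculation described here: Se concentrations are drawn from a gamma distribution whose variance is tied to the mean, and power is the fraction of simulated samples for which a one-sided test exceeds the threshold. The mean-to-variance function and the t-test are assumptions, not the paper's fitted relationship (scipy >= 1.6 is needed for the alternative keyword).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        def power_above_threshold(true_mean, threshold, n_fish, var_fn,
                                  n_sim=2000, alpha=0.05):
            var = var_fn(true_mean)
            shape = true_mean ** 2 / var        # gamma moment matching
            scale = var / true_mean
            hits = 0
            for _ in range(n_sim):
                sample = rng.gamma(shape, scale, n_fish)
                # One-sided, one-sample t-test against the management threshold.
                _, p = stats.ttest_1samp(sample, threshold, alternative="greater")
                hits += p < alpha
            return hits / n_sim

        var_fn = lambda m: 0.5 * m ** 1.5       # hypothetical mean-to-variance link
        for n_fish in (5, 8, 15):
            print(n_fish, power_above_threshold(5.0, 4.0, n_fish, var_fn))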

  5. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but it struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph, and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases, and present empirical results showing that the degradation follows a logistic function.

  6. Sample manipulation and data assembly for robust microcrystal synchrotron crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Gongrui; Fuchs, Martin R.; Shi, Wuxian

    With the recent developments in microcrystal handling, synchrotron microdiffraction beamline instrumentation, and data analysis, microcrystal crystallography with crystal sizes of less than 10 µm is appealing at synchrotrons. However, challenges remain in sample manipulation and data assembly for robust microcrystal synchrotron crystallography. Here, the development of micro-sized polyimide well-mounts for the manipulation of microcrystals of a few micrometres in size, and the implementation of a robust data-analysis method for the assembly of rotational microdiffraction data sets from many microcrystals, are described. The method demonstrates that microcrystals may be routinely utilized for the acquisition and assembly of complete data sets from synchrotron microdiffraction beamlines.

  7. Sample manipulation and data assembly for robust microcrystal synchrotron crystallography

    DOE PAGES

    Guo, Gongrui; Fuchs, Martin R.; Shi, Wuxian; ...

    2018-04-19

    With the recent developments in microcrystal handling, synchrotron microdiffraction beamline instrumentation, and data analysis, microcrystal crystallography with crystal sizes of less than 10 µm is appealing at synchrotrons. However, challenges remain in sample manipulation and data assembly for robust microcrystal synchrotron crystallography. Here, the development of micro-sized polyimide well-mounts for the manipulation of microcrystals of a few micrometres in size, and the implementation of a robust data-analysis method for the assembly of rotational microdiffraction data sets from many microcrystals, are described. The method demonstrates that microcrystals may be routinely utilized for the acquisition and assembly of complete data sets from synchrotron microdiffraction beamlines.

  8. A field instrument for quantitative determination of beryllium by activation analysis

    USGS Publications Warehouse

    Vaughn, William W.; Wilson, E.E.; Ohm, J.M.

    1960-01-01

    A low-cost instrument has been developed for quantitative determinations of beryllium in the field by activation analysis. The instrument makes use of the gamma-neutron reaction between gammas emitted by an artificially radioactive source (Sb124) and beryllium as it occurs in nature. The instrument and power source are mounted in a panel-type vehicle. Samples are prepared by hand-crushing the rock to approximately ?-inch mesh size and smaller. Sample volumes are kept constant by means of a standard measuring cup. Instrument calibration, made by using standards of known BeO content, indicates the analyses are reproducible and accurate to within ±0.25 percent BeO in the range from 1 to 20 percent BeO with a sample counting time of 5 minutes. Sensitivity of the instrument may be increased somewhat by increasing the source size, the sample size, or by enlarging the cross-sectional area of the neutron-sensitive phosphor normal to the neutron flux.

  9. Particle size analysis of amalgam powder and handpiece generated specimens.

    PubMed

    Drummond, J L; Hathorn, R M; Cailas, M D; Karuhn, R

    2001-07-01

    The increasing interest in eliminating amalgam particles from the dental waste (DW) stream requires efficient devices to remove these particles. The major objective of this project was to perform a comparative evaluation of five basic methods of particle size analysis in terms of each instrument's ability to quantify the size distribution of the various components within the DW stream. The analytical techniques chosen were image analysis via scanning electron microscopy, standard wire mesh sieves, X-ray sedigraphy, laser diffraction, and electrozone analysis. The DW particle stream components were represented by amalgam powders and handpiece/diamond bur generated specimens of enamel, dentin, whole tooth, and condensed amalgam. Each analytical method quantified the examined DW particle stream components. However, X-ray sedigraphy, electrozone, and laser diffraction analyses provided similar results for determining the particle distributions of DW samples, and these three methods were able to quantify the properties of the examined powder and condensed amalgam samples more clearly. Furthermore, these methods indicated that a significant fraction of the DW stream contains particles smaller than 20 microm. The findings of this study indicated that the electrozone method is likely the most effective technique for quantifying the particle size distribution in the DW particle stream: it required a relatively small sample volume, was not affected by density, shape factors, or optical properties, and measured a sufficient number of particles to provide a reliable representation of the particle size distribution curve.

  10. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
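
    A hedged Python sketch of the underlying removal-sampling likelihood, with a simple penalty against small sampling rates in the spirit of the stabilization the authors recommend; the penalty form, grids, and catch counts are all illustrative.

        import numpy as np
        from scipy.stats import binom

        def removal_loglik(N, p, catches):
            # Pass k removes c_k ~ Binomial(remaining, p) animals.
            remaining, ll = N, 0.0
            for c in catches:
                if c > remaining:
                    return -np.inf
                ll += binom.logpmf(c, remaining, p)
                remaining -= c
            return ll

        catches = [38, 24, 17]                  # made-up three-pass removal data
        N_grid = np.arange(sum(catches), sum(catches) + 400)
        p_grid = np.linspace(0.01, 0.99, 99)
        ll = np.array([[removal_loglik(N, p, catches) for p in p_grid]
                       for N in N_grid])
        penalty = np.log(p_grid)                # Beta(2,1)-type prior on the rate
        i, j = np.unravel_index(np.argmax(ll + penalty), ll.shape)
        print("N_hat =", N_grid[i], " p_hat =", round(p_grid[j], 2))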

  11. Role of Sample Processing Strategies at the European Union National Reference Laboratories (NRLs) Concerning the Analysis of Pesticide Residues.

    PubMed

    Hajeb, Parvaneh; Herrmann, Susan S; Poulsen, Mette E

    2017-07-19

    The guidance document SANTE 11945/2015 recommends that cereal samples be milled to a particle size preferably smaller than 1.0 mm and that extensive heating of the samples should be avoided. The aim of the present study was therefore to investigate the differences in milling procedures, obtained particle size distributions, and the resulting pesticide residue recovery when cereal samples were milled at the European Union National Reference Laboratories (NRLs) with their routine milling procedures. A total of 23 NRLs participated in the study. The oat and rye samples milled by each NRL were sent to the European Union Reference Laboratory on Cereals and Feedingstuff (EURL) for the determination of the particle size distribution and pesticide residue recovery. The results showed that the NRLs used several different brands and types of mills. Large variations in the particle size distributions and pesticide extraction efficiencies were observed even between samples milled by the same type of mill.

  12. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults, and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature on behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include utilization of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775

  13. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.

  14. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
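
    The repeated-subsampling experiment is straightforward to reproduce in outline. This Python sketch draws 100 random samples at several sizes from a synthetic cohort and reports the spread of the estimated sensitivity; the prevalence, sensitivity, and specificity values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic cohort: prevalence 0.30, sensitivity 0.80, specificity 0.70.
        N = 8000
        disease = rng.random(N) < 0.30
        test_pos = np.where(disease, rng.random(N) < 0.80, rng.random(N) < 0.30)

        for n in (100, 200, 400, 800):
            sens = []
            for _ in range(100):                # 100 repeated random samples
                idx = rng.choice(N, size=n, replace=False)
                d, t = disease[idx], test_pos[idx]
                sens.append((d & t).sum() / d.sum())
            print(f"n={n:4d}  sensitivity range {min(sens):.2f}-{max(sens):.2f}")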

  15. L2 Reading Comprehension and Its Correlates: A Meta-Analysis

    ERIC Educational Resources Information Center

    Jeon, Eun Hee; Yamashita, Junko

    2014-01-01

    The present meta-analysis examined the overall average correlation (weighted for sample size and corrected for measurement error) between passage-level second language (L2) reading comprehension and 10 key reading component variables investigated in the research domain. Four high-evidence correlates (with 18 or more accumulated effect sizes: L2…

  16. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  17. An improved methodology of asymmetric flow field flow fractionation hyphenated with inductively coupled mass spectrometry for the determination of size distribution of gold nanoparticles in dietary supplements.

    PubMed

    Mudalige, Thilak K; Qu, Haiou; Linder, Sean W

    2015-11-13

    Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system are dependent on particle size, thus the determination of size and size distribution is essential for full characterization. Number based average size and size distribution is a major parameter for full characterization of the nanoparticle. In the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention times was observed, and used for characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy, and found to have less than a 5% deviation in size for unknown product over the size range from 7 to 30 nm. Published by Elsevier B.V.

  18. Device and technique for in-process sampling and analysis of molten metals and other liquids presenting harsh sampling conditions

    DOEpatents

    Alvarez, Joseph L.; Watson, Lloyd D.

    1989-01-01

    An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to a analysis apparatus. The gas and liquid are mixed in a converging-diverging nozzle where the liquid is sheared into small particles which are of a size and uniformly to form a spray which can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas.

  19. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in an amount of about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute ( k0) standardization method. The calibration procedures, application protocol and the data evaluation process are described and discussed. They make it possible now to analyse a considerable number of samples, with assuring the quality of the results. As a means of demonstrating the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be beyond the uncertainty represented by the sampling techniques and sample variability.

  20. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  1. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  2. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  3. Electrochemical Behavior Assessment of Micro- and Nano-Grained Commercial Pure Titanium in H2SO4 Solutions

    NASA Astrophysics Data System (ADS)

    Fattah-alhosseini, Arash; Ansari, Ali Reza; Mazaheri, Yousef; Karimi, Mohsen

    2017-02-01

    In this study, the electrochemical behavior of commercially pure titanium with both a coarse-grained microstructure (annealed sample, average grain size about 45 µm) and a nano-grained microstructure was compared by potentiodynamic polarization, electrochemical impedance spectroscopy (EIS), and Mott-Schottky analysis. Nano-grained Ti, with a typical grain size of about 90 nm, was successfully produced by a six-cycle accumulative roll-bonding process at room temperature. Potentiodynamic polarization plots and impedance measurements revealed that, as a result of grain refinement, the passive behavior of the nano-grained sample in H2SO4 solutions was improved compared to that of annealed pure Ti. Mott-Schottky analysis indicated that the passive films behaved as n-type semiconductors in H2SO4 solutions and that grain refinement did not change the semiconductor type of the passive films. Mott-Schottky analysis also showed that the donor densities decreased as the grain size of the samples was reduced. Finally, all electrochemical tests showed that the electrochemical behavior of the nano-grained sample was improved compared to that of annealed pure Ti, mainly due to the formation of a thicker and less defective oxide film.
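
    In a Mott-Schottky analysis the donor density follows from the slope m of 1/C² versus potential, N_d = 2 / (e * eps_r * eps0 * m) for an n-type film (positive slope). Below is a short Python sketch with hypothetical numbers; the relative permittivity and the capacitance data are assumptions, not values from this study.

        import numpy as np

        e = 1.602e-19        # elementary charge, C
        eps0 = 8.854e-12     # vacuum permittivity, F/m
        eps_r = 60.0         # assumed relative permittivity of the passive film

        # Hypothetical data: potential (V) vs 1/C^2 (F^-2 m^4, area-normalized C).
        E = np.array([0.2, 0.3, 0.4, 0.5, 0.6])
        inv_C2 = np.array([55.0, 79.0, 102.0, 126.0, 150.0])

        slope, _ = np.polyfit(E, inv_C2, 1)
        Nd = 2.0 / (e * eps_r * eps0 * slope)   # donor density, m^-3
        print(f"N_d = {Nd:.2e} m^-3")           # roughly 1e26 m^-3 here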

  4. Automated fluid analysis apparatus and techniques

    DOEpatents

    Szecsody, James E.

    2004-03-16

    An automated device that couples a pair of differently sized sample loops with a syringe pump and a source of degassed water. A fluid sample is mounted at an inlet port and delivered to the sample loops. A selected sample from the sample loops is diluted in the syringe pump with the degassed water and fed to a flow through detector for analysis. The sample inlet is also directly connected to the syringe pump to selectively perform analysis without dilution. The device is airtight and used to detect oxygen-sensitive species, such as dithionite in groundwater following a remedial injection to treat soil contamination.

  5. Headspace Single-Drop Microextraction Gas Chromatography Mass Spectrometry for the Analysis of Volatile Compounds from Herba Asari

    PubMed Central

    Wang, Guan-Jie; Tian, Li; Fan, Yu-Ming; Qi, Mei-Ling

    2013-01-01

    A rapid headspace single-drop microextraction gas chromatography mass spectrometry (SDME-GC-MS) method for the analysis of the volatile compounds in Herba Asari was developed in this study. The extraction solvent, extraction temperature and time, sample amount, and particle size were optimized. A mixed solvent of n-tridecane and butyl acetate (1:1) was finally used for the extraction, with a sample amount of 0.750 g and a 100-mesh particle size, at 70°C for 15 min. Under the determined conditions, pounded samples of Herba Asari were analyzed directly. The results showed that the SDME-GC-MS method is a simple, effective, and inexpensive way to measure the volatile compounds in Herba Asari and could be used for the analysis of volatile compounds in Chinese medicine. PMID:23607049

  6. An Analysis of Methods Used to Examine Gender Differences in Computer-Related Behavior.

    ERIC Educational Resources Information Center

    Kay, Robin

    1992-01-01

    Review of research investigating gender differences in computer-related behavior examines statistical and methodological flaws. Issues addressed include sample selection, sample size, scale development, scale quality, the use of univariate and multivariate analyses, regressional analysis, construct definition, construct testing, and the…

  7. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

    Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory

    2014-10-01

    The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua, and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors grow, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument; the analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing (blurring) of features at and below the footprint size, and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.

  8. Effect of annealing temperature on the size and magnetic properties of CoFe2O4 nanoparticle

    NASA Astrophysics Data System (ADS)

    Sunny, Annrose; Akshay, V. R.; Vasundhara, M.

    2018-05-01

    CoFe2O4 (CFO) nanoparticles (NPs) were synthesized using the sol-gel method and annealed at 400, 600, and 800 °C for 4 h. The crystal structure and morphology of the NPs were investigated through XRD and TEM analysis. X-ray diffraction analysis shows that all the samples are well formed and adopt a cubic structure with the Fd-3m space group. The morphology of the material is polygonal, and the particle size of the NPs increases with annealing temperature: 20 nm, 30 nm, and 70 nm for 400, 600, and 800 °C, respectively. The magnetic properties of the NPs were investigated using VSM; the Curie temperatures for the 400, 600, and 800 °C annealing temperatures are 762 K, 780 K, and 769 K, respectively. The saturation magnetization (Ms) of the 600 °C sample is 80 emu/g; the 400 and 800 °C samples show lower Ms values, owing to poor crystallinity and exaggerated grain growth at the respective temperatures. The coercivity shows a linear dependence on the particle size of the material, with the highest coercivity obtained for the 400 °C sample and the lowest for the 800 °C sample.

  9. Flow field-flow fractionation for the analysis of nanoparticles used in drug delivery.

    PubMed

    Zattoni, Andrea; Roda, Barbara; Borghi, Francesco; Marassi, Valentina; Reschiglian, Pierluigi

    2014-01-01

    Structured nanoparticles (NPs) with controlled size distribution and novel physicochemical features present fundamental advantages as drug delivery systems with respect to bulk drugs. NPs can transport and release drugs to target sites with high efficiency and limited side effects. Regulatory institutions such as the US Food and Drug Administration (FDA) and the European Commission have pointed out that major limitations to the real application of current nanotechnology lie in the lack of homogeneous, pure and well-characterized NPs, partly because of the lack of well-assessed, robust routine methods for their quality control and characterization. Many properties of NPs are size-dependent, so the particle size distribution (PSD) plays a fundamental role in determining NP properties. At present, scanning and transmission electron microscopy (SEM, TEM) are among the techniques most used to size-characterize NPs. Size-exclusion chromatography (SEC) is also applied to the size separation of complex NP samples; however, SEC selectivity is quite limited for very large molar mass analytes such as NPs, and interactions with the stationary phase can alter NP morphology. Flow field-flow fractionation (F4) is increasingly used as a mature separation method to size-sort and characterize NPs in native conditions, and hyphenation with light scattering (LS) methods can enhance the accuracy of size analysis of complex samples. In this paper, applications of F4-LS to the size analysis of NPs used as drug delivery systems, and to the study of their stability and drug release, are reviewed. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Sources of variability in collection and preparation of paint and lead-coating samples.

    PubMed

    Harper, S L; Gutknecht, W F

    2001-06-01

    Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that the data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces, including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar-and-pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similarly homogeneous particle size distributions, as required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar-and-pestle grinding, 4% had a loss of 20% or greater, with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two in both Pb concentration (mg cm-2 or %) and areal coating density (g cm-2). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg cm-2 Pb. In conclusion, these sources of variability must be considered in the development and application of any sampling and analysis methodology.

  11. Discerning some Tylenol brands using attenuated total reflection Fourier transform infrared data and multivariate analysis techniques.

    PubMed

    Msimanga, Huggins Z; Ollis, Robert J

    2010-06-01

    Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects on spectral precision of sample particle size (determined by sieve size opening), the force gauge setting of the ATR accessory, sample reloading, and between-tablet variation were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while the fine particles (120 mesh) gave good spectral precision based on the F-test. The force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variance, respectively. The four-component PLS-DA model (R² = 0.925 and Q² = 0.906) gave good predictions for the test spectra, with an overall average of 0.961 ± 7.1% RSD versus the expected prediction of 1.0 for the 20 test spectra used.

  12. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
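
    The sensitivity of fitted allometric parameters to sample size can be illustrated with a quick Python experiment: fit h = a * r^b by log-log least squares on repeated subsamples of a synthetic population and watch the spread of the exponent shrink with n. The population parameters and noise model are invented; the paper's LiDAR-based analysis is far richer.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic "population": crown radius r and height h with h = a * r^b.
        N = 100_000
        r = rng.lognormal(1.0, 0.4, N)
        h = 4.0 * r ** 0.7 * rng.lognormal(0.0, 0.25, N)   # multiplicative noise

        def fit_power_law(r, h):
            # OLS in log-log space returns (a, b) for h = a * r^b.
            b, log_a = np.polyfit(np.log(r), np.log(h), 1)
            return np.exp(log_a), b

        for n in (30, 100, 1000, 10000):
            bs = []
            for _ in range(500):
                idx = rng.integers(0, N, n)
                bs.append(fit_power_law(r[idx], h[idx])[1])
            bs = np.array(bs)
            print(f"n={n:5d}  exponent b: mean={bs.mean():.3f}  sd={bs.std():.3f}")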

  13. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.

  14. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components of urban land cover mapping is accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design with sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.

  15. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. Images PMID:1610183

  16. Sample Size Estimation for Alzheimer's Disease Trials from Japanese ADNI Serial Magnetic Resonance Imaging.

    PubMed

    Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi

    2017-01-01

    Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
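
    The record estimates sample sizes from rates of change fitted with a linear mixed-effects model; the standard two-arm formula below conveys the core calculation. All numbers are invented for illustration, not J-ADNI values:

```python
# Sketch: per-arm sample size to detect a slowing of atrophy rate.
from scipy import stats

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    """Two-arm sample size for a difference delta in mean annualized
    atrophy rate, assuming a common standard deviation sd per arm."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return 2 * (sd * (za + zb) / delta) ** 2

# e.g. atrophy 4.0 %/yr (sd 2.5 %/yr); treatment slows it by 25%
print(round(n_per_arm(sd=2.5, delta=0.25 * 4.0)))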

  17. Identification of the Properties of Gum Arabic Used as a Binder in 7.62-mm Ammunition Primers

    DTIC Science & Technology

    2010-06-01

    [No abstract available; only fragments of the report's table of contents and list of figures were recovered: LCC solution testing (ATK Task 700), cartridge ballistic testing (ATK Task 800), ATK elemental analysis, moisture loss and friability, SDT summaries for the Hummel and Quadra samples, particle size analysis of gum arabic samples, SEM images of Colony gum arabic at 230x, gel strengths, and color analyses of the Colony, Hummel, and Brenntag samples after 5.0 hrs.]

  18. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, Katharine H.

    1990-01-01

    An inertial impactor for use in an air sampling device that collects respirable-size particles from ambient air. The impactor may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry.

  19. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1987-12-10

    An inertial impactor for use in an air sampling device that collects respirable-size particles from ambient air. The impactor may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  20. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated into core psychological research, although results vary slightly by field. This and other implications are discussed in the context of current methodological critique and practice.

  1. Porosity characterization for heterogeneous shales using integrated multiscale microscopy

    NASA Astrophysics Data System (ADS)

    Rassouli, F.; Andrew, M.; Zoback, M. D.

    2016-12-01

    Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport properties of shales. Study of the diverse distribution of pore sizes and structures in such low-permeability rocks has been held back by the lack of tools for visualizing the microstructural properties of shale. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (X-ray diffraction, thin section analysis, and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville formation in East Texas, generating a more holistic understanding of porosity structure in shales from the standard core plug down to the nm scale. Standard 1" diameter core plug samples were first imaged at lower resolutions using a Versa 3D X-ray microscope. We then picked several regions of interest (ROIs) with various micro-features (such as micro-cracks and high organic matter content) in the rock samples and ran higher-resolution, non-destructive interior tomography scans. Next, we cut the samples, drilled 5 mm diameter cores out of the selected ROIs, and rescanned them to measure the porosity distribution of the 5 mm cores. We repeated this step for samples 1 mm in diameter cut out of the 5 mm cores with a laser cutting machine. After comparing the pore structures and distributions measured from micro-CT analysis, we moved to nano-scale imaging to capture the ultra-fine pores within the shale samples: the 1 mm samples were milled down to 70 microns using the laser beam, scanned in an Ultra nano-CT X-ray microscope, and their porosity calculated by image segmentation methods. Finally, we used images collected from focused ion beam scanning electron microscopy (FIB-SEM) to compare the porosity measurements from all of the different imaging techniques. These multi-scale characterization techniques are then compared with traditional analytical techniques such as mercury porosimetry.

  2. Are Parents' Gender Schemas Related to Their Children's Gender-Related Cognitions? A Meta-Analysis.

    ERIC Educational Resources Information Center

    Tenenbaum, Harriet R.; Leaper, Campbell

    2002-01-01

    Used meta-analysis to examine relationship of parents' gender schemas and their offspring's gender-related cognitions, with samples ranging in age from infancy through early adulthood. Found a small but meaningful effect size (r=.16) indicating a positive correlation between parent gender schema and offspring measures. Effect sizes were influenced…

  3. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Observed oil and gas field size distributions: A consequence of the discovery process and prices of oil and gas

    USGS Publications Warehouse

    Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.

    1988-01-01

    If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors can also limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.

  5. Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis.

    PubMed

    Wildt, Signe; Krag, Aleksander; Gluud, Liselotte

    2011-01-01

    Objectives To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov, and the consistency between the primary outcomes, secondary outcomes and sample sizes specified in http://ClinicalTrials.gov and in the published trials. Methods Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov, all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoints, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between the primary and secondary outcomes, sample sizes and sample size calculation data in http://ClinicalTrials.gov and in the published papers were recorded. Results 105 trials were evaluated, of which 66 (63%) were published. 30% of trials were incorrectly registered after their completion date. Several data elements of the required ICMJE data list were not filled in, with the primary outcome measure and the sample size missing in 22% and 11% of cases, respectively. In 26% of the published papers, data on sample size calculations were missing, and discrepancies existed between the sample sizes reported in http://ClinicalTrials.gov and in the published trials. Conclusion The quality of registration of randomised controlled trials still needs improvement.

  6. Development of size-selective sampling of Bacillus anthracis surrogate spores from simulated building air intake mixtures for analysis via laser-induced breakdown spectroscopy.

    PubMed

    Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane

    2006-08-01

    Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferred sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside a building air intake), urban outdoor aerosol (to simulate common building air), and a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiating surrogate spore spectra from those of other common aerosols. Criteria for determining the percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m(3) urban outdoor air doped with approximately 1.1 x 10(5) spores resulted in a false-negative proportion of 0.04. For the same sample volume of urban air without spores, the proportion of false positives was 0.08.

  7. Epistemological Issues in Astronomy Education Research: How Big of a Sample is "Big Enough"?

    NASA Astrophysics Data System (ADS)

    Slater, Stephanie; Slater, T. F.; Souri, Z.

    2012-01-01

    As astronomy education research (AER) continues to evolve into a sophisticated enterprise, we must begin to grapple with defining our epistemological parameters. Moreover, as we attempt to make pragmatic use of our findings, we must make a concerted effort to communicate those parameters in a sensible way to the larger astronomical community. One area of much current discussion concerns the methodologies, and the associated sample sizes, that should be considered appropriate for generating knowledge in the field. To address this question, we completed a meta-analysis of nearly 1,000 peer-reviewed studies published in top tier professional journals. Data related to methodologies and sample sizes were collected from "hard science" and "human science" journals to compare the epistemological systems of these two bodies of knowledge. Working back in time from August 2011, the 100 most recent studies reported in each journal were used as a data source: Icarus, ApJ and AJ, NARST, IJSE and SciEd. In addition, data were collected from the 10 most recent AER dissertations, a set of articles determined by the science education community to be the most influential in the field, and the nearly 400 articles used as reference materials for the NRC's Taking Science to School. Analysis indicates these bodies of knowledge have a great deal in common: each relies on a large variety of methodologies, and each builds its knowledge through studies that proceed from surprisingly low sample sizes. While both fields publish a small percentage of studies with large sample sizes, the vast majority of top tier publications consist of rich studies of a small number of objects. We conclude that rigor in each field is determined not by a circumscription of methodologies and sample sizes, but by peer judgments that the methods and sample sizes are appropriate to the research question.

  8. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
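
    A minimal sketch of the MultSE quantity described above, following the sums-of-squared-dissimilarities logic (the exact formulation and the double resampling procedure are given in Anderson and Santana-Garcon, 2015; the community matrix here is fake count data):

```python
# Sketch: pseudo multivariate standard error from a dissimilarity matrix.
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    """MultSE = sqrt(V/n), with V the pseudo multivariate variance
    derived from summed squared interpoint dissimilarities."""
    n = data.shape[0]
    d2 = pdist(data, metric=metric) ** 2
    ss = d2.sum() / n           # sum of squares from pairwise dissimilarities
    v = ss / (n - 1)            # pseudo multivariate variance
    return np.sqrt(v / n)

rng = np.random.default_rng(1)
community = rng.poisson(3.0, size=(30, 12))   # 30 samples x 12 taxa
print(mult_se(community))
```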

  9. Comparison of hard tissues that are useful for DNA analysis in forensic autopsy.

    PubMed

    Kaneko, Yu; Ohira, Hiroshi; Tsuda, Yukio; Yamada, Yoshihiro

    2015-11-01

    Forensic analysis of DNA from hard tissues can be important when investigating a variety of cases arising from mass disasters or crimes. This study was conducted to evaluate the most suitable tissues, method and sample size for processing of hard tissues prior to DNA isolation. We also evaluated the elapsed time after death in relation to the quantity of DNA extracted. Samples of hard tissues (37 teeth, 42 skulls, 42 ribs, and 39 nails) from 42 individuals aged between 50 and 83 years were used. The samples were taken from remains following forensic autopsy (from 2 days to 2 years after death). To evaluate the integrity of the nuclear DNA isolated, the percentage of allele calls for short tandem repeat profiles was compared among the hard tissues. DNA typing results indicated that until 1 month after death, any of the four hard tissue samples could be used as an alternative to teeth, allowing analysis of all of the loci. However, in terms of the sampling site, collection method and sample size adjustment, the rib appeared to be the best choice in view of the ease of specimen preparation. Our data suggest that the rib could be an alternative hard tissue sample for DNA analysis of human remains. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Particle size analysis on density, surface morphology and specific capacitance of carbon electrode from rubber wood sawdust

    NASA Astrophysics Data System (ADS)

    Taer, E.; Kurniasih, B.; Sari, F. P.; Zulkifli, Taslim, R.; Sugianto, Purnama, A.; Apriwandi, Susanti, Y.

    2018-02-01

    The particle size analysis of supercapacitor carbon electrodes from rubber wood sawdust (SGKK) has been carried out successfully. The electrode particle size was examined in relation to properties such as density, degree of crystallinity, surface morphology and specific capacitance. The variations in particle size were produced by different treatments in the grinding and sieving process. The sample particle sizes were 53-100 µm ground for 20 h (SA), 38-53 µm ground for 20 h (SB), and < 38 µm with grinding times of 40 h (SC) and 80 h (SD), respectively. All of the samples were activated in 0.4 M KOH solution. The carbon electrodes were carbonized at a temperature of 600 °C in an N2 gas environment, followed by CO2 gas activation at a temperature of 900 °C for 2 h. The densities for each variation in the particle size were 1.034 g cm-3, 0.849 g cm-3, 0.892 g cm-3 and 0.982 g cm-3, respectively. The morphological study showed that the distance between particles was smallest for the 38-53 µm (SB) particle size. The electrochemical properties of the supercapacitor cells were investigated using electrochemical methods, impedance spectroscopy and constant-current charge-discharge, with a Solartron 1280 instrument. The electrochemical test results show that the SB samples, with a particle size of 38-53 µm, produce supercapacitor cells with the best capacitive performance.

  11. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation was performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above criteria than the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost of collecting and preparing individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and can guide users with respect to the sample sizes required, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
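
    A hedged sketch of the simulation idea in this record: estimate, by Monte Carlo, the power to detect a 5% annual decline for a given number of samples per year (noise level and all other parameter values are invented for illustration):

```python
# Sketch: power of a log-linear trend test vs samples per year.
import numpy as np
from scipy import stats

def power(n_per_year, years=10, annual_change=-0.05, cv=0.5,
          nsim=1000, alpha=0.05):
    rng = np.random.default_rng(2)
    sigma = np.sqrt(np.log(1 + cv**2))      # lognormal sd on the log scale
    t = np.repeat(np.arange(years), n_per_year)
    hits = 0
    for _ in range(nsim):
        y = np.log1p(annual_change) * t + rng.normal(0, sigma, t.size)
        hits += stats.linregress(t, y).pvalue < alpha
    return hits / nsim

for n in (2, 5, 10):
    print(n, power(n))
```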

  12. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also includes two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
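
    To illustrate the stratification idea that PLHS extends, here is a plain Latin hypercube sampler (this is basic LHS only, not the progressive variant implemented in VARS-TOOL):

```python
# Sketch: basic Latin hypercube sampling on the unit hypercube.
import numpy as np

def latin_hypercube(n, dims, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    samples = np.empty((n, dims))
    for j in range(dims):
        # one uniform draw inside each of the n equal-width strata,
        # with the strata assigned in a random order per dimension
        perm = rng.permutation(n)
        samples[:, j] = (perm + rng.random(n)) / n
    return samples

print(latin_hypercube(5, 2, np.random.default_rng(3)))
```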

  13. Storage effects on quantity and composition of dissolved organic carbon and nitrogen of lake water, leaf leachate and peat soil water.

    PubMed

    Heinz, Marlen; Zak, Dominik

    2018-03-01

    This study aimed to evaluate the effects of freezing and of cold storage at 4 °C on bulk dissolved organic carbon (DOC) and nitrogen (DON) concentrations and on size fractions determined with size exclusion chromatography (SEC), as well as on spectral properties of dissolved organic matter (DOM) analyzed with fluorescence spectroscopy. To account for differences in DOM composition and source, we analyzed storage effects for three different sample types: a lake water sample representing freshwater DOM, a leaf litter leachate of Phragmites australis representing a terrestrial, 'fresh' DOM source, and peatland porewater samples. According to our findings, one week of cold storage can bias DOC and DON determination. Overall, the determination of DOC and DON concentrations with SEC analysis was, for all three sample types, only slightly susceptible to alteration by freezing. The findings derived for the sampling locations investigated here may not apply to other sampling locations and/or sample types. However, DOC size fractions and DON concentrations of formerly frozen samples should be interpreted with caution when sample concentrations are high. Alteration of some optical properties (HIX and SUVA254) due to freezing was evident, and we therefore recommend immediate analysis of samples intended for spectral analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Measurement of Vibrated Bulk Density of Coke Particle Blends Using Image Texture Analysis

    NASA Astrophysics Data System (ADS)

    Azari, Kamran; Bogoya-Forero, Wilinthon; Duchesne, Carl; Tessier, Jayson

    2017-09-01

    A rapid and nondestructive machine vision sensor was developed for predicting the vibrated bulk density (VBD) of petroleum coke particles based on image texture analysis. It could be used for making corrective adjustments to a paste plant operation to reduce green anode variability (e.g., changes in binder demand). Wavelet texture analysis (WTA) and gray level co-occurrence matrix (GLCM) algorithms were used jointly for extracting the surface textural features of coke aggregates from images. These were correlated with the VBD using partial least-squares (PLS) regression. Coke samples of several sizes and from different sources were used to test the sensor. Variations in the coke surface texture introduced by coke size and source allowed for making good predictions of the VBD of individual coke samples and mixtures of them (blends involving two sources and different sizes). Promising results were also obtained for coke blends collected from an industrial-baked carbon anode manufacturer.
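
    A sketch of the GLCM half of the feature pipeline named above, using scikit-image (the published sensor also used wavelet texture analysis and PLS regression, both omitted here; the image is a random stand-in, not coke data):

```python
# Sketch: gray level co-occurrence matrix (GLCM) texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
# note: spelled greycomatrix/greycoprops in scikit-image < 0.19

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

glcm = graycomatrix(image, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```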

  15. Sedimentology and geochemistry of mud volcanoes in the Anaximander Mountain Region from the Eastern Mediterranean Sea.

    PubMed

    Talas, Ezgi; Duman, Muhammet; Küçüksezgin, Filiz; Brennan, Michael L; Raineault, Nicole A

    2015-06-15

    Investigations were carried out on surface sediments collected from the Anaximander mud volcanoes in the Eastern Mediterranean Sea to determine their sedimentary and geochemical properties. The sediment grain size distribution and geochemical contents were determined by grain size analysis, organic carbon and carbonate content measurements, and element analysis. The element contents were compared with background levels in the Earth's crust. The factors that affect element distribution in sediments were calculated from the nine push core samples taken from the surfaces of the mud volcanoes by the E/V Nautilus. The grain size of the samples varies from sand to sandy silt. Enrichment and contamination factor analysis showed that these analyses can also be used to evaluate deep-sea environmental and source parameters. It is concluded that biological and cold seep effects are the main drivers of surface sediment characteristics at the Anaximander mud volcanoes. Copyright © 2015 Elsevier Ltd. All rights reserved.
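
    An illustrative computation of the enrichment (EF) and contamination (CF) factors mentioned in this record, against crustal background values; the numbers below are placeholders, not values from the study:

```python
# Sketch: enrichment and contamination factors for sediment metals.
def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
    """EF = (metal/reference) in the sample over (metal/reference) in
    the crust, with e.g. Al or Fe as the conservative reference element."""
    return (c_metal / c_ref) / (bg_metal / bg_ref)

def contamination_factor(c_metal, bg_metal):
    """CF = sample concentration over crustal background concentration."""
    return c_metal / bg_metal

print(enrichment_factor(c_metal=85, c_ref=6.2e4, bg_metal=70, bg_ref=8.2e4))
print(contamination_factor(85, 70))
```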

  16. Developing optimum sample size and multistage sampling plans for Lobesia botrana (Lepidoptera: Tortricidae) larval infestation and injury in northern Greece.

    PubMed

    Ifoulis, A A; Savopoulou-Soultani, M

    2006-10-01

    The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, with combined second and third generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury, at three levels of precision, were developed. An investigation of a multistage sampling plan with a nested analysis of variance showed that if the goal of sampling is focusing on larval infestation, three grape clusters should be sampled in a half-vine; if the goal of sampling is focusing on injury, then two grape clusters per half-vine are recommended.
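
    A hedged sketch of the classic optimum-sample-size calculation built on Taylor's power law (s^2 = a * m^b), one of the two mean-variance models named above: n = (z/D)^2 * a * m^(b-2), with D the desired precision as a fraction of the mean. The coefficients below are invented; the paper estimates them from field counts:

```python
# Sketch: optimum sample size from Taylor's power law coefficients.
from scipy.stats import norm

def optimum_n(mean, a, b, precision, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z / precision) ** 2 * a * mean ** (b - 2)

for D in (0.10, 0.25):
    print(D, round(optimum_n(mean=2.0, a=1.5, b=1.4, precision=D), 1))
```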

  17. Steep discounting of delayed monetary and food rewards in obesity: a meta-analysis.

    PubMed

    Amlung, M; Petker, T; Jackson, J; Balodis, I; MacKillop, J

    2016-08-01

    An increasing number of studies have investigated delay discounting (DD) in relation to obesity, but with mixed findings. This meta-analysis synthesized the literature on the relationship between monetary and food DD and obesity, with three objectives: (1) to characterize the relationship between DD and obesity in both case-control comparisons and continuous designs; (2) to examine potential moderators, including case-control v. continuous design, money v. food rewards, sample sex distribution, and sample age (<18 v. ≥18 years); and (3) to evaluate publication bias. From 134 candidate articles, 39 independent investigations yielded 29 case-control and 30 continuous comparisons (total n = 10 278). Random-effects meta-analysis was conducted using Cohen's d as the effect size. Publication bias was evaluated using fail-safe N, Begg-Mazumdar and Egger tests, meta-regression of publication year and effect size, and imputation of missing studies. The primary analysis revealed a medium effect size across studies that was highly statistically significant (d = 0.43, p < 10(-14)). None of the moderators examined yielded statistically significant differences, although notably larger effect sizes were found for studies with case-control designs, food rewards and child/adolescent samples. Limited evidence of publication bias was present, although the Begg-Mazumdar test and meta-regression suggested a slightly diminishing effect size over time. Steep DD of food and money appears to be a robust feature of obesity that is relatively consistent across the DD assessment methodologies and study designs examined. These findings are discussed in the context of research on DD in drug addiction, the neural bases of DD in obesity, and potential clinical applications.
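
    A minimal sketch of the random-effects pooling approach named in this record, using the DerSimonian-Laird estimator on Cohen's d values (the effect sizes and variances below are made up, not the study's data):

```python
# Sketch: DerSimonian-Laird random-effects meta-analysis.
import numpy as np

def dersimonian_laird(d, v):
    w = 1 / v                                   # fixed-effect weights
    pooled_fe = np.sum(w * d) / w.sum()
    q = np.sum(w * (d - pooled_fe) ** 2)        # heterogeneity statistic
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (len(d) - 1)) / c)     # between-study variance
    w_re = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * d) / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    return pooled, se, tau2

d = np.array([0.55, 0.30, 0.48, 0.21, 0.62])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06])
print(dersimonian_laird(d, v))
```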

  18. Acceptability of Dry Dog Food Visual Characteristics by Consumer Segments Based on Overall Liking: a Case Study in Poland.

    PubMed

    Gomez Baquero, David; Koppel, Kadri; Chambers, Delores; Hołda, Karolina; Głogowski, Robert; Chambers, Edgar

    2018-05-23

    Sensory analysis of pet foods has been emerging as an important field of study for the pet food industry over the last few decades, yet few studies have examined pet owners' perception of pet foods. The objective of this study was to gain a deeper understanding of how dog owners in different consumer segments perceive the visual characteristics of dry dog foods. A total of 120 consumers evaluated the appearance of 30 dry dog food samples with varying visual characteristics. The consumers rated the acceptance of the samples and associated each one with a list of positive and negative beliefs. Cluster analysis, ANOVA and correspondence analysis were used to analyze the consumer responses. The acceptability of the appearance of dry dog foods was affected by the number of different kibbles present and by the color(s), shape(s), and size(s) of the kibbles in the product. Three consumer clusters were identified. Consumers gave the highest ratings to single-kibble samples of medium size, traditional shape, and brown color, and disliked extra-small or extra-large kibble sizes, shapes with high dimensional contrast, and kibbles of light brown color. These findings can help dry dog food manufacturers to meet consumers' needs, with increasing benefits to the pet food and commodity industries.

  19. Sizing for the apparel industry using statistical analysis - a Brazilian case study

    NASA Astrophysics Data System (ADS)

    Capelassi, C. H.; Carvalho, M. A.; El Kattel, C.; Xu, B.

    2017-10-01

    This study of the body measurements of Brazilian women used the Kinect Body Imaging system for 3D body scanning, with the aim of meeting the apparel industry's need for accurate measurements. Data were statistically treated using IBM SPSS 23, with 95% confidence (P<0.05) for the inferential analysis, with the purpose of grouping the measurements into sizes so that a smaller number of sizes can cover a greater number of people. The sample consisted of 101 volunteers aged between 19 and 62 years. A cluster analysis was performed to identify the main body shapes of the sample. The results were divided between the top and bottom body portions: for the top portion, the measurements of the abdomen, waist and bust circumferences were used, together with the height; for the bottom portion, the measurements of the hip circumference and the height were used. Three sizing systems were developed for the researched sample from the Abdomen-to-Height Ratio - AHR (top portion): Small (AHR < 0.52), Medium (AHR: 0.52-0.58), Large (AHR > 0.58), and from the Hip-to-Height Ratio - HHR (bottom portion): Small (HHR < 0.62), Medium (HHR: 0.62-0.68), Large (HHR > 0.68).

  20. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
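
    A rough simulation of the two-stage drop-the-losers logic described above (an illustration only, not the authors' procedure: the final test here is a naive t-test that ignores selection bias, which the Dunnett-type methods discussed in this literature are designed to handle; all parameter values are invented):

```python
# Sketch: two-stage drop-the-losers trial, K arms vs control.
import numpy as np
from scipy import stats

def simulate(K=4, n1=50, n2=100, effect=0.3, nsim=2000, alpha=0.025):
    rng = np.random.default_rng(7)
    rejections = 0
    for _ in range(nsim):
        ctrl1 = rng.normal(0, 1, n1)
        arms1 = rng.normal([effect] + [0] * (K - 1), 1, size=(n1, K))
        best = arms1.mean(axis=0).argmax()          # drop the losers
        ctrl = np.concatenate([ctrl1, rng.normal(0, 1, n2)])
        mean_best = effect if best == 0 else 0.0
        arm = np.concatenate([arms1[:, best], rng.normal(mean_best, 1, n2)])
        t, p = stats.ttest_ind(arm, ctrl, alternative="greater")
        rejections += p < alpha                     # naive, unadjusted test
    return rejections / nsim

print(simulate())
```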

  1. Spatial variations in annual cycles of body-size spectra of planktonic ciliates and their environmental drivers in marine ecosystems.

    PubMed

    Xu, Henglong; Jiang, Yong; Xu, Guangjian

    2016-11-15

    Body-size spectra have proved to be a useful taxon-free approach for summarizing community structure for bioassessment. The spatial variations in annual cycles of body-size spectra of planktonic ciliates, and their environmental drivers, were studied based on an annual dataset. Samples were collected biweekly at five stations in a bay of the Yellow Sea, northern China, during a 1-year cycle. A multivariate approach, the second-stage analysis, showed that the annual cycles of the body-size spectra differed significantly among the five sampling stations. Correlation analysis demonstrated that the spatial variations in the body-size spectra were significantly related to changes in environmental conditions, especially dissolved nitrogen, alone or in combination with salinity and dissolved oxygen. Based on these results, it is suggested that nutrients may be the environmental drivers shaping the spatial variations in annual cycles of planktonic ciliates, in terms of body-size spectra, in marine ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Characterisation of Fine Ash Fractions from the AD 1314 Kaharoa Eruption

    NASA Astrophysics Data System (ADS)

    Weaver, S. J.; Rust, A.; Carey, R. J.; Houghton, B. F.

    2012-12-01

    The AD 1314±12 yr Kaharoa eruption of Tarawera volcano, New Zealand, produced deposits exhibiting both plinian and subplinian characteristics (Nairn et al., 2001; 2004, Leonard et al., 2002, Hogg et al., 2003). Their widespread dispersal yielded volumes, column heights, and mass discharge rates of plinian magnitude and intensity (Sahetapy-Engel, 2002); however, vertical shifts in grain size suggest waxing and waning within single phases and time-breaks on the order of hours between phases. These grain size shifts were quantified using sieve, laser diffraction, and image analysis of the fine ash fractions (<1 mm in diameter) of some of the most explosive phases of the eruption. These analyses served two purposes: 1) to characterise the change in eruption intensity over time, and 2) to compare the three methods of grain size analysis. Additional analyses of the proportions of components and of particle shape were also conducted to aid in the interpretation of the eruption and transport dynamics. 110 samples from a single location about 6 km from source were sieved at half phi intervals between -4φ and 4φ (16 mm - 63 μm). A single sample was then chosen to test the range of grain sizes to run through the Mastersizer 2000. Three aliquots were tested; the first consisted of each sieve size fraction ranging between 0φ (1000 μm) and <4φ (<63 μm, i.e. the pan). For example, 0, 0.5, 1, …, 4φ, and the pan were run through the Mastersizer and then their results, weighted according to their sieve weight percents, were summed to produce a total distribution. The second aliquot included 3 samples ranging between 0-2φ (1000-250 μm), 2.5-4φ (249-63 μm), and the pan. A single sample consisting of the total range of grain sizes between 0φ and the pan was used for the final aliquot. The results were compared, and it was determined that the single sample consisting of the broadest range of grain sizes yielded an accurate grain size distribution. These data were then compared with the sieve weight percent data, revealing a significant difference in size characterisation between sieving and the Mastersizer for size fractions between 0-3φ (1000-125 μm). This is due predominantly to the differing ways in which sieving and the Mastersizer characterise a single particle, to inhomogeneity in grain density in each grain-size fraction, and to grain-shape irregularities, which led the Mastersizer to allocate grains from a given sieve size fraction into coarser size fractions. Therefore, only the Mastersizer data from 3.5φ and finer were combined with the coarser sieve data to yield total grain size distributions. This high-resolution analysis of the grain size data enabled subtle trends in grain size to be identified and related to short-timescale eruptive processes.

  3. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics, suggesting that suspended sediment samples taken over periods longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.

  4. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Preliminary experiments were designed to determine the influence of the sediment particle size distribution on growth of the assayed species. The percentage of silt-sized sediment affected the growth of the selected species under the experimental conditions: the higher the percentage of silt-sized sediment, the lower the growth. Percentages of silt-sized sediment below 10%, however, did not affect growth. In general, C. closterium appears to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were distinguished on the basis of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out to determine the specific cause of these differences in toxicity. Statistical analysis showed that the concentrations of Sn, Zn, Hg, Cu and Cr (among all of the physico-chemical parameters analyzed), in that order of importance, were the factors that best separated the two groups of samples (more and less toxic). Benthic diatoms appear to be sensitive organisms for sediment toxicity tests, and toxicity data from bioassays involving microphytobenthos should be taken into account when environmental risks are calculated.

  5. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision, both during sample splitting and during laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates the sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous samples (water suspensions of particles less than 63 µm) were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample rather than an aliquot. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of the sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.

  6. The effect of sample holder material on ion mobility spectrometry reproducibility

    NASA Technical Reports Server (NTRS)

    Jadamec, J. Richard; Su, Chih-Wu; Rigdon, Stephen; Norwood, Lavan

    1995-01-01

    When a positive detection of a narcotic occurs during the search of a vessel, a decision has to be made whether a further intensive search is warranted. This decision is based in part on the results of a second sample collected from the same area; the reproducibility of both sampling and instrumental analysis is therefore critical in justifying an in-depth search. As reported at the 2nd Annual IMS Conference in Quebec City, the U.S. Coast Guard has determined that when paper is utilized as the sample desorption medium for the Barringer IONSCAN, the analytical results using standard reference samples are reproducible. A study was conducted utilizing papers of varying pore sizes and comparing their performance as a desorption material relative to the standard Barringer 50 micron Teflon. Nominal pore sizes ranged from 30 microns down to 2 microns. Results indicate some peak instability in the first two to three windows of the analysis, and the severity of the instability was observed to increase as the pore size of the paper decreased. However, the observed peak instability does not decrease the reliability or reproducibility of the analytical result.

  7. Development and Validation of the Caring Loneliness Scale.

    PubMed

    Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija

    2016-12-01

    The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded an underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items, with Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences, but, as with all instruments, further validation is needed.
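
    A quick sketch of the Cronbach's alpha computation named in this record, on a toy respondents-by-items matrix (the factor analysis step is omitted; the data are simulated, not CARLOS responses):

```python
# Sketch: Cronbach's alpha for a respondents x items matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(250, 1))                # n = 250, as in the record
responses = latent + rng.normal(0, 0.8, size=(250, 6))
print(round(cronbach_alpha(responses), 2))
```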

  8. Item Analysis Appropriate for Domain-Referenced Classroom Testing. (Project Technical Report Number 1).

    ERIC Educational Resources Information Center

    Nitko, Anthony J.; Hsu, Tse-chi

    Item analysis procedures appropriate for domain-referenced classroom testing are described. A conceptual framework within which item statistics can be considered and promising statistics in light of this framework are presented. The sampling fluctuations of the more promising item statistics for sample sizes comparable to the typical classroom…

  9. Effect of sulfate and carbonate minerals on particle-size distributions in arid soils

    USGS Publications Warehouse

    Goossens, Dirk; Buck, Brenda J.; Teng, Yuazxin; Robins, Colin; Goldstein, Harland L.

    2014-01-01

    Arid soils pose unique problems during measurement and interpretation of particle-size distributions (PSDs) because they often contain high concentrations of water-soluble salts. This study investigates the effects of sulfate and carbonate minerals on grain-size analysis by comparing analyses in water, in which the minerals dissolve, and isopropanol (IPA), in which they do not. The presence of gypsum, in particular, substantially affects particle-size analysis once the concentration of gypsum in the sample exceeds the mineral’s solubility threshold. For smaller concentrations particle-size results are unaffected. This is because at concentrations above the solubility threshold fine particles cement together or bind to coarser particles or aggregates already present in the sample, or soluble mineral coatings enlarge grains. Formation of discrete crystallites exacerbates the problem. When soluble minerals are dissolved the original, insoluble grains will become partly or entirely liberated. Thus, removing soluble minerals will result in an increase in measured fine particles. Distortion of particle-size analysis is larger for sulfate minerals than for carbonate minerals because of the much higher solubility in water of the former. When possible, arid soils should be analyzed using a liquid in which the mineral grains do not dissolve, such as IPA, because the results will more accurately reflect the PSD under most arid soil field conditions. This is especially important when interpreting soil and environmental processes affected by particle size.

  10. Ethnicity and Body Dissatisfaction among Women in the United States: A Meta-Analysis

    ERIC Educational Resources Information Center

    Grabe, Shelly; Hyde, Janet Shibley

    2005-01-01

    The prevailing view in popular culture and the psychological literature is that White women have greater body dissatisfaction than women of color. In this meta-analysis, 6 main effect sizes were obtained for differences among Asian American, Black, Hispanic, and White women with a sample of 98 studies, yielding 222 effect sizes. The average d for…

  11. The Relation of Empathy and Defending in Bullying: A Meta-Analytic Investigation

    ERIC Educational Resources Information Center

    Nickerson, Amanda B.; Aloe, Ariel M.; Werth, Jilynn M.

    2015-01-01

    This meta-analysis synthesized results about the association between empathy and defending in bullying. A total of 20 studies were included in the analysis, with 22 effect sizes from 6 studies that separated findings by the defender's gender and 31 effect sizes from 18 studies that provided effects for the total sample. The weighted…

  12. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  13. Upward counterfactual thinking and depression: A meta-analysis.

    PubMed

    Broomhall, Anne Gene; Phillips, Wendy J; Hine, Donald W; Loi, Natasha M

    2017-07-01

    This meta-analysis examined the strength of association between upward counterfactual thinking and depressive symptoms. Forty-two effect sizes from a pooled sample of 13,168 respondents produced a weighted average effect size of r=.26, p<.001. Moderator analyses using an expanded set of 96 effect sizes indicated that upward counterfactuals and regret produced significant positive effects that were similar in strength. Effects also did not vary as a function of the theme of the counterfactual-inducing situation or study design (cross-sectional versus longitudinal). Significant effect size heterogeneity was observed across sample types, methods of assessing upward counterfactual thinking, and types of depression scale. Significant positive effects were found in studies that employed samples of bereaved individuals, older adults, terminally ill patients, or university students, but not adolescent mothers or mixed samples. Both number-based and Likert-based upward counterfactual thinking assessments produced significant positive effects, with the latter generating a larger effect. All depression scales produced significant positive effects, except for the Psychiatric Epidemiology Research Interview. Research and theoretical implications are discussed in relation to cognitive theories of depression and the functional theory of upward counterfactual thinking, and important gaps in the extant research literature are identified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Polymorphism in magic-sized Au144(SR)60 clusters

    NASA Astrophysics Data System (ADS)

    Jensen, Kirsten M. Ø.; Juhas, Pavol; Tofanelli, Marcus A.; Heinecke, Christine L.; Vaughan, Gavin; Ackerson, Christopher J.; Billinge, Simon J. L.

    2016-06-01

    Ultra-small, magic-sized metal nanoclusters represent an important new class of materials with properties between molecules and particles. However, their small size challenges the conventional methods for structure characterization. Here we present the structure of ultra-stable Au144(SR)60 magic-sized nanoclusters obtained from atomic pair distribution function analysis of X-ray powder diffraction data. The study reveals structural polymorphism in these archetypal nanoclusters. In addition to confirming the theoretically predicted icosahedral-cored cluster, we also find samples with a truncated decahedral core structure, with some samples exhibiting a coexistence of both cluster structures. Although the clusters are monodisperse in size, structural diversity is apparent. The discovery of polymorphism may open up a new dimension in nanoscale engineering.

  15. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range.

    PubMed

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-06-01

    The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve on the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
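
    A minimal sketch of such an estimator for the min/median/max reporting scenario: the mid-range and the median are blended with a weight that changes smoothly with n instead of stepwise. The specific weight shown is illustrative of the form described, not necessarily the paper's exact formula:

        def mean_from_min_med_max(a, m, b, n):
            """Estimate the sample mean from minimum a, median m, maximum b.
            The mid-range weight shrinks smoothly as the sample size n grows,
            replacing Hozo et al.'s stepwise rules."""
            w = 4.0 / (4.0 + n**0.75)  # assumed smooth weight
            return w * (a + b) / 2.0 + (1.0 - w) * m

        print(mean_from_min_med_max(a=2.0, m=7.5, b=18.0, n=64))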

  16. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-means Cluster Analysis.

    PubMed

    Craen, Saskia de; Commandeur, Jacques J F; Frank, Laurence E; Heiser, Willem J

    2006-06-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted in which populations were created with varying departures from sphericity and varying group sizes. An analysis of the recovery of clusters in the samples taken from these populations showed a significant effect of lack of sphericity and of group size. This effect was, however, not as large as expected, with a recovery index still above 0.5 in the "worst case scenario". An interaction effect between the two data aspects was also found. The decreasing trend in the recovery of clusters for increasing departures from sphericity differs between equal and unequal group sizes.
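
    A minimal sketch of this kind of experiment, assuming elongated (non-spherical) populations of unequal size; the adjusted Rand index stands in here for the study's recovery index:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(4)
        # two clusters stretched along x (sphericity violated), sizes 150 vs 50
        a = rng.normal([0, 0], [4.0, 0.5], size=(150, 2))
        b = rng.normal([5, 3], [4.0, 0.5], size=(50, 2))
        X = np.vstack([a, b])
        truth = np.repeat([0, 1], [150, 50])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(adjusted_rand_score(truth, labels))  # 1.0 would be perfect recovery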

  17. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
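
    The two quantities the note trades off - sample size for a fixed effect size, and minimum detectable effect size for a fixed sample size - can be illustrated with the standard scalar two-sample approximation (the voxel-based calculations apply this logic at every voxel):

        import numpy as np
        from scipy import stats

        def n_per_group(d, alpha=0.05, power=0.8):
            """Participants per group to detect standardised effect size d."""
            za, zb = stats.norm.ppf(1 - alpha/2), stats.norm.ppf(power)
            return int(np.ceil(2 * (za + zb)**2 / d**2))

        def detectable_d(n, alpha=0.05, power=0.8):
            """Inverse relation: minimum effect size detectable with n per group."""
            za, zb = stats.norm.ppf(1 - alpha/2), stats.norm.ppf(power)
            return np.sqrt(2 * (za + zb)**2 / n)

        print(n_per_group(0.5), detectable_d(64))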

  18. Quantitative Reflectance Spectra of Solid Powders as a Function of Particle Size

    DOE PAGES

    Myers, Tanya L.; Brauer, Carolyn S.; Su, Yin-Fong; ...

    2015-05-19

    We have recently developed vetted methods for obtaining quantitative infrared directional-hemispherical reflectance spectra using a commercial integrating sphere. In this paper, the effects of particle size on the spectral properties are analyzed for several samples such as ammonium sulfate, calcium carbonate, and sodium sulfate as well as one organic compound, lactose. We prepared multiple size fractions for each sample and confirmed the mean sizes using optical microscopy. Most species displayed a wide range of spectral behavior depending on the mean particle size. General trends of reflectance vs. particle size are observed, such as increased albedo for smaller particles: for most wavelengths, the reflectivity drops with increased size, sometimes displaying a factor of 4 or more drop in reflectivity along with a loss of spectral contrast. In the longwave infrared, several species with symmetric anions or cations exhibited reststrahlen features whose amplitude was nearly invariant with particle size, at least for intermediate- and large-sized sample fractions; that is, > ~150 microns. Trends of other types of bands (Christiansen minima, transparency features) are also investigated, as well as quantitative analysis of the observed relationship between reflectance vs. particle diameter.

  19. Quantitative Reflectance Spectra of Solid Powders as a Function of Particle Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, Tanya L.; Brauer, Carolyn S.; Su, Yin-Fong

    We have recently developed vetted methods for obtaining quantitative infrared directional-hemispherical reflectance spectra using a commercial integrating sphere. In this paper, the effects of particle size on the spectral properties are analyzed for several samples such as ammonium sulfate, calcium carbonate, and sodium sulfate as well as one organic compound, lactose. We prepared multiple size fractions for each sample and confirmed the mean sizes using optical microscopy. Most species displayed a wide range of spectral behavior depending on the mean particle size. General trends of reflectance vs. particle size are observed, such as increased albedo for smaller particles: for most wavelengths, the reflectivity drops with increased size, sometimes displaying a factor of 4 or more drop in reflectivity along with a loss of spectral contrast. In the longwave infrared, several species with symmetric anions or cations exhibited reststrahlen features whose amplitude was nearly invariant with particle size, at least for intermediate- and large-sized sample fractions; that is, > ~150 microns. Trends of other types of bands (Christiansen minima, transparency features) are also investigated, as well as quantitative analysis of the observed relationship between reflectance vs. particle diameter.

  20. Heavy metal speciation in various grain sizes of industrially contaminated street dust using multivariate statistical analysis.

    PubMed

    Yıldırım, Gülşen; Tokalıoğlu, Şerife

    2016-02-01

    A total of 36 street dust samples were collected from the streets of the Organised Industrial District in Kayseri, Turkey. This region includes a total of 818 work places in various industrial areas. The modified BCR (European Community Bureau of Reference) sequential extraction procedure was applied to evaluate the mobility and bioavailability of trace elements (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn) in street dusts of the study area. The BCR procedure comprises three steps, yielding the water/acid-soluble, reducible and oxidisable fractions; the remaining residue was dissolved using aqua regia. The concentrations of the metals in the street dust samples were determined by flame atomic absorption spectrometry. The effect of different grain sizes (<38 µm, 38-53 µm and 53-74 µm) of the 36 street dust samples on the mobility of the metals was also investigated using the modified BCR procedure. The mobility sequence based on the sum of the first three phases (for the <74 µm grain size) was: Cd (71.3)>Cu (48.9)>Pb (42.8)=Cr (42.1)>Ni (41.4)>Zn (40.9)>Co (36.6)=Mn (36.3)>Fe (3.1). No significant difference was observed among metal partitioning for the three particle sizes. Correlation, principal component and cluster analysis were applied to identify probable natural and anthropogenic sources in the region. The principal component analysis results showed that this industrial district was influenced by traffic, industrial activities, air-borne emissions and natural sources. The accuracy of the results was checked by analysis of the BCR-701 certified reference material and by recovery studies in street dust samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Chemical analyses of micrometre-sized solids by a miniature laser ablation/ionisation mass spectrometer (LMS)

    NASA Astrophysics Data System (ADS)

    Tulej, Marek; Wiesendanger, Reto; Neuland, Maike; Meyer, Stefan; Wurz, Peter; Neubeck, Anna; Ivarsson, Magnus; Riedo, Valentine; Moreno-Garcia, Pavel; Riedo, Andreas; Knopp, Gregor

    2017-04-01

    Investigation of the elemental and isotope composition of planetary solids with high spatial resolution is of considerable interest to current space research. Planetary materials are typically highly heterogeneous, and such studies can deliver detailed chemical information on individual sample components with sizes down to a few micrometres. The results of such investigations can yield mineralogical surface context, including the mineralogy of individual grains or the elemental composition of other objects embedded in the sample surface, such as micro-sized fossils. Bio-relevant material can then be identified through the detection of bio-relevant elements and their isotope fractionation effects [1, 2]. For chemical analysis of heterogeneous solid surfaces we have combined a miniature laser ablation mass spectrometer (LMS) (mass resolution m/Δm 400-600; dynamic range 10⁵-10⁸) with an in situ microscope-camera system (spatial resolution ~2 µm, depth 10 µm). The microscope helps to locate micrometre-sized solids across the sample surface for direct mass spectrometric analysis by the LMS instrument. The LMS instrument combines a fs-laser ion source with a miniature reflectron-type time-of-flight mass spectrometer. Mass spectrometric analysis of the objects selected on the sample surface follows ablation, atomisation and ionisation of the sample by focussed laser radiation (775 nm, 180 fs, 1 kHz; spot size ~20 µm) [4, 5, 6]. Mass spectra of almost all elements (isotopes) present in the investigated location are measured instantaneously. A number of heterogeneous rock samples containing micrometre-sized fossils and mineralogical grains were investigated with high selectivity and sensitivity. Filamentous structures observed in carbonate veins (in harzburgite) and amygdales in pillow basalt lava were characterised chemically, yielding the elemental and isotope composition of these objects [7, 8]. The investigation can be performed with high selectivity because the host composition is typically quite different from that of the analysed objects. In-depth chemical analysis (chemical profiling) is found to be particularly helpful, allowing relatively easy separation of the chemical composition of the host from that of the investigated objects [6]. Hence, the chemical analysis of both the environment and the microstructures can be derived. Isotope compositions can be measured with a high level of confidence; nevertheless, the presence of clusters of similar masses can sometimes make this analysis difficult. Based on this work, we are confident that similar studies can be conducted in situ on planetary surfaces, delivering important chemical context and evidence of bio-relevant processes. [1] Summons et al., Astrobiology, 11, 157, 2011. [2] Wurz et al., Sol. Sys. Res. 46, 408, 2012. [3] Riedo et al., J. Anal. Atom. Spectrom. 28, 1256, 2013. [4] Riedo et al., J. Mass Spectrom. 48, 1, 2013. [5] Tulej et al., Geostand. Geoanal. Res., 38, 423, 2014. [6] Grimaudo et al., Anal. Chem. 87, 2041, 2015. [7] Tulej et al., Astrobiology, 15, 1, 2015. [8] Neubeck et al., Int. J. Astrobiology, 15, 133, 2016.

  2. Grain size dependence of dynamic mechanical behavior of AZ31B magnesium alloy sheet under compressive shock loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asgari, H., E-mail: hamed.asgari@usask.ca; Odeshi, A.G.; Szpunar, J.A.

    2015-08-15

    The effects of grain size on the dynamic deformation behavior of rolled AZ31B alloy at high strain rates were investigated. Rolled AZ31B alloy samples with grain sizes of 6, 18 and 37 μm were subjected to shock loading tests using a Split Hopkinson Pressure Bar at room temperature and at a strain rate of 1100 s⁻¹. It was found that a double-peak basal texture formed in the shock loaded samples. The strength and ductility of the alloy under the high strain-rate compressive loading increased with decreasing grain size. However, twinning fraction and strain hardening rate were found to decrease with decreasing grain size. In addition, orientation imaging microscopy showed a higher contribution of double and contraction twins in the deformation process of the coarse-grained samples. Using transmission electron microscopy, pyramidal dislocations were detected in the shock loaded sample, proving the activation of the pyramidal slip system under dynamic impact loading. - Highlights: • A double-peak basal texture developed in all shock loaded samples. • Both strength and ductility increased with decreasing grain size. • Twinning fraction and strain hardening rate decreased with decreasing grain size. • 'g.b' analysis confirmed the presence of dislocations in the shock loaded alloy.

  3. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

    Abstract A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853

  4. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  5. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess the allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring, 2007-2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  6. Simultaneous collection of airborne particulate matter on several collection substrates with a high-volume cascade impactor

    NASA Astrophysics Data System (ADS)

    Chan, Y. C.; Vowles, P. D.; McTainsh, G. H.; Simpson, R. W.; Cohen, D. D.; Bailey, G. M.; McOrist, G. D.

    This paper describes a method for the simultaneous collection of size-fractionated aerosol samples on several collection substrates, including glass-fibre filter, carbon tape and silver tape, with a commercially available high-volume cascade impactor. This permitted various chemical analysis procedures, including ion beam analysis (IBA), instrumental neutron activation analysis (INAA), carbon analysis and scanning electron microscopy (SEM), to be carried out on the samples.

  7. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
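
    A sketch of the idea in Python, assuming the susceptible (homozygous recessive) F2 sub-population scored for a codominant marker: substituting expected class frequencies n·p_i(r̂) for the observed counts makes the maximum lod score linear in n, so the significance rule can be solved for a minimum sample size. The class-probability function and the threshold of 3 are illustrative choices:

        import numpy as np

        def min_sample_size(class_probs, r_hat, threshold=3.0):
            """Lower bound on n for max lod score >= threshold, using
            expected frequencies in place of observed counts."""
            p1 = np.asarray(class_probs(r_hat), float)   # under linkage at r_hat
            p0 = np.asarray(class_probs(0.5), float)     # under no linkage
            elod = np.sum(p1 * np.log10(p1 / p0))        # expected lod per individual
            return threshold / elod

        # marker genotypes MM, Mm, mm among susceptible F2 individuals
        f2_recessive = lambda r: [r**2, 2*r*(1 - r), (1 - r)**2]
        print(min_sample_size(f2_recessive, r_hat=0.1))  # roughly 10 individuals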

  8. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
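
    The paper's algorithms cover randomization rates, critical values and power; the core allocation rule - minimising the variance of the difference-in-means statistic - reduces to Neyman allocation, sketched here with hypothetical interim standard deviation estimates:

        def optimal_randomization_rate(sd1, sd2):
            """Fraction of new patients allocated to arm 1 that minimises the
            variance of the difference-in-means test statistic."""
            return sd1 / (sd1 + sd2)

        # re-computed as recruitment proceeds, which is the adaptive element
        print(optimal_randomization_rate(1.4, 0.9))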

  9. Sediment Grain-Size and Loss-on-Ignition Analyses from 2002 Englebright Lake Coring and Sampling Campaigns

    USGS Publications Warehouse

    Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.

    2004-01-01

    This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.

  10. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1990-05-22

    An inertial impactor is designed for use in an air sampling device that collects respirable-size particles from ambient air. The device may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  11. Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials

    NASA Astrophysics Data System (ADS)

    Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.

    2000-01-01

    The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with a total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids and comets, etc., we have to ask ourselves how representative microscopic samples are of bodies that measure a few to many km across. With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), such as chondrules, CAI, and large crystal fragments, points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, and meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns - that the most advanced analysis techniques can always be applied in the laboratory, and that well-preserved samples are available for future investigations.

  12. Characterizing string-of-pearls colloidal silica by multidetector hydrodynamic chromatography and comparison to multidetector size-exclusion chromatography, off-line multiangle static light scattering, and transmission electron microscopy.

    PubMed

    Brewer, Amandaa K; Striegel, André M

    2011-04-15

    The string-of-pearls-type morphology is ubiquitous, manifesting itself variously in proteins, vesicles, bacteria, synthetic polymers, and biopolymers. Characterizing the size and shape of analytes with such morphology, however, presents a challenge, due chiefly to the ease with which the "strings" can be broken during chromatographic analysis or to the paucity of information obtained from the benchmark microscopy and off-line light scattering methods. Here, we address this challenge with multidetector hydrodynamic chromatography (HDC), which has the ability to determine, simultaneously, the size, shape, and compactness and their distributions of string-of-pearls samples. We present the quadruple-detector HDC analysis of colloidal string-of-pearls silica, employing static multiangle and quasielastic light scattering, differential viscometry, and differential refractometry as detection methods. The multidetector approach shows a sample that is broadly polydisperse in both molar mass and size, with strings ranging from two to five particles, but which also contains a high concentration of single, unattached "pearls". Synergistic combination of the various size parameters obtained from the multiplicity of detectors employed shows that the strings with higher degrees of polymerization have a shape similar to the theory-predicted shape of a Gaussian random coil chain of nonoverlapping beads, while the strings with lower degrees of polymerization have a prolate ellipsoidal shape. The HDC technique is contrasted experimentally with multidetector size-exclusion chromatography, where, even under extremely gentle conditions, the strings still degraded during analysis. Such degradation is shown to be absent in HDC, as evidenced by the fact that the molar mass and radius of gyration obtained by HDC with multiangle static light scattering detection (HDC/MALS) compare quite favorably to those determined by off-line MALS analysis under otherwise identical conditions. The multidetector HDC results were also comparable to those obtained by transmission electron microscopy (TEM). Unlike off-line MALS or TEM, however, multidetector HDC is able to provide complete particle analysis based on the molar mass, size, shape, and compactness and their distributions for the entire sample population in less than 20 min. © 2011 American Chemical Society

  13. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except the Cauchy and extreme-variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means, and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
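
    A minimal sketch of a pooled-resampling bootstrap t-test for two unpaired samples (the paper's exact scheme may differ in detail); pooling the data before resampling embodies the null hypothesis of a common distribution:

        import numpy as np

        def pooled_bootstrap_t_test(x, y, n_boot=10000, seed=0):
            """Two-sided bootstrap p-value for a difference in means,
            resampling both groups from the pooled data."""
            rng = np.random.default_rng(seed)
            x, y = np.asarray(x, float), np.asarray(y, float)
            def t_stat(a, b):
                return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
            t_obs = t_stat(x, y)
            pooled = np.concatenate([x, y])
            t_null = np.empty(n_boot)
            for i in range(n_boot):
                t_null[i] = t_stat(rng.choice(pooled, len(x)), rng.choice(pooled, len(y)))
            return np.mean(np.abs(t_null) >= abs(t_obs))

        rng = np.random.default_rng(7)
        print(pooled_bootstrap_t_test(rng.lognormal(size=8), rng.lognormal(0.8, size=9)))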

  14. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
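
    A minimal sketch of Wiener deconvolution as a way to estimate a sample's acoustic response from a reference pulse; the noise-power constant is an assumed regularisation parameter and the signals are synthetic:

        import numpy as np

        def wiener_deconvolve(received, reference, noise_power=1e-3):
            """Frequency-domain division regularised so that bands where the
            reference pulse carries little energy do not blow up."""
            n = len(received) + len(reference) - 1
            R = np.fft.rfft(received, n)
            H = np.fft.rfft(reference, n)
            G = np.conj(H) / (np.abs(H)**2 + noise_power)  # Wiener filter
            return np.fft.irfft(R * G, n)

        # synthetic example: echo delayed by 100 samples, attenuated to 0.4
        pulse = np.exp(-np.linspace(-3, 3, 64)**2)
        received = np.convolve(0.4 * pulse, np.r_[np.zeros(100), 1.0])
        print(np.argmax(wiener_deconvolve(received, pulse)))  # ~100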

  15. Rapid detection of soils contaminated with heavy metals and oils by laser induced breakdown spectroscopy (LIBS).

    PubMed

    Kim, Gibaek; Kwak, Jihyun; Kim, Ki-Rak; Lee, Heesung; Kim, Kyoung-Woong; Yang, Hyeon; Park, Kihong

    2013-12-15

    A laser induced breakdown spectroscopy (LIBS) system coupled with a chemometric method was applied to rapidly discriminate between soils contaminated with heavy metals or oils and clean soils. The effects of the water contents and grain sizes of soil samples on LIBS emissions were also investigated. The LIBS emission lines decreased by 59-75% when the water content increased from 1.2% to 7.8%, and soil samples with a grain size of 75 μm displayed higher LIBS emission lines with lower relative standard deviations than those with a 2 mm grain size. The water content was found to have a more pronounced effect on the LIBS emission lines than the grain size. Pelletizing and sieving were conducted for all samples collected from abandoned mining areas and a military camp so that they had similar water contents and grain sizes before being analyzed by LIBS with chemometric analysis. The data show that the three types of soil samples were clearly discerned using the first three principal components from the spectral data of the soil samples. A blind test was conducted, with a 100% correct classification rate for soil samples contaminated with heavy metals and oil residues. Copyright © 2013 Elsevier B.V. All rights reserved.
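
    A sketch of the chemometric step, assuming each soil spectrum is one row of LIBS emission intensities; the study examined the scores on the first three principal components to separate the clean, metal-contaminated and oil-contaminated groups:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        spectra = rng.random((30, 2048))      # placeholder for measured spectra
        scores = PCA(n_components=3).fit_transform(spectra)
        print(scores.shape)                   # (30, 3): one point per sample
        # in the study, plotting these scores revealed three distinct groups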

  16. Crospovidone interactions with water. II. Dynamic vapor sorption analysis of the effect of Polyplasdone particle size on its uptake and distribution of water.

    PubMed

    Saripella, Kalyan K; Mallipeddi, Rama; Neau, Steven H

    2014-11-20

    Polyplasdone of different particle sizes was used to study the sorption, desorption, and distribution of water, and to seek evidence that larger particles can internalize water. The three samples were Polyplasdone® XL, XL-10, and INF-10. Moisture sorption and desorption isotherms at 25 °C, at 5% intervals from 0 to 95% relative humidity (RH), were generated by dynamic vapor sorption analysis. The three products provided similar data, judged to be Type III with a small hysteresis that appears when RH is below 65%. The absence of a rounded knee in the sorption curve suggests that multilayers form before the monolayer is completed. The hysteresis indicates that internally absorbed moisture is trapped as the water is desorbed and the polymer sample shrinks, thus requiring a lower level of RH to continue desorption. The Guggenheim-Anderson-de Boer (GAB) and the Young and Nelson equations were fitted in the data analysis. The W(m), C(G), and K values from the GAB analysis are similar across the three samples, revealing 0.962 water molecules per repeating unit in the monolayer. A small amount of absorbed water is identified, but this is consistent across the three particle sizes. Copyright © 2014 Elsevier B.V. All rights reserved.
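
    A sketch of fitting the GAB equation to sorption data by nonlinear least squares (synthetic moisture values; Wm, C and K correspond to the W(m), C(G) and K parameters reported):

        import numpy as np
        from scipy.optimize import curve_fit

        def gab(aw, Wm, C, K):
            """GAB isotherm: moisture content vs water activity aw (= RH/100)."""
            return Wm * C * K * aw / ((1 - K*aw) * (1 - K*aw + C*K*aw))

        aw = np.linspace(0.05, 0.95, 19)
        W = gab(aw, 0.12, 8.0, 0.85) + np.random.default_rng(2).normal(0, 0.002, aw.size)
        (Wm, C, K), _ = curve_fit(gab, aw, W, p0=[0.1, 5.0, 0.8])
        print(Wm, C, K)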

  17. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with coverage worsening as the outcome becomes less prevalent. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
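
    A minimal sketch of the unweighted cluster summary method for a cluster crossover trial with binary outcomes: one within-cluster risk difference per cluster, then a one-sample t-test of those differences against zero (all counts hypothetical):

        import numpy as np
        from scipy import stats

        def unweighted_cluster_summary(events_trt, n_trt, events_ctl, n_ctl):
            """Per-cluster difference in event proportions between the
            intervention and control periods, tested across clusters."""
            d = (np.asarray(events_trt) / np.asarray(n_trt)
                 - np.asarray(events_ctl) / np.asarray(n_ctl))
            t, p = stats.ttest_1samp(d, 0.0)
            return d.mean(), t, p

        # six ICUs: deaths/admissions in intervention and control periods
        print(unweighted_cluster_summary([30, 41, 25, 52, 33, 47],
                                         [600, 700, 550, 800, 620, 710],
                                         [38, 45, 31, 60, 35, 55],
                                         [590, 720, 540, 810, 600, 700]))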

  18. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked, and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34% and the average density from all wrought DU samples was 1.62E+04/cm². Inclusions in the cast DU samples were equiaxed and intact, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00% and the average density from all cast DU samples was 2.83E+04/cm². The average mean grain area from all wrought DU samples was 141 μm² while the average mean grain area from all cast DU samples was 1.7 mm². The average Knoop microhardness from all wrought DU samples was 215 HK and the average Knoop microhardness from all cast DU samples was 264 HK.

  19. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  20. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes for about three measurement occasions, and for large sample sizes for about nine measurement occasions.
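
    The corrections at issue rescale the ANOVA degrees of freedom by an estimate of sphericity; a sketch of the Greenhouse-Geisser epsilon computed from a subjects-by-occasions matrix (simulated data):

        import numpy as np

        def greenhouse_geisser_epsilon(data):
            """Epsilon from the double-centred covariance of the occasions:
            1.0 under perfect sphericity, 1/(m-1) under maximal violation.
            rANOVA degrees of freedom are multiplied by this factor."""
            S = np.cov(data, rowvar=False)
            m = S.shape[0]
            J = np.eye(m) - np.ones((m, m)) / m
            D = J @ S @ J
            return np.trace(D)**2 / ((m - 1) * np.trace(D @ D))

        rng = np.random.default_rng(3)
        data = rng.normal(size=(20, 1)) + rng.normal(scale=0.5, size=(20, 6))
        print(greenhouse_geisser_epsilon(data))  # n = 20 subjects, m = 6 occasions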

  1. The WAIS Melt Monitor: An automated ice core melting system for meltwater sample handling and the collection of high resolution microparticle size distribution data

    NASA Astrophysics Data System (ADS)

    Breton, D. J.; Koffman, B. G.; Kreutz, K. J.; Hamilton, G. S.

    2010-12-01

    Paleoclimate data are often extracted from ice cores by careful geochemical analysis of meltwater samples. The analysis of the microparticles found in ice cores can also yield unique clues about atmospheric dust loading and transport, dust provenance and past environmental conditions. Determination of microparticle concentration, size distribution and chemical makeup as a function of depth is especially difficult because the particle size measurement either consumes or contaminates the meltwater, preventing further geochemical analysis. Here we describe a microcontroller-based ice core melting system which allows the collection of separate microparticle and chemistry samples from the same depth intervals in the ice core, while logging and accurately depth-tagging real-time electrical conductivity and particle size distribution data. This system was designed specifically to support microparticle analysis of the WAIS Divide WDC06A deep ice core, but many of the subsystems are applicable to more general ice core melting operations. Major system components include: a rotary encoder to measure ice core melt displacement with 0.1 millimeter accuracy; a meltwater tracking system to assign core depths to conductivity, particle and sample vial data; an optical debubbler level control system to protect the Abakus laser particle counter from damage due to air bubbles; a Rabbit 3700 microcontroller which communicates with a host PC, collects encoder and optical sensor data and autonomously operates Gilson peristaltic pumps and fraction collectors to provide automatic sample handling; melt monitor control software operating on a standard PC allowing the user to control and view the status of the system; and data logging software operating on the same PC to collect data from the melting, electrical conductivity and microparticle measurement systems. Because microparticle samples can easily be contaminated, we use optical air bubble sensors and high resolution ice core density profiles to guide the melting process. The combination of these data allows us to analyze melt head performance, minimize outer-to-inner fraction contamination and avoid melt head flooding. The WAIS Melt Monitor system allows the collection of real-time, sub-annual microparticle and electrical conductivity data while producing and storing enough sample for traditional Coulter-Counter particle measurements as well as long-term acid leaching of bioactive metals (e.g., Fe, Co, Cd, Cu, Zn) prior to chemical analysis.

  2. Recent advances of mesoporous materials in sample preparation.

    PubMed

    Zhao, Liang; Qin, Hongqiang; Wu, Ren'an; Zou, Hanfa

    2012-03-09

    Sample preparation has been playing an important role in the analysis of complex samples. Mesoporous materials as the promising adsorbents have gained increasing research interest in sample preparation due to their desirable characteristics of high surface area, large pore volume, tunable mesoporous channels with well defined pore-size distribution, controllable wall composition, as well as modifiable surface properties. The aim of this paper is to review the recent advances of mesoporous materials in sample preparation with emphases on extraction of metal ions, adsorption of organic compounds, size selective enrichment of peptides/proteins, specific capture of post-translational peptides/proteins and enzymatic reactor for protein digestion. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    NASA Astrophysics Data System (ADS)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury at thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (<1 mm) soil samples 14259,672, 15401,147, and 67481,96 has provided an insight into how grain size, composition, maturity (i.e., exposure to space weathering processes), and the proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and in five size fractions: <25, 25-63, 63-125, 125-250, and <250 μm. Sample 14259,672 is a highly mature highlands regolith with a large proportion of agglutinates [2]. The high agglutinate content (>60%) causes a 'flattening' of the spectrum, with reflectance in the Reststrahlen Band (RB) region reduced by as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift of the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than that of the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra most closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each sample dominates the bulk spectrum regardless of other physical properties. This has implications for surface analyses of other Solar System bodies where some mineral phases or components could be concentrated in a particular size fraction. For example, the anorthite grains in 67481,96 are dominantly >25 μm in size and therefore may not contribute proportionally to the bulk average spectrum (compared to the <25 μm fraction). The resulting bulk spectrum of 67481,96 has a CF position 0.2 μm higher than all size fractions >25 μm and therefore does not represent a true average composition of the sample. Further investigation of how grain size and composition alter the average spectrum is required to fully understand infrared spectra of planetary surfaces. [1] - Hiesinger H., Helbert J., and MERTIS Co-I Team. (2010). The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission. Planetary and Space Science. 58, 144-165. [2] - NASA Lunar Sample Compendium. https://curator.jsc.nasa.gov/lunar/lsc/

  4. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size

    PubMed Central

    Richman, Julie D.; Livi, Kenneth J.T.; Geyh, Alison S.

    2011-01-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was −0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected. PMID:21625364

  5. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size.

    PubMed

    Richman, Julie D; Livi, Kenneth J T; Geyh, Alison S

    2011-06-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was -0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected.

  6. Two models of the sound-signal frequency dependence on the animal body size as exemplified by the ground squirrels of Eurasia (mammalia, rodentia).

    PubMed

    Nikol'skii, A A

    2017-11-01

    Dependence of the sound-signal frequency on the animal body length was studied in 14 ground squirrel species (genus Spermophilus) of Eurasia. Regression analysis of the total sample yielded a low determination coefficient (R2 = 26%), because the total sample proved to be heterogeneous in terms of signal frequency within the dimension classes of animals. When the total sample was divided into two groups according to signal frequency, two statistically significant models (regression equations) were obtained in which signal frequency depended on the body size at high determination coefficients (R2 = 73 and 94% versus 26% for the total sample). Thus, the problem of correlation between animal body size and the frequency of their vocal signals does not have a unique solution.
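
    The split-sample regression pattern described above can be illustrated with simulated data: a pooled fit gives a low R2, while per-group fits recover strong relationships. All numbers below are invented for illustration.

```python
# Simulated illustration: a pooled regression across two frequency groups
# yields a low R^2, while separate fits recover strong size-frequency models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
body = rng.uniform(18, 30, 80)  # body length, arbitrary units
freq = np.where(np.arange(80) < 40,
                12.0 - 0.25 * body + rng.normal(0, 0.3, 80),   # group 1
                7.0 - 0.12 * body + rng.normal(0, 0.2, 80))    # group 2

r2 = lambda x, y: stats.linregress(x, y).rvalue ** 2
print(f"total: {r2(body, freq):.0%}, "
      f"group 1: {r2(body[:40], freq[:40]):.0%}, "
      f"group 2: {r2(body[40:], freq[40:]):.0%}")
# Pooling the groups masks two strong size-frequency relationships.
```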

  7. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.
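
    A hedged sketch of the sequential-estimation idea in the preferred protocol: fit an apparent decay-rate constant to past monitoring data for each food item, project the current concentration, and split a fixed inspection budget in proportion to the projections. The data, the proportional allocation rule, and all names are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative sketch: estimate each item's apparent decay-rate constant from
# past monitoring data, project today's concentration, and allocate a fixed
# daily sampling budget in proportion to the projections.
import numpy as np

def apparent_decay_rate(days, conc):
    # slope of log-concentration vs. time gives the apparent decay constant
    slope, _ = np.polyfit(days, np.log(conc), 1)
    return -slope

history = {
    "milk":    (np.array([0.0, 3, 6, 9]), np.array([120.0, 80, 55, 36])),
    "spinach": (np.array([0.0, 3, 6, 9]), np.array([900.0, 500, 290, 160])),
}

today = 12.0
projected = {item: conc[-1] * np.exp(-apparent_decay_rate(d, conc) * (today - d[-1]))
             for item, (d, conc) in history.items()}

budget = 60  # total samples available today
total = sum(projected.values())
allocation = {item: round(budget * c / total) for item, c in projected.items()}
print(projected)
print(allocation)
```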

  8. Integrated approaches for reducing sample size for measurements of trace elemental impurities in plutonium by ICP-OES and ICP-MS

    DOE PAGES

    Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam; ...

    2017-10-07

    This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (<50 mg material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by 10×, and in the third case study, by as much as 5000×, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.

  9. Integrated approaches for reducing sample size for measurements of trace elemental impurities in plutonium by ICP-OES and ICP-MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam

    This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (<50 mg material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by 10×, and in the third case study, by as much as 5000×, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.

  10. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then, placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility to re-estimate the sample size and adjust the design parameters, allocation proportion to placebo in stage 1 of SPCD, and weight of stage 1 data in the overall efficacy test statistic during an interim analysis.
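
    One common form of such a weighted SPCD efficacy statistic combines stage-wise Z-statistics with a pre-specified stage 1 weight w; the sketch below shows this generic form only. The article's exact statistic and variance terms may differ.

```python
# Generic weighted combination of stage-wise Z-statistics with stage 1
# weight w; illustrative of how SPCD pools the two stages.
import math

def spcd_z(z1, z2, w):
    """Overall Z from stage 1 and stage 2 Z-statistics, stage 1 weight w."""
    return (w * z1 + (1 - w) * z2) / math.sqrt(w**2 + (1 - w)**2)

print(spcd_z(z1=1.5, z2=2.0, w=0.6))  # compare to the 1.96 critical value
```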

  11. A Meta-Analysis of Mathematics and Working Memory: Moderating Effects of Working Memory Domain, Type of Mathematics Skill, and Sample Characteristics

    ERIC Educational Resources Information Center

    Peng, Peng; Namkung, Jessica; Barnes, Marcia; Sun, Congying

    2016-01-01

    The purpose of this meta-analysis was to determine the relation between mathematics and working memory (WM) and to identify possible moderators of this relation including domains of WM, types of mathematics skills, and sample type. A meta-analysis of 110 studies with 829 effect sizes found a significant medium correlation of mathematics and WM, r…

  12. Structure and properties of clinical coralline implants measured via 3D imaging and analysis.

    PubMed

    Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis

    2006-05-01

    The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 µm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.

  13. Impact of rail pressure and biodiesel fueling on the particulate morphology and soot nanostructures from a common-rail turbocharged direct injection diesel engine

    DOE PAGES

    Ye, Peng; Vander Wal, Randy; Boehman, Andre L.; ...

    2014-12-26

    The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.

  14. Impact of rail pressure and biodiesel fueling on the particulate morphology and soot nanostructures from a common-rail turbocharged direct injection diesel engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Peng; Vander Wal, Randy; Boehman, Andre L.

    The effect of rail pressure and biodiesel fueling on the morphology of exhaust particulate agglomerates and the nanostructure of primary particles (soot) was investigated with a common-rail turbocharged direct injection diesel engine. The engine was operated at steady state on a dynamometer running at moderate speed with both low (30%) and medium–high (60%) fixed loads, and exhaust particulate was sampled for analysis. Ultra-low sulfur diesel and its 20% v/v blends with soybean methyl ester biodiesel were used. Fuel injection occurred in a single event around top dead center at three different injection pressures. Exhaust particulate samples were characterized with TEM imaging, scanning mobility particle sizing, thermogravimetric analysis, Raman spectroscopy, and XRD analysis. Particulate morphology and oxidative reactivity were found to vary significantly with rail pressure and with biodiesel blend level. Higher biodiesel content led to increases in the primary particle size and oxidative reactivity but did not affect nanoscale disorder in the as-received samples. For particulates generated with higher injection pressures, the initial oxidative reactivity increased, but there was no detectable correlation with primary particle size or nanoscale disorder.

  15. Supercritical Fluid Extraction and Analysis of Tropospheric Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Hansen, Kristen J.

    An integrated sampling and supercritical fluid extraction (SFE) cell has been designed for whole-sample analysis of organic compounds on tropospheric aerosol particles. The low-volume extraction cell has been interfaced with a sampling manifold for aerosol particle collection in the field. After sample collection, the entire SFE cell was coupled to a gas chromatograph; after on-line extraction, the cryogenically focused sample was separated and the volatile compounds detected with either a mass spectrometer or a flame ionization detector. A 20-minute extraction at 450 atm and 90 °C with pure supercritical CO2 is sufficient for quantitative extraction of most volatile compounds in aerosol particle samples. A comparison between SFE and thermal desorption, the traditional whole-sample technique for analyses of this type, was performed using ambient aerosol particle samples, as well as samples containing known amounts of standard analytes. The results of these studies indicate that SFE of atmospheric aerosol particles provides quantitative measurement of several classes of organic compounds. SFE provides information that is complementary to that gained by thermal desorption analysis. The results also indicate that SFE with CO2 can be validated as an alternative to thermal desorption for quantitative recovery of several organic compounds. In 1989, the organic constituents of atmospheric aerosol particles collected at Niwot Ridge, Colorado, along with various physical and meteorological data, were measured during a collaborative field study. Temporal changes in the composition of samples collected during summertime at the rural site were studied. Thermal desorption-GC/FID was used to quantify selected compounds in samples collected during the field study. The statistical analysis of the 1989 Niwot Ridge data set is presented in this work. Principal component analysis was performed on thirty-one variables selected from the data set in order to ascertain different source and process components, and to examine concentration changes in groups of variables with respect to time of day and meteorological conditions. Seven orthogonal groups of variables resulted from the statistical analysis; the groups serve as molecular markers for different biologic and anthropogenic emission sources. In addition, the results of the statistical analysis were used to investigate how several emission source contributions vary with respect to local atmospheric dynamics. Field studies were conducted in the urban environment in and around Boulder, CO, to characterize the dynamics, chemistry, and emission sources which affect the composition and concentration of different size-fractions of aerosol particles in the Boulder air mass. Relationships between different size fractions of particles and some gas-phase pollutants were elucidated. These field studies included an investigation of seasonal variations in the organic content and concentration of aerosol particles, and how these characteristics are related to local meteorology and to the concentration of some gas-phase pollutants. The elemental and organic composition of aerosol particles was investigated according to particle size in preliminary studies of size-differentiated samples of aerosol particles. In order to aid in future studies of urban aerosol particles, samples were collected at a forest fire near Boulder. Molecular markers specific to wood burning processes will be useful indicators of residential wood burning activities in future field studies.

  16. A Meta-Analysis on Antecedents and Outcomes of Detachment from Work.

    PubMed

    Wendsche, Johannes; Lohmann-Haislah, Andrea

    2016-01-01

    Detachment from work has been proposed as an important non-work experience helping employees to recover from work demands. This meta-analysis (86 publications, k = 91 independent study samples, N = 38,124 employees) examined core antecedents and outcomes of detachment in employee samples. With regard to outcomes, results indicated average positive correlations between detachment and self-reported mental (i.e., less exhaustion, higher life satisfaction, more well-being, better sleep) and physical (i.e., lower physical discomfort) health, state well-being (i.e., less fatigue, higher positive affect, more intensive state of recovery), and task performance (small to medium sized effects). However, average relationships between detachment and physiological stress indicators and work motivation were not significant while associations with contextual performance and creativity were significant, but negative. Concerning work characteristics, as expected, job demands were negatively related and job resources were positively related to detachment (small sized effects). Further, analyses revealed that person characteristics such as negative affectivity/neuroticism (small sized effect) and heavy work investment (medium sized effect) were negatively related to detachment whereas detachment and demographic variables (i.e., age and gender) were not related. Moreover, we found a medium sized average negative relationship between engagement in work-related activities during non-work time and detachment. For most of the examined relationships heterogeneity of effect sizes was moderate to high. We identified study design, samples' gender distribution, and affective valence of work-related thoughts as moderators for some of these aforementioned relationships. The results of this meta-analysis point to detachment as a non-work (recovery) experience that is influenced by work-related and personal characteristics which in turn is relevant for a range of employee outcomes.

  17. A Meta-Analysis on Antecedents and Outcomes of Detachment from Work

    PubMed Central

    Wendsche, Johannes; Lohmann-Haislah, Andrea

    2017-01-01

    Detachment from work has been proposed as an important non-work experience helping employees to recover from work demands. This meta-analysis (86 publications, k = 91 independent study samples, N = 38,124 employees) examined core antecedents and outcomes of detachment in employee samples. With regard to outcomes, results indicated average positive correlations between detachment and self-reported mental (i.e., less exhaustion, higher life satisfaction, more well-being, better sleep) and physical (i.e., lower physical discomfort) health, state well-being (i.e., less fatigue, higher positive affect, more intensive state of recovery), and task performance (small to medium sized effects). However, average relationships between detachment and physiological stress indicators and work motivation were not significant while associations with contextual performance and creativity were significant, but negative. Concerning work characteristics, as expected, job demands were negatively related and job resources were positively related to detachment (small sized effects). Further, analyses revealed that person characteristics such as negative affectivity/neuroticism (small sized effect) and heavy work investment (medium sized effect) were negatively related to detachment whereas detachment and demographic variables (i.e., age and gender) were not related. Moreover, we found a medium sized average negative relationship between engagement in work-related activities during non-work time and detachment. For most of the examined relationships heterogeneity of effect sizes was moderate to high. We identified study design, samples' gender distribution, and affective valence of work-related thoughts as moderators for some of these aforementioned relationships. The results of this meta-analysis point to detachment as a non-work (recovery) experience that is influenced by work-related and personal characteristics which in turn is relevant for a range of employee outcomes. PMID:28133454

  18. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve, Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
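
    A compact version of this simulation design, assuming scikit-learn's LDA and AUC utilities as stand-ins for the authors' implementations; dimensions and class separation are illustrative.

```python
# Minimal version of the design above: draw Gaussian two-class samples of
# increasing size, train LDA, and compare resubstitution vs hold-out Az.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dim, delta = 50, 0.25  # feature dimensionality and mean shift per feature

for n_per_class in (15, 50, 100):
    X_tr = np.vstack([rng.normal(0, 1, (n_per_class, dim)),
                      rng.normal(delta, 1, (n_per_class, dim))])
    y_tr = np.repeat([0, 1], n_per_class)
    X_te = np.vstack([rng.normal(0, 1, (500, dim)),
                      rng.normal(delta, 1, (500, dim))])
    y_te = np.repeat([0, 1], 500)

    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    az_resub = roc_auc_score(y_tr, lda.decision_function(X_tr))
    az_hold = roc_auc_score(y_te, lda.decision_function(X_te))
    print(f"n={n_per_class:3d}  resubstitution Az={az_resub:.2f}  "
          f"hold-out Az={az_hold:.2f}")
# The optimistic resubstitution/hold-out gap shrinks as training size grows.
```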

  19. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
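
    The quoted variance components can be projected to other test plans under the standard assumption in this literature that each component scales inversely with its own sampling effort; the helper below is our illustration, not the authors' code.

```python
# Baseline variance components at 10 ng/g total aflatoxin: 10 kg sample,
# 50 g subsample, 1 HPLC analysis. Each component is assumed to scale
# inversely with its own sampling effort.
BASE = {"sampling": 174.40, "preparation": 0.74, "analysis": 0.27}

def total_variance(sample_kg, subsample_g, n_analyses):
    return (BASE["sampling"] * 10.0 / sample_kg
            + BASE["preparation"] * 50.0 / subsample_g
            + BASE["analysis"] / n_analyses)

print(total_variance(10, 50, 1))    # baseline plan: 175.41
print(total_variance(20, 100, 2))   # doubling all effort halves each term
```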

  20. Polymorphism in magic-sized Au144(SR)60 clusters

    DOE PAGES

    Jensen, Kirsten M. O.; Juhas, Pavol; Tofanelli, Marcus A.; ...

    2016-06-14

    Ultra-small, magic-sized metal nanoclusters represent an important new class of materials with properties between molecules and particles. However, their small size challenges the conventional methods for structure characterization. We present the structure of ultra-stable Au144(SR)60 magic-sized nanoclusters obtained from atomic pair distribution function analysis of X-ray powder diffraction data. Our study reveals structural polymorphism in these archetypal nanoclusters. In addition to confirming the theoretically predicted icosahedral-cored cluster, we also find samples with a truncated decahedral core structure, with some samples exhibiting a coexistence of both cluster structures. Although the clusters are monodisperse in size, structural diversity is apparent. Finally, the discovery of polymorphism may open up a new dimension in nanoscale engineering.

  1. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  2. A single test for rejecting the null hypothesis in subgroups and in the overall sample.

    PubMed

    Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra

    2017-01-01

    In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
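
    A generic weighted inverse-normal (Stouffer-type) combination illustrates the idea of assigning greater weight to subgroups with larger expected effects; the exact combining rule and weights in the article may differ.

```python
# Weighted Stouffer combination of one-sided p-values from the overall
# sample and pre-specified subgroups. Weights are illustrative.
import numpy as np
from scipy.stats import norm

def combined_p(pvals, weights):
    z = norm.isf(np.asarray(pvals))           # one-sided p -> Z
    w = np.asarray(weights, dtype=float)
    z_comb = (w @ z) / np.sqrt((w**2).sum())  # weighted Stouffer statistic
    return norm.sf(z_comb)

# Overall sample plus two subgroups, with more weight on the
# moderate-disease subgroup where a larger effect size is anticipated:
print(combined_p([0.04, 0.01, 0.30], weights=[1.0, 1.5, 0.5]))
```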

  3. Influences of Co doping on the structural and optical properties of ZnO nanostructured

    NASA Astrophysics Data System (ADS)

    Majeed Khan, M. A.; Wasi Khan, M.; Alhoshan, Mansour; Alsalhi, M. S.; Aldwayyan, A. S.

    2010-07-01

    Pure and Co-doped ZnO nanostructured samples have been synthesized by a chemical route. We have studied the structural and optical properties of the samples by using X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), field-emission transmission electron microscopy (FETEM), energy-dispersive X-ray (EDX) analysis and UV-VIS spectroscopy. The XRD patterns show that all the samples have hexagonal wurtzite structures. Changes in crystallite size due to mechanical activation were also determined from the X-ray measurements. These results were correlated with changes in particle size followed by SEM and TEM. The average crystallite sizes obtained from XRD were between 20 and 25 nm. The TEM images showed that the average particle size of the undoped ZnO nanostructure was about 20 nm, whereas the smallest average grain size, at 3% Co, was about 15 nm. Optical parameters such as the absorption coefficient (α), energy band gap (Eg), refractive index (n), and dielectric constants (σ) have been determined using different methods.
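
    Crystallite sizes of this kind are conventionally estimated from XRD peak broadening with the Scherrer equation, D = Kλ/(β cos θ); the sketch below uses illustrative peak values, not the authors' data.

```python
# Scherrer estimate of crystallite size from XRD peak broadening:
# D = K * lambda / (beta * cos(theta)). Peak values are illustrative.
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    beta = math.radians(fwhm_deg)               # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha, a 0.4 deg-wide reflection near 2-theta = 34.4 deg:
print(f"{scherrer_size_nm(0.15406, 0.4, 34.4):.1f} nm")  # ~21 nm
```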

  4. Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method

    NASA Astrophysics Data System (ADS)

    Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.

    2004-03-01

    A non-destructive method for the analysis of Zircaloy samples of irregular shape and size has been developed using the recently standardized k0-based internal mono-standard instrumental neutron activation analysis (INAA). Samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWR) and pressurized heavy water reactors (PHWR), respectively, have been analyzed. Samples weighing in the range of a few tens of grams were irradiated in the thermal column of the Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes the in situ relative detection efficiency obtained using the γ-rays of selected activation products in the sample to overcome γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of using the concentration of the internal mono-standard. Concentrations were also determined in a smaller Zircaloy-4 sample by irradiating it in the core position of the reactor to validate the present methodology. The results were compared with literature specifications and were found to be satisfactory. Values of sensitivities and detection limits have been evaluated for the elements analyzed.

  5. Spacecraft mass trade-offs versus radio-frequency power and antenna size at 8 GHz and 32 GHz

    NASA Technical Reports Server (NTRS)

    Gilchriest, C. E.

    1987-01-01

    The purpose of this analysis is to help determine the relative merits of 32 GHz over 8 GHz for future deep space communications. This analysis is only a piece of the overall analysis and only considers the downlink communication mass, power, and size comparisons for 8 and 32 GHz. Both parabolic antennas and flat-plate arrays are considered. The Mars Sample Return mission is considered in some detail as an example of the tradeoffs involved; for this mission the mass, power, and size show a definite advantage of roughly 2:1 in using the 32 GHz over 8 GHz.

  6. Optical, electrical and magnetic properties of nanostructured Mn3O4 synthesized through a facile chemical route

    NASA Astrophysics Data System (ADS)

    Bose, Vipin C.; Biju, V.

    2015-02-01

    A nanostructured Mn3O4 sample with an average crystallite size of ~15 nm is synthesized via the reduction of potassium permanganate using hydrazine. The average particle size obtained from the transmission electron microscopy analysis is in good agreement with the average crystallite size estimated from X-ray diffraction analysis. The presence of Mn4+ ions at the octahedral sites is inferred from the results of Raman, UV-visible absorption and X-ray photoelectron spectroscopy analyses. DC electrical conductivity of the sample in the temperature range 313-423 K is about five orders of magnitude larger than that reported for single crystalline Mn3O4. The dominant conduction mechanism is identified to be polaronic hopping of holes between cations at the octahedral sites. The zero field cooled and field cooled magnetization of the sample is studied in the range 20-300 K. The Curie temperature of the sample is about 45 K, below which the sample is ferrimagnetic. A blocking temperature of 35 K is observed in the field cooled curve. The sample shows hysteresis at temperatures below the Curie temperature with no saturation, even at an applied field of 20 kOe. The presence of an ordered core and a disordered surface of spin arrangements is inferred from the magnetization studies. Above the Curie temperature, the sample shows linear dependence of magnetization on applied field with no hysteresis, characteristic of a paramagnetic phase.

  7. The Effect of Game-Assisted Mathematics Education on Academic Achievement in Turkey: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Turgut, Sedat; Temur, Özlem Dogan

    2017-01-01

    In this research, the effects of using games in the mathematics teaching process on academic achievement in Turkey were examined by the meta-analysis method. For this purpose, the average effect size value and the average effect size values of the moderator variables (education level, the field of education, game type, implementation period and sample size)…

  8. The Statistical Power of Planned Comparisons.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…

  9. Sediment laboratory quality-assurance project: studies of methods and materials

    USGS Publications Warehouse

    Gordon, J.D.; Newland, C.A.; Gray, J.R.

    2001-01-01

    In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project. The Sediment Laboratory Quality-Assurance project is part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material are largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water, which was low in dissolved solids, specific conductance, and neutral pH, showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance with a basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent. When the laboratory used a suspended-sediment concentration method, the median suspended-sediment concentration percent difference was -2.74 percent. The percent difference was calculated as follows: percent difference = ((reported mass − known mass)/known mass) × 100.
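
    The percent-difference formula quoted above translates directly to code; the masses here are hypothetical values chosen to reproduce the reported TSS median.

```python
# The quoted formula, applied to hypothetical masses:
def percent_difference(reported_mass, known_mass):
    return (reported_mass - known_mass) / known_mass * 100.0

print(percent_difference(reported_mass=81.96, known_mass=100.0))  # -18.04
```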

  10. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions are fitted and validated using data from a select number of states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was found that as the value of the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities that are used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.

  12. Bacterial contamination of boar semen affects the litter size.

    PubMed

    Maroto Martín, Luis O; Muñoz, Eduardo Cruz; De Cupere, Françoise; Van Driessche, Edilbert; Echemendia-Blanco, Dannele; Rodríguez, José M Machado; Beeckmans, Sonia

    2010-07-01

    One hundred and fifteen semen samples were collected from 115 different boars from two farms in Cuba. The boars belonged to five different breeds. Evaluation of the semen sample characteristics (volume, pH, colour, smell, motility of sperm cells) revealed that they meet international standards. The samples were also tested for the presence of agglutinated sperm cells and for bacterial contamination. Seventy-five percent of the ejaculates were contaminated with at least one type of bacteria, and E. coli was by far the major contaminant, being present in 79% of the contaminated semen samples (n=68). Other contaminating bacteria belonged to the genera Proteus (n=31), Serratia (n=31), Enterobacter (n=24), Klebsiella (n=12), Staphylococcus (n=10), Streptococcus (n=8) and Pseudomonas (n=7). Anaerobic bacteria were detected in only one sample. Pearson's analysis of the data revealed that there is a positive correlation between the presence of E. coli and sperm agglutination, and a negative correlation between sperm agglutination and litter size. One-way ANOVA and post hoc Tukey analysis of 378 litters showed that the litter size is significantly reduced when semen is used that is contaminated with spermagglutinating E. coli above a threshold value of 3.5×10³ CFU/ml. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
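
    A minimal sketch of the pseudo-predator construction that underlies these simulations: bootstrap each prey library, average the resample, and mix the means using known diet proportions. Data, species names, and bootstrap sample sizes are illustrative assumptions, not the article's algorithm for choosing them.

```python
# Pseudo-predator signatures of known composition from bootstrapped prey
# signatures. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_fa = 8  # number of fatty acids in the signature

# Hypothetical prey libraries (rows = individuals, columns = fatty acids):
prey = {"seal": rng.dirichlet(np.ones(n_fa), size=60),
        "walrus": rng.dirichlet(np.ones(n_fa), size=40)}
diet = {"seal": 0.7, "walrus": 0.3}  # known diet proportions

def pseudo_predator(prey, diet, boot_n, rng):
    sig = np.zeros(n_fa)
    for species, library in prey.items():
        idx = rng.integers(0, len(library), size=boot_n[species])
        sig += diet[species] * library[idx].mean(axis=0)
    return sig / sig.sum()  # renormalize to a proportion vector

print(pseudo_predator(prey, diet, boot_n={"seal": 30, "walrus": 20}, rng=rng))
```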

  14. Characterization of fly ash from low-sulfur and high-sulfur coal sources: Partitioning of carbon and trace elements with particle size

    USGS Publications Warehouse

    Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.

    1999-01-01

    Fly ash samples were collected in November and December of 1994, from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 µm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the -500 mesh [-25 µm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the -325 mesh (-42 µm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.

  15. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step, intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  16. The development of miniplex primer sets for the analysis of degraded DNA

    NASA Astrophysics Data System (ADS)

    McCord, Bruce; Opel, Kerry; Chung, Denise; Drabek, Jiri; Tatarek, Nancy; Meadows Jantz, Lee; Butler, John

    2005-05-01

    In this project, a new set of multiplexed PCR reactions has been developed for the analysis of degraded DNA. These DNA markers, known as Miniplexes, utilize primers that have shorter amplicons for use in short tandem repeat (STR) analysis of degraded DNA. In our work we have defined six of these new STR multiplexes, each of which consists of 3 to 4 reduced-size STR loci, and each labeled with a different fluorescent dye. When compared to commercially available STR systems, reductions in size of up to 300 basepairs are possible. In addition, these newly designed amplicons consist of loci that are fully compatible with the national DNA database known as CODIS. To demonstrate compatibility with commercial STR kits, a concordance study of 532 DNA samples of Caucasian, African American, and Hispanic origin was undertaken. There was 99.77% concordance between allele calls with the two methods. Of these 532 samples, only 15 samples showed discrepancies at one of 12 loci. These occurred predominantly at 2 loci, vWA and D13S317. DNA sequencing revealed that these locations had deletions between the two primer binding sites. Uncommon deletions like these can be expected in certain samples and will not affect the utility of the Miniplexes as tools for degraded DNA analysis. The Miniplexes were also applied to enzymatically digested DNA to assess their potential in degraded DNA analysis. The results demonstrated a greatly improved efficiency in the analysis of degraded DNA when compared to commercial STR genotyping kits. A series of human skeletal remains that had been exposed to a variety of environmental conditions were also examined. Sixty-four percent of the samples generated full profiles when amplified with the Miniplexes, while only sixteen percent of the samples tested generated full profiles with a commercial kit. In addition, complete profiles were obtained for eleven of the twelve Miniplex loci which had amplicon size ranges less than 200 base pairs. These data clearly demonstrate that smaller PCR amplicons provide an attractive alternative to mitochondrial DNA for forensic analysis of degraded DNA.

  17. Stocking, Forest Type, and Stand Size Class - The Southern Forest Inventory and Analysis Unit's Calculation of Three Important Stand Descriptors

    Treesearch

    Dennis M. May

    1990-01-01

    The procedures by which the Southern Forest Inventory and Analysis unit calculates stocking from tree data collected on inventory sample plots are described in this report. Stocking is then used to ascertain two other important stand descriptors: forest type and stand size class. Inventory data for three plots from the recently completed 1989 Tennessee survey are used...

  18. Particle size analysis of lamb meat: Effect of homogenization speed, comparison with myofibrillar fragmentation index and its relationship with shear force.

    PubMed

    Karumendu, L U; Ven, R van de; Kerr, M J; Lanza, M; Hopkins, D L

    2009-08-01

    The impact of homogenization speed on particle size (PS) results was examined using samples from the M. longissimus thoracis et lumborum (LL) of 40 lambs. One gram duplicate samples from meat aged for 1 and 5 days were homogenized at five different speeds: 11,000, 13,000, 16,000, 19,000 and 22,000 rpm. In addition to this, LL samples from 30 different lamb carcases, also aged for 1 and 5 days, were used to study the comparison between PS and myofibrillar fragmentation index (MFI) values. In this case, 1 g duplicate samples (n=30) were homogenized at 16,000 rpm and the other half (0.5 g samples) at 11,000 rpm (n=30). The homogenates were then subjected to respective combinations of treatments which included either PS analysis or the determination of MFI, both with or without three cycles of centrifugation. All 140 samples of LL included 65 g blocks for subsequent shear force (SF) testing. Homogenization at 16,000 rpm provided the greatest ability to detect ageing differences for particle size between samples aged for 1 and 5 days. Particle size at the 25% quantile provided the best result for detecting differences due to ageing. It was observed that as ageing increased the mean PS decreased and was significantly (P<0.001) less for 5-day-aged samples compared to 1-day-aged samples, while MFI values significantly increased (P<0.001) as the ageing period increased. When comparing the PS and MFI methods it became apparent that, as opposed to the MFI method, there was a greater coefficient of variation for the PS method, which warranted a quality assurance system. Given this requirement and examination of the mean, standard deviation and the 25% quantile for PS data, it was concluded that three cycles of centrifugation were not necessary, and this also applied to the MFI method. There were significant correlations (P<0.001) within the same lamb loin sample aged for a given period between mean MFI and mean PS (-0.53), mean MFI and mean SF (-0.38), and mean PS and mean SF (0.23). It was concluded that PS analysis offers significant potential for streamlining the determination of myofibrillar degradation when samples are measured after homogenization at 16,000 rpm with no centrifugation.

  19. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
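
    The EPV floor itself is simple arithmetic, which is part of the paper's point that it cannot carry the full weight of sample size planning; the helper name and defaults below are ours.

```python
# Conventional EPV arithmetic (helper name and defaults are ours):
import math

def min_n_for_epv(n_predictors, event_rate, epv=10):
    """Smallest n with expected events per candidate predictor >= epv."""
    return math.ceil(epv * n_predictors / event_rate)

print(min_n_for_epv(8, 0.20))  # 8 predictors, 20% prevalence -> n = 400
```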

  20. Practical limitations of single particle ICP-MS in the determination of nanoparticle size distributions and dissolution: case of rare earth oxides.

    PubMed

    Fréchette-Viens, Laurie; Hadioui, Madjid; Wilkinson, Kevin J

    2017-01-15

    The applicability of single particle ICP-MS (SP-ICP-MS) for the analysis of nanoparticle size distributions and the determination of particle numbers was evaluated using the rare earth oxide La2O3 as a model particle. The composition of the storage containers, as well as the ICP-MS sample introduction system, were found to significantly impact SP-ICP-MS analysis. While La2O3 nanoparticles (La2O3 NP) did not appear to interact strongly with sample containers, adsorptive losses of La3+ (over 24 h) were substantial (>72%) for fluorinated ethylene propylene bottles as opposed to polypropylene (<10%). Furthermore, each part of the sample introduction system (nebulizers made of perfluoroalkoxy alkane (PFA) or glass, PFA capillary tubing, and polyvinyl chloride (PVC) peristaltic pump tubing) contributed to La3+ adsorptive losses. On the other hand, the presence of natural organic matter in the nanoparticle suspensions led to a decreased adsorptive loss in both the sample containers and the introduction system, suggesting that SP-ICP-MS may nonetheless be appropriate for NP analysis in environmental matrices. Coupling of an ion-exchange resin to the SP-ICP-MS led to more accurate determinations of the La2O3 NP size distributions. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. The Effect of Unequal Samples, Heterogeneity of Covariance Matrices, and Number of Variables on Discriminant Analysis Classification Tables and Related Statistics.

    ERIC Educational Resources Information Center

    Spearing, Debra; Woehlke, Paula

    To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…

  2. A simple autocorrelation algorithm for determining grain size from digital images of sediment

    USGS Publications Warehouse

    Rubin, D.M.

    2004-01-01

    Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
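
    A conceptual sketch of why autocorrelation tracks grain size: coarser grains stay correlated over larger pixel offsets, so the autocorrelation curve decays more slowly with lag. The synthetic textures below stand in for real sediment photographs; this is not Rubin's calibrated algorithm.

```python
# Coarse textures decorrelate more slowly with pixel lag than fine ones.
import numpy as np
from scipy.signal import fftconvolve

def fake_sediment(grain_px, shape=(256, 256), seed=0):
    """Synthetic texture: white noise smoothed over a grain-sized window."""
    noise = np.random.default_rng(seed).normal(size=shape)
    kernel = np.ones((grain_px, grain_px)) / grain_px**2
    return fftconvolve(noise, kernel, mode="same")

def autocorr_curve(img, max_lag):
    """Normalized horizontal autocorrelation at pixel lags 0..max_lag-1."""
    img = (img - img.mean()) / img.std()
    return [1.0 if lag == 0 else float((img[:, :-lag] * img[:, lag:]).mean())
            for lag in range(max_lag)]

for grain in (3, 9):
    curve = autocorr_curve(fake_sediment(grain), max_lag=6)
    print(f"grain ~{grain} px:", " ".join(f"{c:.2f}" for c in curve))
```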

  3. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.

  4. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
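
    For readers unfamiliar with the machinery, the sketch below evaluates the O'Brien-Fleming-type spending function of Lan and DeMets, alpha(t) = 2 - 2*Phi(z(1 - alpha/2) / sqrt(t)), at a set of information fractions and returns the type I error newly available at each look. It illustrates alpha spending generally, not the authors' Bayesian critical-value algorithm.

      import numpy as np
      from scipy.stats import norm

      def obf_spending(alpha, info_fracs):
          # Cumulative type I error "spent" by each information fraction t.
          t = np.asarray(info_fracs, dtype=float)
          z = norm.ppf(1 - alpha / 2)
          cumulative = 2 * (1 - norm.cdf(z / np.sqrt(t)))
          # Error newly available at each interim look.
          increments = np.diff(np.concatenate(([0.0], cumulative)))
          return cumulative, increments

      # Four equally spaced looks, overall alpha = 0.05; the early looks
      # receive almost no alpha, which is why this function is conservative.
      cum, inc = obf_spending(0.05, [0.25, 0.5, 0.75, 1.0])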

  5. Analysis of variability in additive manufactured open cell porous structures.

    PubMed

    Evans, Sam; Jones, Eric; Fox, Pete; Sutcliffe, Chris

    2017-06-01

    In this article, a novel method of analysing build consistency of additively manufactured open cell porous structures is presented. Conventionally, methods such as micro computed tomography or scanning electron microscopy imaging have been applied to the measurement of geometric properties of porous material; however, high costs and low speeds make them unsuitable for analysing high volumes of components. Recent advances in the image-based analysis of open cell structures have opened up the possibility of qualifying variation in manufacturing of porous material. Here, a photogrammetric method of measurement, employing image analysis to extract values for geometric properties, is used to investigate the variation between identically designed porous samples measuring changes in material thickness and pore size, both intra- and inter-build. Following the measurement of 125 samples, intra-build material thickness showed variation of ±12%, and pore size ±4% of the mean measured values across five builds. Inter-build material thickness and pore size showed mean ranges higher than those of intra-build, ±16% and ±6% of the mean material thickness and pore size, respectively. Acquired measurements created baseline variation values and demonstrated techniques suitable for tracking build deviation and inspecting additively manufactured porous structures to indicate unwanted process fluctuations.

  6. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
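
    The small-sample bias described above is easy to reproduce. The sketch below is a hypothetical two-group simulation (not the authors' factorial design) contrasting the optimistic resubstitution classification rate of a linear discriminant with a cross-validated rate.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n, p = 15, 6                                  # small groups, several variables
      X = np.vstack([rng.normal(0.0, 1, (n, p)),    # two overlapping groups
                     rng.normal(0.5, 1, (n, p))])
      y = np.repeat([0, 1], n)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      resub = lda.score(X, y)                       # biased upward
      cv = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
      print(f"resubstitution: {resub:.2f}  cross-validated: {cv:.2f}")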

  7. Are Apparent Sex Differences in Mean IQ Scores Created in Part by Sample Restriction and Increased Male Variance?

    ERIC Educational Resources Information Center

    Dykiert, Dominika; Gale, Catharine R.; Deary, Ian J.

    2009-01-01

    This study investigated the possibility that apparent sex differences in IQ are at least partly created by the degree of sample restriction from the baseline population. We used a nationally representative sample, the 1970 British Cohort Study. Sample sizes varied from 6518 to 11,389 between data-collection sweeps. Principal components analysis of…

  8. Laboratory and Airborne BRDF Analysis of Vegetation Leaves and Soil Samples

    NASA Technical Reports Server (NTRS)

    Georgiev, Georgi T.; Gatebe, Charles K.; Butler, James J.; King, Michael D.

    2008-01-01

    Laboratory-based Bidirectional Reflectance Distribution Function (BRDF) analysis of vegetation leaves, soil, and leaf litter samples is presented. The leaf litter and soil samples, numbered 1 and 2, were obtained from a site located in the savanna biome of South Africa (Skukuza: 25.0degS, 31.5degE). A third soil sample, number 3, was obtained from Etosha Pan, Namibia (19.20degS, 15.93degE, alt. 1100 m). In addition, the BRDF of local fresh and dry leaves from tulip tree (Liriodendron tulipifera) and acacia tree (Acacia greggii) was studied. It is shown how the BRDF depends on the incident and scatter angles, sample size (i.e., crushed versus whole leaf), soil sample fraction size, sample status (i.e., fresh versus dry leaves), vegetation species (poplar versus acacia), and the vegetation's biochemical composition. As a demonstration of the application of the results of this study, airborne BRDF measurements acquired with NASA's Cloud Absorption Radiometer (CAR) over the same general site where the soil and leaf litter samples were obtained are compared to the laboratory results. Good agreement between laboratory- and airborne-measured BRDF is reported.

  9. Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan

    PubMed Central

    Pong, Suet-ling; Johnston, Jamie; Chen, Vivien

    2014-01-01

    Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting–school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US. PMID:24850978

  11. Extension of latin hypercube samples with correlated variables.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J. PhD.

    2006-11-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
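
    The sketch below illustrates only the basic doubling step for the uncorrelated case: re-dividing the [0, 1) axis of each variable into 2m strata leaves exactly m strata vacant, and the new points are placed there. The rank-correlation restoration that is the substance of the report is omitted here.

      import numpy as np

      def extend_lhs(sample, rng=None):
          # `sample` is an (m, d) Latin hypercube sample on [0, 1).
          rng = rng or np.random.default_rng()
          m, d = sample.shape
          new_cols = []
          for j in range(d):
              # Each original point occupies one of 2m finer strata ...
              occupied = np.floor(sample[:, j] * 2 * m).astype(int)
              vacant = np.setdiff1d(np.arange(2 * m), occupied)
              rng.shuffle(vacant)  # random pairing across dimensions
              # ... and the m new points fill the m vacant strata.
              new_cols.append((vacant + rng.random(m)) / (2 * m))
          return np.vstack([sample, np.column_stack(new_cols)])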

  12. A procedure for partitioning bulk sediments into distinct grain-size fractions for geochemical analysis

    USGS Publications Warehouse

    Barbanti, A.; Bothner, Michael H.

    1993-01-01

    A method to separate sediments into discrete size fractions for geochemical analysis has been tested. The procedures were chosen to minimize the destruction or formation of aggregates and involved gentle sieving and settling of wet samples. Freeze-drying and sonication pretreatments, known to influence aggregates, were used for comparison. Freeze-drying was found to increase the silt/clay ratio by an average of 180 percent compared to analysis of a wet sample that had been wet sieved only. Sonication of a wet sample decreased the silt/clay ratio by 51 percent. The concentrations of metals and organic carbon in the separated fractions changed depending on the pretreatment procedures in a manner consistent with the hypothesis that aggregates consist of fine-grained organic- and metal-rich particles. The coarse silt fraction of a freeze-dried sample contained 20–44 percent higher concentrations of Zn, Cu, and organic carbon than the coarse silt fraction of the wet sample. Sonication resulted in concentrations of these analytes that were 18–33 percent lower in the coarse silt fraction than found in the wet sample. Sonication increased the concentration of lead in the clay fraction by an average of 40 percent compared to an unsonicated sample. Understanding the magnitude of change caused by different analysis protocols is an aid in designing future studies that seek to interpret the spatial distribution of contaminated sediments and their transport mechanisms.

  13. Digital image processing of nanometer-size metal particles on amorphous substrates

    NASA Technical Reports Server (NTRS)

    Soria, F.; Artal, P.; Bescos, J.; Heinemann, K.

    1989-01-01

    The task of differentiating very small metal aggregates supported on amorphous films from the phase contrast image features inherently stemming from the support is extremely difficult in the nanometer particle size range. Digital image processing was employed to overcome some of the ambiguities in evaluating such micrographs. It was demonstrated that such processing allowed positive particle detection and a limited degree of statistical size analysis even for micrographs where, by naked-eye examination, the distinction between particles and erroneous substrate features would seem highly ambiguous. The smallest size class detected for Pd/C samples peaks at 0.8 nm. This size class was found in various samples prepared under different evaporation conditions, and it is concluded that these particles consist of a 'magic number' of 13 atoms and have cuboctahedral or icosahedral crystal structure.

  14. Power and sample size for multivariate logistic modeling of unmatched case-control studies.

    PubMed

    Gail, Mitchell H; Haneuse, Sebastien

    2017-01-01

    Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
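
    Where closed-form results are thin, power for a multivariate logistic analysis is often approximated by simulation, one of the two routes the abstract mentions. The sketch below is a generic illustration with hypothetical coefficients and a single covariate, not the paper's method or software.

      import numpy as np
      import statsmodels.api as sm

      def sim_power(n, beta, n_sims=500, alpha=0.05):
          # Fraction of simulated datasets in which the scalar exposure
          # effect `beta` is detected while controlling for a covariate.
          rng = np.random.default_rng(0)
          hits = 0
          for _ in range(n_sims):
              x = rng.normal(size=n)                      # exposure
              z = rng.normal(size=n)                      # covariate
              logit = -0.5 + beta * x + 0.3 * z           # hypothetical model
              y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
              X = sm.add_constant(np.column_stack([x, z]))
              fit = sm.Logit(y, X).fit(disp=0)
              hits += fit.pvalues[1] < alpha              # test on x only
          return hits / n_sims

      print(sim_power(n=300, beta=0.4))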

  15. GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices

    PubMed Central

    Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.

    2014-01-01

    Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688

  16. Self-objectification and disordered eating: A meta-analysis.

    PubMed

    Schaefer, Lauren M; Thompson, J Kevin

    2018-06-01

    Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research. © 2018 Wiley Periodicals, Inc.
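
    For reference, the simplest way correlation-based effect sizes like the overall r = .39 are pooled is through Fisher's z transform; the sketch below is a fixed-effect version with study weights n - 3, offered as illustration only (a meta-analysis with moderators, as here, would normally use a random-effects model).

      import numpy as np

      def pooled_r(rs, ns):
          # Fisher z-transform each study's r, weight by n - 3,
          # average, and back-transform to the r scale.
          rs, ns = np.asarray(rs, float), np.asarray(ns, float)
          z = np.arctanh(rs)
          w = ns - 3
          return np.tanh((w * z).sum() / w.sum())

      # Hypothetical studies for illustration.
      print(pooled_r([0.30, 0.45, 0.38], [120, 85, 210]))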

  17. PIXE Analysis of Atmospheric Aerosol Samples Collected in the Adirondack Mountains

    NASA Astrophysics Data System (ADS)

    Yoskowitz, Josh; Ali, Salina; Nadareski, Benjamin; Safiq, Alexandrea; Smith, Jeremy; Labrake, Scott; Vineyard, Michael

    2013-10-01

    We have performed an elemental analysis of atmospheric aerosol samples collected at Piseco Lake in Upstate New York using proton induced X-ray emission spectroscopy (PIXE). This work is part of a systematic study of airborne pollution in the Adirondack Mountains. Of particular interest is the sulfur content that can contribute to acid rain, a well-documented problem in the Adirondacks. We used a nine-stage cascade impactor to collect the samples and distribute the particulate matter onto Kapton foils by particle size. The PIXE experiments were performed with 2.2-MeV proton beams from the 1.1-MV pelletron accelerator in the Union College Ion-Beam Analysis Laboratory. X-ray energy spectra were measured with a silicon drift detector and analyzed with GUPIX software to determine the elemental concentrations of the aerosols. A broad range of elements from silicon to zinc was detected, with significant sulfur concentrations measured for particulate matter between 0.25 and 0.5 μm in size. The PIXE analysis will be described and preliminary results will be presented.

  18. TableSim--A program for analysis of small-sample categorical data.

    Treesearch

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft® command-line environments.

  19. AEROSOL SAMPLING AND ANALYSIS, PHOENIX, ARIZONA

    EPA Science Inventory

    An atmospheric sampling program was carried out in the greater Phoenix, Arizona metropolitan area in November, 1975. Objectives of the study were to measure aerosol mass flux through Phoenix and to characterize the aerosol according to particle type and size. The ultimate goal of...

  20. A computational framework for estimating statistical power and planning hypothesis-driven experiments involving one-dimensional biomechanical continua.

    PubMed

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2018-01-03

    Statistical power assessment is an important component of hypothesis-driven research but until relatively recently (mid-1990s) no methods were available for assessing power in experiments involving continuum data and in particular those involving one-dimensional (1D) time series. The purpose of this study was to describe how continuum-level power analyses can be used to plan hypothesis-driven biomechanics experiments involving 1D data. In particular, we demonstrate how theory- and pilot-driven 1D effect modeling can be used for sample-size calculations for both single- and multi-subject experiments. For theory-driven power analysis we use the minimum jerk hypothesis and single-subject experiments involving straight-line, planar reaching. For pilot-driven power analysis we use a previously published knee kinematics dataset. Results show that powers on the order of 0.8 can be achieved with relatively small sample sizes, five and ten for within-subject minimum jerk analysis and between-subject knee kinematics, respectively. However, the appropriate sample size depends on a priori justifications of biomechanical meaning and effect size. The main advantage of the proposed technique is that it encourages a priori justification regarding the clinical and/or scientific meaning of particular 1D effects, thereby robustly structuring subsequent experimental inquiry. In short, it shifts focus from a search for significance to a search for non-rejectable hypotheses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Particle-size dependence on metal(loid) distributions in mine wastes: Implications for water contamination and human exposure

    USGS Publications Warehouse

    Kim, C.S.; Wilson, K.M.; Rytuba, J.J.

    2011-01-01

    The mining and processing of metal-bearing ores has resulted in contamination issues where waste materials from abandoned mines remain in piles of untreated and unconsolidated material, posing the potential for waterborne and airborne transport of toxic elements. This study presents a systematic method of particle size separation, mass distribution, and bulk chemical analysis for mine tailings and adjacent background soil samples from the Rand historic mining district, California, in order to assess particle size distribution and related trends in metal(loid) concentration as a function of particle size. Mine tailings produced through stamp milling and leaching processes were found to have both a narrower and finer particle size distribution than background samples, with significant fractions of particles available in a size range (≤250 μm) that could be incidentally ingested. In both tailings and background samples, the majority of trace metal(loid)s display an inverse relationship between concentration and particle size, resulting in higher proportions of As, Cr, Cu, Pb and Zn in finer-sized fractions which are more susceptible to both water- and wind-borne transport as well as ingestion and/or inhalation. Established regulatory screening levels for such elements may, therefore, significantly underestimate potential exposure risk if relying solely on bulk sample concentrations to guide remediation decisions. Correlations in elemental concentration trends (such as between As and Fe) indicate relationships between elements that may be relevant to their chemical speciation. © 2011 Elsevier Ltd.

  2. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
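
    In the simplest case, the per-pixel versus per-polygon distinction comes down to how each reference unit is weighted when the confusion data are summarized. The hypothetical sketch below computes overall accuracy either per sampled unit or weighted by polygon area; it is a schematic, not a method taken from the review.

      import numpy as np

      def overall_accuracy(map_labels, ref_labels, weights=None):
          # `weights` holds polygon areas for an area-weighted (per-polygon)
          # assessment; None gives an unweighted per-sample (per-pixel) rate.
          map_labels = np.asarray(map_labels)
          ref_labels = np.asarray(ref_labels)
          correct = (map_labels == ref_labels).astype(float)
          if weights is None:
              return correct.mean()
          w = np.asarray(weights, dtype=float)
          return (correct * w).sum() / w.sum()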

  3. Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.

    PubMed

    Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A

    2012-05-15

    The fruit fly (Drosophila melanogaster) is an extensively used and powerful genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit flies for chemical analysis. The technique results in an ability to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is the means for two-point monitoring of hemolymph composition for individual flies.

  4. Use of ancillary data to improve the analysis of forest health indicators

    Treesearch

    Dave Gartner

    2013-01-01

    In addition to its standard suite of mensuration variables, the Forest Inventory and Analysis (FIA) program of the U.S. Forest Service also collects data on forest health variables formerly measured by the Forest Health Monitoring program. FIA obtains forest health information on a subset of the base sample plots. Due to the sample size differences, the two sets of...

  5. Laboratory and exterior decay of wood plastic composite boards: voids analysis and computed tomography

    Treesearch

    Grace Sun; Rebecca E. Ibach; Meghan Faillace; Marek Gnatowski; Jessie A. Glaeser; John Haight

    2016-01-01

    After exposure in the field and laboratory soil block culture testing, the void content of wood–plastic composite (WPC) decking boards was compared to unexposed samples. A void volume analysis was conducted based on calculations of sample density and from micro-computed tomography (microCT) data. It was found that reference WPC contains voids of different sizes from...

  6. Massively parallel sequencing of 17 commonly used forensic autosomal STRs and amelogenin with small amplicons.

    PubMed

    Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin

    2016-05-01

    The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system that is optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. Then, PCR products were generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a subsequent barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA as well as from STR loci which are analyzed with large-sized amplicons in the CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-sized amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Synthesis, photoluminescence and Magnetic properties of iron oxide (α-Fe2O3) nanoparticles through precipitation or hydrothermal methods

    NASA Astrophysics Data System (ADS)

    Lassoued, Abdelmajid; Lassoued, Mohamed Saber; Dkhil, Brahim; Ammar, Salah; Gadri, Abdellatif

    2018-07-01

    In this work, iron oxide (α-Fe2O3) nanoparticles were synthesized using two different methods: precipitation and hydrothermal. Size, structural, optical and magnetic properties were determined and compared using X-ray diffraction (XRD), Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), Fourier Transform Infra-Red (FT-IR) spectroscopy, Raman spectroscopy, Differential Thermal Analysis (DTA), Thermogravimetric Analysis (TGA), Ultraviolet-Visible (UV-Vis) analysis, Superconducting QUantum Interference Device (SQUID) magnetometry and Photoluminescence (PL). XRD data revealed a rhombohedral (hexagonal) structure with the space group R-3c and showed an average size of 21 nm for hydrothermal samples and 33 nm for precipitation samples, in agreement with the TEM and SEM images. FT-IR confirms the phase purity of the synthesized nanoparticles. Raman spectroscopy was used not only to prove that pure α-Fe2O3 was synthesized but also to identify its phonon modes. The TGA showed three mass losses, whereas the DTA showed three endothermic peaks. The decrease in hematite particle size from 33 nm for precipitation samples to 21 nm for hydrothermal samples is responsible for the increase in the optical band gap from 1.94 to 2.10 eV; the two quantities are inversely related. The products exhibited attractive magnetic properties with good saturation magnetization, as examined by SQUID magnetometry. Photoluminescence measurements showed a strong emission band at 450 nm. Pure hematite prepared by the hydrothermal method has the smallest size, best crystallinity, highest band gap and best saturation magnetization compared to hematite prepared by the precipitation method.

  8. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  9. RAINDROP DISTRIBUTIONS AT MAJURO ATOLL, MARSHALL ISLANDS.

    DTIC Science & Technology

    (*RAINDROPS, MARSHALL ISLANDS), (*ATMOSPHERIC PRECIPITATION, TROPICAL REGIONS), PARTICLE SIZE, SAMPLING, TABLES(DATA), WATER, ATTENUATION, DISTRIBUTION, VOLUME, RADAR REFLECTIONS, RAINFALL, PHOTOGRAPHIC ANALYSIS, COMPUTERS

  10. Size-segregated compositional analysis of aerosol particles collected in the European Arctic during the ACCACIA campaign

    NASA Astrophysics Data System (ADS)

    Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.

    2016-03-01

    Single-particle compositional analysis of filter samples collected on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size-segregated particle compositions and size distributions, and these were compared to corresponding data from wing-mounted optical particle counters. Reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYbrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.

  11. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
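
    For context, fixed-precision plans of this kind typically rest on Taylor's power law, variance = a * mean**b: setting the precision D = SE/mean gives the minimum sample size n = a * mean**(b - 2) / D**2, which reproduces the pattern above of required sample size falling as density rises. The sketch below implements that standard formula; the coefficients a and b are hypothetical, since the fitted values are not given in the abstract.

      from math import ceil

      def min_sample_size(mean_density, a, b, precision=0.25):
          # Minimum n such that SE/mean <= precision, given Taylor's
          # power law variance = a * mean**b.
          return ceil(a * mean_density ** (b - 2) / precision ** 2)

      # Hypothetical coefficients for illustration.
      print(min_sample_size(1.0, a=2.5, b=1.4))    # low density -> larger n
      print(min_sample_size(10.0, a=2.5, b=1.4))   # high density -> smaller n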

  12. Rapid and non-invasive analysis of deoxynivalenol in durum and common wheat by Fourier-Transform Near Infrared (FT-NIR) spectroscopy.

    PubMed

    De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A

    2009-06-01

    Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 μm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 μg/kg) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 μg/kg). Coefficients of determination (r2) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r2 = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r2 = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 μg/kg DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.

  13. Integrated investigation of the mixed origin of lunar sample 72161,11

    NASA Technical Reports Server (NTRS)

    Basu, A.; Des Marais, D. J.; Hayes, J. M.; Meinschein, W. G.

    1975-01-01

    The comminution-agglutination model and the solar-wind implantation-retention model are used to postulate the origins of the particulate components of lunar sample (72161,11), a submillimeter fraction of a surface sample from the dark mantle regolith at LRV-3. Grain-size analysis was performed by wet sieving with liquid argon, and analyses for CO2, CO, CH4, and H2 were carried out by stepwise pyrolysis in a helium atmosphere. The results indicate that the present sample is from a mature regolith, but the agglutinate content is only 30% in the particle-size range between 90 and 177 microns, indicating an apparent departure from steady state. Analyses of the carbon, methane, and hydrogen concentrations in size fractions larger than 149 microns show that the volume-correlated component of these species increases with increased grain size. It is suggested that the observed increase can be explained in terms of mixing of a dominant local population of coarser agglutinates having high carbon and hydrogen concentrations with an imported population of finer agglutinates relatively poor in carbon and hydrogen.

  14. Large exchange bias effect in NiFe2O4/CoO nanocomposites

    NASA Astrophysics Data System (ADS)

    Mohan, Rajendra; Prasad Ghosh, Mritunjoy; Mukherjee, Samrat

    2018-03-01

    In this work, we report the exchange bias effect of NiFe2O4/CoO nanocomposites, synthesized via a chemical co-precipitation method. Four samples of different particle size ranging from 4 nm to 31 nm were prepared with the annealing temperature varying from 200 °C to 800 °C. X-ray diffraction analysis of all the samples confirmed the presence of the cubic spinel phase of nickel ferrite along with the CoO phase, without trace of any impurity. Sizes of the particles were studied from transmission electron micrographs and were found to be in agreement with those estimated from X-ray diffraction. Field-cooled (FC) hysteresis loops at 5 K revealed an exchange bias (HE) of 2.2 kOe for the sample heated at 200 °C, which decreased with the increase of particle size. Exchange bias expectedly vanished at 300 K due to high thermal energy (kBT) and low effective surface anisotropy. M-T curves revealed a blocking temperature of 135 K for the sample with the smallest particle size.

  15. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    NASA Astrophysics Data System (ADS)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, when the sample is small. A possible alternative to these methods is an optical technique such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty of replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm. A Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2 was used on five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine different size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module full of running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement was made for a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating its own optical model, fitting the optical parameters that mainly depend on the color and the shape of the analyzed particles. As a second alternative, a unique optical model valid for a broad range of soils, developed by the Department of Soil, Water, and Environmental Science of the University of Arizona (personal communication, already submitted), was tested. The results were compared with the particle size distribution measured in the same soils and aggregate classes using the hydrometer method. Preliminary results indicate a better calibration of the technique using the optical model of the Department of Soil, Water, and Environmental Science of the University of Arizona, which yielded good correlations (r2 > 0.85). This result suggests that, with an appropriate calibration of the optical model, laser diffractometry might provide a reliable soil particle characterization.

  16. Characterization and utilization potential of basalt rock from East-Lampung district

    NASA Astrophysics Data System (ADS)

    Isnugroho, K.; Hendronursito, Y.; Birawidha, D. C.

    2018-01-01

    The aim of this research was to study the petrography and chemical properties of basalt rock from East Lampung district, Lampung province. Petrography analysis was performed using a polarization microscope, and analysis of chemical composition using the X-RF method. From the analysis of basalt rock samples, the mineral composition consists of pyroxene, plagioclase, olivine, and opaque minerals. The groundmass of the basalt samples is composed of plagioclase and pyroxene with subhedral-anhedral shapes, forming an intergranular texture with uniform distribution. Plagioclase is colorless and blade shaped, transformed into opaque minerals with a size of <0.2 mm, whereas pyroxene is present among the blades of plagioclase, with a greenish tint and a size of <0.006 mm. The opaque minerals have rectangular to irregular shapes, with a size of <0.16 mm. The chemical composition of the basalt samples consists of 37.76-59.64 SiO2; 10.10-20.93 Fe2O3; 11.77-14.32 Al2O3; 5.57-14.75 CaO; 5.37-9.15 MgO; 1.40-3.34 Na2O. From the calculation, the acidity ratio obtained was (Ma) = 3.81. These values indicate that the basalt rock from East Lampung district has the potential to be utilized as stone wool fiber.

  17. The determination of specific forms of aluminum in natural water

    USGS Publications Warehouse

    Barnes, R.B.

    1975-01-01

    A procedure for analysis and pretreatment of natural-water samples to determine very low concentrations of Al is described which distinguishes the rapidly reacting equilibrium species from the metastable or slowly reacting macro ions and colloidal suspended material. Aluminum is complexed with 8-hydroxyquinoline (oxine), pH is adjusted to 8.3 to minimize interferences, and the aluminum oxinate is extracted with methyl isobutyl ketone (MIBK) prior to analysis by atomic absorption. To determine equilibrium species only, the contact time between sample and 8-hydroxyquinoline is minimized. The Al may be extracted at the sample site with a minimum of equipment and the MIBK extract stored for several weeks prior to atomic absorption analysis. Data obtained from analyses of 39 natural groundwater samples indicate that filtration through a 0.1-μm pore size filter is not an adequate means of removing all insoluble and metastable Al species present, and extraction of Al immediately after collection is necessary if only dissolved and readily reactive species are to be determined. An average of 63% of the Al present in natural waters that had been filtered through 0.1-μm pore size filters was in the form of monomeric ions. The total Al concentration, which includes all forms that passed through a 0.1-μm pore size filter, ranged from 2 to 70 μg/l. The concentration of Al in the form of monomeric ions ranged from below detection to 57 μg/l. Most of the natural water samples used in this study were collected from thermal springs and oil wells. © 1975.

  18. Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup

    NASA Astrophysics Data System (ADS)

    Braun, A.; Neukum, C.; Azzam, R.

    2011-12-01

    The accelerating production and application of engineered nanoparticles is causing concerns regarding their release and fate in the environment. For assessing the risk posed to drinking water resources it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocities. The analysis of nanoparticles poses several challenges, such as detection and characterization and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing concentration and size distribution of the sample. The advantages of this technique are the ability to analyze complex environmental samples, such as the effluent of column experiments including soil components, and the gentle sample treatment. To optimize sample preparation and to get a first idea of the aggregation behavior in soil solutions, sedimentation experiments investigated the effect of ionic strength, sample concentration and addition of a surfactant on particle or aggregate size and temporal dispersion stability. In general, the lower the particle concentration, the more stable the samples. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability is increasing with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates between 100 and 200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.

  19. Intrapopulation variation in stature and body proportions: social status and sex differences in an Italian medieval population (Trino Vercellese, VC).

    PubMed

    Vercellotti, Giuseppe; Stout, Sam D; Boano, Rosa; Sciulli, Paul W

    2011-06-01

    The phenotypic expression of adult body size and shape results from synergistic interactions between hereditary factors and environmental conditions experienced during growth. Variation in body size and shape occurs even in genetically relatively homogeneous groups, due to different occurrence, duration, and timing of growth insults. Understanding the causes and patterns of intrapopulation variation can provide meaningful information on early life conditions in living and past populations. This study assesses the pattern of biological variation in body size and shape attributable to sex and social status in a medieval Italian population. The sample includes 52 (20 female, 32 male) adult individuals from the medieval population of Trino Vercellese, Italy. Differences in element size and overall body size (skeletal height and body mass) were assessed through Monte Carlo methods, while univariate non-parametric tests and Principal Component Analysis (PCA) were employed to examine segmental and overall body proportions. Discriminant Analysis was employed to determine the predictive value of individual skeletal elements for social status in the population. Our results highlight a distinct pattern in body size and shape variation in relation to status and sex. Male subsamples exhibit significant postcranial variation in body size, while female subsamples express smaller, nonsignificant differences. The analysis of segmental proportions highlighted differences in trunk/lower limb proportions between different status samples, and PCA indicated that in terms of purely morphological variation high status males were distinct from all other groups. The pattern observed likely resulted from a combination of biological factors and cultural practices. Copyright © 2011 Wiley-Liss, Inc.

  20. Dental size variation in the Atapuerca-SH Middle Pleistocene hominids.

    PubMed

    Bermúdez de Castro, J M; Sarmiento, S; Cunha, E; Rosas, A; Bastir, M

    2001-09-01

    The Middle Pleistocene Atapuerca-Sima de los Huesos (SH) site in Spain has yielded the largest sample of fossil hominids so far found from a single site and belonging to the same biological population. The SH dental sample includes a total of 452 permanent and deciduous teeth, representing a minimum of 27 individuals. We present a study of the dental size variation in these hominids, based on the analysis of the mandibular permanent dentition: lateral incisors, n=29; canines, n=27; third premolars, n=30; fourth premolars, n=34; first molars, n=38; second molars, n=38. We have obtained the buccolingual diameter and the crown area (measured on occlusal photographs) of these teeth, and used the bootstrap method to assess the amount of variation in the SH sample compared with the variation of a modern human sample from the Museu Antropologico of the Universidade of Coimbra (Portugal). The SH hominids have, in general terms, a dental size variation higher than that of the modern human sample. The analysis is especially conclusive for the canines. Furthermore, we have estimated the degree of sexual dimorphism of the SH sample by obtaining male and female dental subsamples by means of sexing the large sample of SH mandibular specimens. We obtained the index of sexual dimorphism (ISD=male mean/female mean) and the values were compared with those obtained from the sexed modern human sample from Coimbra, and with data found in the literature concerning several recent human populations. In all tooth classes the ISD of the SH hominids was higher than that of modern humans, but the differences were generally modest, except for the canines, thus suggesting that canine size sexual dimorphism in Homo heidelbergensis was probably greater than that of modern humans. Since the approach of sexing fossil specimens has some obvious limitations, these results should be assessed with caution. Additional data from SH and other European Middle Pleistocene sites would be necessary to test this hypothesis. Copyright 2001 Academic Press.
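
    One plausible form of the bootstrap comparison described above (the abstract does not detail the exact resampling scheme) is a percentile interval for the difference in coefficients of variation between the fossil and modern samples:

      import numpy as np

      def bootstrap_cv_diff(a, b, n_boot=10_000, seed=42):
          # 95% percentile interval for CV(a) - CV(b), where a and b are
          # 1-D arrays of tooth measurements; a positive interval suggests
          # greater relative variation in sample `a`.
          rng = np.random.default_rng(seed)
          diffs = np.empty(n_boot)
          for i in range(n_boot):
              ra = rng.choice(a, size=len(a), replace=True)
              rb = rng.choice(b, size=len(b), replace=True)
              diffs[i] = ra.std(ddof=1) / ra.mean() - rb.std(ddof=1) / rb.mean()
          return np.percentile(diffs, [2.5, 97.5])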

  1. Characterization of winemaking yeast by cell number-size distribution analysis through flow field-flow fractionation with multi-wavelength turbidimetric detection.

    PubMed

    Zattoni, Andrea; Melucci, Dora; Reschiglian, Pierluigi; Sanz, Ramsés; Puignou, Lluís; Galceran, Maria Teresa

    2004-10-29

    Yeasts are widely used in several areas of the food industry, e.g. baking, beer brewing, and wine production. Interest in new analytical methods for quality control and characterization of yeast cells is thus increasing. The biophysical properties of yeast cells, among which is cell size, are related to the capability of yeast cells to produce primary and secondary metabolites during the fermentation process. Biophysical properties of winemaking yeast strains can be screened by field-flow fractionation (FFF). In this work we present the use of flow FFF (FlFFF) with turbidimetric multi-wavelength detection for the number-size distribution analysis of different commercial winemaking yeast varieties. The use of a diode-array detector allows the recently developed method for number-size (or mass-size) analysis in flow-assisted separation techniques to be applied to dispersed samples such as yeast cells. Results for six commercial winemaking yeast strains are compared with data obtained by a standard method for cell sizing (Coulter counter). The method proposed here gives, at short analysis time, accurate information on the number of cells of a given size and on the total number of cells.

  2. Direct on-strip analysis of size- and time-resolved aerosol impactor samples using laser induced fluorescence spectra excited at 263 and 351 nm.

    PubMed

    Wang, Chuji; Pan, Yong-Le; James, Deryck; Wetmore, Alan E; Redding, Brandon

    2014-04-11

    We report a novel atmospheric aerosol characterization technique, in which dual-wavelength UV laser-induced fluorescence (LIF) spectrometry marries an eight-stage rotating drum impactor (RDI), namely UV-LIF-RDI, to achieve size- and time-resolved analysis of aerosol particles on-strip. The UV-LIF-RDI technique measured LIF spectra via direct laser beam illumination onto the particles that were impacted on a RDI strip with a spatial resolution of 1.2 mm, equivalent to an averaged time resolution in the aerosol sampling of 3.6 h. Excited by a 263 nm or 351 nm laser, more than 2000 LIF spectra within a 3-week aerosol collection time period were obtained from the eight individual RDI strips that collected particles in eight different sizes ranging from 0.09 to 10 μm in Djibouti. Based on the known fluorescence database from atmospheric aerosols in the US, the LIF spectra obtained from the Djibouti aerosol samples were found to be dominated by fluorescence clusters 2, 5, and 8 (peaked at 330, 370, and 475 nm) when excited at 263 nm and by fluorescence clusters 1, 2, 5, and 6 (peaked at 390 and 460 nm) when excited at 351 nm. Size- and time-dependent variations of the fluorescence spectra revealed some size and time evolution behavior of organic and biological aerosols from the atmosphere in Djibouti. Moreover, this analytical technique could locate the possible sources and chemical compositions contributing to these fluorescence clusters. Advantages, limitations, and future developments of this new aerosol analysis technique are also discussed. Published by Elsevier B.V.

  3. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar to monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled for nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use, in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator, with associated uncertainty. Results suggest that the required sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and by 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
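
    To make the design-effect reasoning behind such sample-size requirements concrete, here is a minimal sketch using the standard cluster-sampling inflation formula; it is not the paper's Bayesian model, and the prevalence, margin, intra-class correlation, and cluster size below are illustrative assumptions only.

        import math

        def required_sample_size(p, margin, icc, cluster_size, z=1.96):
            """Sample size to estimate a prevalence p within +/- margin under
            cluster sampling, inflated by the design effect 1 + (m - 1) * ICC."""
            n_srs = (z ** 2) * p * (1 - p) / margin ** 2   # simple random sampling
            deff = 1 + (cluster_size - 1) * icc            # equal-sized clusters assumed
            return math.ceil(n_srs * deff)

        # e.g. 5% parasite prevalence, +/-1% margin, ICC = 0.1, 20 children per cluster
        print(required_sample_size(0.05, 0.01, 0.10, 20))  # -> 5292

    Note that if the margin is specified relative to the prevalence (margin = r*p), the required size grows as prevalence falls, which is consistent with the paper's observation that surveys need more children as malaria declines.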

  4. Automated measurement of diatom size

    USGS Publications Warehouse

    Spaulding, Sarah A.; Jewson, David H.; Bixby, Rebecca J.; Nelson, Harry; McKnight, Diane M.

    2012-01-01

    Size analysis of diatom populations has not been widely considered, but it is a potentially powerful tool for understanding diatom life histories, population dynamics, and phylogenetic relationships. However, measuring cell dimensions on a light microscope is a time-consuming process. An alternative technique has been developed using digital flow cytometry on a FlowCAM® (Fluid Imaging Technologies) to capture hundreds, or even thousands, of images of a chosen taxon from a single sample in a matter of minutes. Up to 30 morphological measures may be quantified through post-processing of the high-resolution images. We evaluated FlowCAM size measurements, comparing them against measurements from a light microscope. We found good agreement for apical cell length in species with elongated, straight valves, including small Achnanthidium minutissimum (11-21 µm) and large Didymosphenia geminata (87-137 µm) forms. However, a taxon with curved cells, Hannaea baicalensis (37-96 µm), showed differences of ~4 µm between the two methods. Discrepancies appear to be influenced by the choice of Feret or geodesic measurement for asymmetric cells. We describe the operating conditions necessary for analysis of size distributions and present suggestions for optimal instrument conditions for size analysis of diatom samples using the FlowCAM. The increased speed of data acquisition through use of imaging flow cytometers like the FlowCAM is an essential step for advancing studies of diatom populations.

  5. Procedures for analysis of debris relative to Space Shuttle systems

    NASA Technical Reports Server (NTRS)

    Kim, Hae Soo; Cummings, Virginia J.

    1993-01-01

    Debris samples collected from various Space Shuttle systems have been submitted to the Microchemical Analysis Branch. This investigation was initiated to develop optimal techniques for the analysis of debris. Optical microscopy provides information about the morphology and size of crystallites, particle sizes, amorphous phases, glass phases, and poorly crystallized materials. Scanning electron microscopy with energy dispersive spectrometry is utilized for information on surface morphology and qualitative elemental content of debris. Analytical electron microscopy with wavelength dispersive spectrometry provides information on the quantitative elemental content of debris.

  6. Morphological and chemical analysis of bone substitutes by scanning electron microscopy and microanalysis by spectroscopy of dispersion energy.

    PubMed

    da Cruz, Gabriela Alessandra; de Toledo, Sérgio; Sallum, Enilson Antonio; de Lima, Antonio Fernando Martorelli

    2007-01-01

    This study evaluated the morphology and chemical composition of the following bone substitutes: cancellous and cortical organic bovine bone with macro- and microparticle sizes ranging from 1.0 to 2.0 mm and 0.25 to 1.0 mm, respectively; inorganic bovine bone with particle size ranging from 0.25 to 1.0 mm; hydroxyapatite with particle size ranging from 0.75 to 1.0 mm; and demineralized freeze-dried bone allograft with particle size ranging from 0.25 to 0.5 mm. The samples were sputter-coated with gold in an ion coater; the morphology was observed and particle size measured under vacuum by scanning electron microscopy (SEM). The chemical composition was evaluated by energy-dispersive spectroscopy (EDS) microanalysis using uncoated samples. SEM analysis provided visual evidence that all examined materials have irregular shapes and particle sizes larger than those reported by the manufacturer. EDS microanalysis detected the presence of sodium, calcium and phosphorus, which are usual elements of bone tissue. However, mineral elements were detected in all analyzed particles of organic bovine bone except for the macro cancellous organic bovine bone. These results suggest that the examined organic bovine bone cannot be considered a pure organic material.

  7. Assessment of the influence of field size on maize gene flow using SSR analysis.

    PubMed

    Palaudelmàs, M; Melé, E; Monfort, A; Serra, J; Salvia, J; Messeguer, J

    2012-06-01

    One of the factors that may influence the rate of cross-fertilization is the relative size of the pollen donor and receptor fields. We designed a spatial distribution with four varieties of genetically-modified (GM) yellow maize to generate different sized fields while maintaining a constant distance to neighbouring fields of conventional white kernel maize. Samples of cross-fertilized, yellow kernels in white cobs were collected from all of the adjacent fields at different distances. A special series of samples was collected at distances of 0, 2, 5, 10, 20, 40, 80 and 120 m following a transect traced in the dominant down-wind direction in order to identify the origin of the pollen through SSR analysis. The size of the receptor fields should be taken into account, especially when they extend in the direction from which the GM pollen flow comes. From the collected data, we then validated a function that takes into account the gene flow found at the field border and that is very useful for estimating the percentage of GM material at any point in the field. It also serves to predict the total GM content of the field due to cross-fertilization. Using SSR analysis to identify the origin of pollen showed that while changes in the size of the donor field clearly influence the percentage of GMO detected, this effect is moderate. This study demonstrates that doubling the donor field size resulted in an approximate increase of GM content in the receptor field of 7%. This indicates that variations in the size of the donor field have a smaller influence on GM content than variations in the size of the receptor field.

  8. A Systematic Review of Published Respondent-Driven Sampling Surveys Collecting Behavioral and Biologic Data.

    PubMed

    Johnston, Lisa G; Hakim, Avi J; Dittrich, Samantha; Burnett, Janet; Kim, Evelyn; White, Richard G

    2016-08-01

    Reporting key details of respondent-driven sampling (RDS) survey implementation and analysis is essential for assessing the quality of RDS surveys. RDS is both a recruitment and an analytic method and, as such, it is important to adequately describe both aspects in publications. We extracted data from the peer-reviewed literature published through September 2013 that reported collecting biological specimens using RDS. We identified 151 eligible peer-reviewed articles describing 222 surveys conducted in seven regions throughout the world. Most published surveys reported basic implementation information such as survey city, country, year, population sampled, interview method, and final sample size. However, many surveys did not report essential methodological and analytical information for assessing RDS survey quality, including the number of recruitment sites, seeds at start and end, maximum number of waves, and whether data were adjusted for network size. Understanding the quality of data collection and analysis in RDS is useful for effectively planning public health service delivery and funding priorities.

  9. Comparative forensic soil analysis of New Jersey state parks using a combination of simple techniques with multivariate statistics.

    PubMed

    Bonetti, Jennifer; Quarino, Lawrence

    2014-05-01

    This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2, and a loss-on-ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed, resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.

  10. Atomic Force Microscopy Thermally-Assisted Microsampling with Atmospheric Pressure Temperature Ramped Thermal Desorption/Ionization-Mass Spectrometry Analysis

    DOE PAGES

    Hoffmann, William D.; Kertesz, Vilmos; Srijanto, Bernadeta R.; ...

    2017-02-20

    The use of atomic force microscopy controlled nano-thermal analysis probes for reproducible spatially resolved thermally-assisted sampling of micrometer-sized areas (ca. 11 μm × 17 μm wide × 2.4 μm deep) from relatively low number average molecular weight (Mn < 3000) polydisperse thin films of poly(2-vinylpyridine) (P2VP) is presented. Following sampling, the nano-thermal analysis probes were moved up from the surface and the probe temperature ramped to liberate the sampled materials into the gas phase for atmospheric pressure chemical ionization and mass spectrometric analysis. Furthermore, the procedure and mechanism for material pickup, the sampling reproducibility and sampling size are discussed, and the oligomer distribution information available from slow temperature ramps versus ballistic temperature jumps is presented. For the Mn = 970 P2VP, the Mn and polydispersity index determined from the mass spectrometric data were in line with both the label values from the sample supplier and the value calculated from the simple infusion of a solution of polymer into the commercial atmospheric pressure chemical ionization source on this mass spectrometer. With P2VP samples of higher Mn (Mn = 2070 and 2970), intact oligomers were still observed (as high as m/z 2793, corresponding to the 26-mer), but a significant abundance of thermolysis products was also observed. In addition, the capability for confident identification of the individual oligomers by slowly ramping the probe temperature and collecting data-dependent tandem mass spectra was also demonstrated. We also discuss the material-type limits of the current sampling and analysis approach as well as possible improvements in nano-thermal analysis probe design to enable smaller-area sampling and controlled temperature ramps beyond the present upper limit of about 415 °C.

  11. Scale-dependent effect sizes of ecological drivers on biodiversity: why standardised sampling is not enough.

    PubMed

    Chase, Jonathan M; Knight, Tiffany M

    2013-05-01

    There is little consensus about how natural (e.g. productivity, disturbance) and anthropogenic (e.g. invasive species, habitat destruction) ecological drivers influence biodiversity. Here, we show that when sampling is standardised by area (species density) or individuals (rarefied species richness), the measured effect sizes depend critically on the spatial grain and extent of sampling, as well as the size of the species pool. This compromises comparisons of effect sizes within studies using standard statistics, as well as among studies using meta-analysis. To derive an unambiguous effect size, we advocate that comparisons be made on a scale-independent metric, such as Hurlbert's Probability of Interspecific Encounter. Analyses of this metric can be used to disentangle the relative influence of changes in the absolute and relative abundances of individuals, as well as their intraspecific aggregations, in driving differences in biodiversity among communities. This and related approaches are necessary to achieve generality in understanding how biodiversity responds to ecological drivers and will necessitate a change in the way many ecologists collect and analyse their data. © 2013 John Wiley & Sons Ltd/CNRS.
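
    Hurlbert's Probability of Interspecific Encounter (PIE), the scale-independent metric advocated above, is the probability that two individuals drawn at random without replacement belong to different species. A minimal sketch, with made-up abundances rather than data from the study:

        def hurlbert_pie(abundances):
            """Hurlbert's PIE: the chance that two individuals drawn
            without replacement belong to different species."""
            n = sum(abundances)
            p_squared = sum((a / n) ** 2 for a in abundances)
            return (n / (n - 1)) * (1 - p_squared)

        print(hurlbert_pie([50, 30, 20]))  # fairly even community, ~0.63
        print(hurlbert_pie([98, 1, 1]))    # heavily dominated community, ~0.04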

  12. Population size and stopover duration estimation using mark–resight data and Bayesian analysis of a superpopulation model

    USGS Publications Warehouse

    Lyons, James E.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Andres, Brad A.; Buchanan, Joseph B.

    2016-01-01

    We present a novel formulation of a mark–recapture–resight model that allows estimation of population size, stopover duration, and arrival and departure schedules at migration areas. Estimation is based on encounter histories of uniquely marked individuals and relative counts of marked and unmarked animals. We use a Bayesian analysis of a state–space formulation of the Jolly–Seber mark–recapture model, integrated with a binomial model for counts of unmarked animals, to derive estimates of population size and arrival and departure probabilities. We also provide a novel estimator for stopover duration that is derived from the latent state variable representing the interim between arrival and departure in the state–space model. We conduct a simulation study of field sampling protocols to understand the impact of superpopulation size, proportion marked, and number of animals sampled on bias and precision of estimates. Simulation results indicate that relative bias of estimates of the proportion of the population with marks was low for all sampling scenarios and never exceeded 2%. Our approach does not require enumeration of all unmarked animals detected or direct knowledge of the number of marked animals in the population at the time of the study. This provides flexibility and potential application in a variety of sampling situations (e.g., migratory birds, breeding seabirds, sea turtles, fish, pinnipeds, etc.). Application of the methods is demonstrated with data from a study of migratory sandpipers.
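
    The core mark-resight idea, that the marked fraction of the animals counted estimates the marked fraction of the population, can be illustrated with a toy moment-based simulation. This is not the authors' Bayesian state-space model: unlike their approach, the sketch assumes the number of marked animals is known, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_true, n_marked = 5000, 400   # true population size and number of marked birds

        # 20 resight scans; each scan detects any given bird with probability 0.1
        seen = rng.binomial([n_marked, n_true - n_marked], 0.1, size=(20, 2))
        marked_seen, unmarked_seen = seen.sum(axis=0)

        # marked fraction of counts estimates n_marked / N, so N-hat = n_marked / fraction
        frac_marked = marked_seen / (marked_seen + unmarked_seen)
        print(f"N-hat = {n_marked / frac_marked:.0f}")  # close to 5000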

  13. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles

    PubMed Central

    Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2017-01-01

    For drug delivery, characterization of liposomes regarding size, particle number concentration, occurrence of low-sized liposome artefacts, and drug encapsulation is important for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow Field-Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control, as low-sized sample components possibly impact cell viability. PMID:27639623

  14. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution.

    PubMed Central

    Jennions, Michael D; Møller, Anders P

    2002-01-01

    Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published. PMID:11788035

  15. Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry; hide

    2014-01-01

    This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of the M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant reductions in risk and development cost. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.

  16. Equations for hydraulic conductivity estimation from particle size distribution: A dimensional analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ji-Peng; François, Bertrand; Lambert, Pierre

    2017-09-01

    Estimating hydraulic conductivity from particle size distribution (PSD) is an important issue for various engineering problems. Classical models such as the Hazen, Beyer, and Kozeny-Carman models usually regard the grain diameter at 10% passing (d10) as an effective grain size, and the effects of particle size uniformity (in the Beyer model) or porosity (in the Kozeny-Carman model) are sometimes embedded. This technical note applies dimensional analysis (Buckingham's Π theorem) to analyze the relationship between hydraulic conductivity and the PSD. The porosity is regarded as a variable dependent on the grain size distribution in unconsolidated conditions. The analysis indicates that the coefficient of grain size uniformity and a dimensionless group representing the gravity effect, which is proportional to the mean grain volume, are the two main determinative parameters for estimating hydraulic conductivity. Regression analysis is then carried out on a database comprising 431 samples collected from different depositional environments, and new equations are developed for hydraulic conductivity estimation. The new equation, validated on specimens beyond the database, shows improved prediction compared with the classic models.
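
    For reference, the classical Kozeny-Carman estimate that the note compares against can be written in a few lines; this is the textbook model, not the paper's new regression equation, and the grain size and porosity below are illustrative.

        def kozeny_carman(d10_m, porosity, rho=998.0, g=9.81, mu=1.0e-3):
            """Hydraulic conductivity K (m/s) from effective grain size d10 (m)
            and porosity n: K = (rho * g / mu) * n^3 / (1 - n)^2 * d10^2 / 180."""
            n = porosity
            k = (n ** 3 / (1 - n) ** 2) * d10_m ** 2 / 180  # intrinsic permeability, m^2
            return rho * g / mu * k

        # medium sand: d10 = 0.2 mm, porosity 0.35 -> K on the order of 1e-4 m/s
        print(f"K = {kozeny_carman(0.2e-3, 0.35):.2e} m/s")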

  17. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method

    PubMed Central

    Lu, Zhaolin

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using scanning electron microscopy (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, the singulated arrangement and ultrasonic dispersion methods were used to separate the powders larger and smaller than the nominal size of 250 μm, respectively, into individual particles. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable for analysing the particle size and shape distributions of ground biomass materials and resolving the size inconsistencies in sieving analysis. PMID:28298925
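
    A bare-bones version of the threshold-label-measure pipeline described above might look as follows; the file name is hypothetical, scikit-image is one possible library choice, and the paper's cluster-splitting segmentation algorithm is not reproduced here.

        from skimage import filters, io, measure, morphology

        img = io.imread("straw_particles.png", as_gray=True)  # hypothetical scan

        binary = img < filters.threshold_otsu(img)            # particles darker than background
        binary = morphology.remove_small_objects(binary, min_size=50)

        for region in measure.regionprops(measure.label(binary)):
            if region.minor_axis_length == 0:                 # skip degenerate blobs
                continue
            # area as a size measure, axis ratio as a simple elongation (shape) measure
            print(region.label, region.area,
                  region.major_axis_length / region.minor_axis_length)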

  18. Hierarchical Linear Modeling Meta-Analysis of Single-Subject Design Research

    ERIC Educational Resources Information Center

    Gage, Nicholas A.; Lewis, Timothy J.

    2014-01-01

    The identification of evidence-based practices continues to provoke issues of disagreement across multiple fields. One area of contention is the role of single-subject design (SSD) research in providing scientific evidence. The debate about SSD's utility centers on three issues: sample size, effect size, and serial dependence. One potential…

  19. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Raad, Markus; de Rond, Tristan; Rübel, Oliver

    Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. In this paper, we present the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience, yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated by screening metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. These results establish OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.

  1. Temporal variability of coastal Planctomycetes clades at Kabeltonne station, North Sea.

    PubMed

    Pizzetti, Ilaria; Fuchs, Bernhard M; Gerdts, Gunnar; Wichels, Antje; Wiltshire, Karen H; Amann, Rudolf

    2011-07-01

    Members of the bacterial phylum Planctomycetes are reported in marine water samples worldwide, but quantitative information is scarce. Here we investigated the phylogenetic diversity, abundance, and distribution of Planctomycetes in surface waters off the German North Sea island Helgoland during different seasons by 16S rRNA gene analysis and catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH). Generally, Planctomycetes were more abundant in samples collected in summer and autumn than in samples collected in winter and spring. Statistical analysis revealed that Planctomycetes abundance was correlated with the Centrales diatom bloom in spring 2007. The analysis of size-fractionated seawater samples and of macroaggregates showed that ~90% of the Planctomycetes reside in the >3-μm size fraction. Comparative sequence analysis of 184 almost full-length 16S rRNA genes revealed three dominant clades. The clades, named Planctomyces-related group A, uncultured Planctomycetes group B, and Pirellula-related group D, were monitored by CARD-FISH using newly developed oligonucleotide probes. All three clades showed recurrent abundance patterns during two annual sampling campaigns. Uncultured Planctomycetes group B was most abundant in autumn samples, while Planctomyces-related group A was present in high numbers only during late autumn and winter. The levels of Pirellula-related group D were more constant throughout the year, with elevated counts in summer. Our analyses suggest that the seasonal succession of the Planctomycetes is correlated with algal blooms. We hypothesize that the niche partitioning of the different clades might be caused by their algal substrates.

  2. Synthesis and characterization of nanocrystalline Co-Fe-Nb-Ta-B alloy

    NASA Astrophysics Data System (ADS)

    Raanaei, Hossein; Fakhraee, Morteza

    2017-09-01

    In this research work, the structural and magnetic evolution of a Co57Fe13Nb8Ta4B18 alloy during mechanical alloying has been investigated using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy, differential thermal analysis, and vibrating sample magnetometry. It is observed that at 120 h of milling the crystallite size reaches about 7.8 nm. Structural analyses show that solid solution of the initial powder mixture occurs at 160 h milling time. The coercivity rises up to 70 h of milling, followed by a decreasing tendency up to the final stage of the milling process. Thermal analysis of the 160 h milled sample reveals two endothermic peaks. Characterization of the 160 h milled sample after annealing at 427 °C shows crystallite size growth accompanied by an increase in saturation magnetization.
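
    Crystallite sizes of this magnitude are typically obtained from XRD line broadening via the Scherrer equation; the sketch below shows that calculation, with a peak position and width chosen purely for illustration rather than taken from the paper.

        import math

        def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
            """Crystallite size D (nm) from the Scherrer equation
            D = K * lambda / (beta * cos(theta)); Cu K-alpha wavelength by default."""
            theta = math.radians(two_theta_deg / 2)
            beta = math.radians(fwhm_deg)  # peak FWHM, converted to radians
            return k * wavelength_nm / (beta * math.cos(theta))

        # a broad reflection near 2-theta = 45 deg with ~1.1 deg FWHM gives D ~ 7.8 nm
        print(f"D = {scherrer_size(45.0, 1.1):.1f} nm")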

  3. Automatic classification techniques for type of sediment map from multibeam sonar data

    NASA Astrophysics Data System (ADS)

    Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.

    2018-02-01

    Sediment maps can provide important information for various applications such as oil drilling and environmental and pollution studies. A sediment mapping study was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, specifically an R2-Sonic multibeam echosounder. This study aims to determine sediment type from the backscatter and bathymetry data obtained by the multibeam echosounder. Ground truth data were used to verify the classification produced. The ground truth samples were analyzed by particle size analysis (PSA) and dry sieving; different analyses were carried out owing to the different sizes of the sediment samples obtained. Smaller sediments were analyzed using a CILAS particle size analyser, while larger sediments were analyzed by sieving. The multibeam backscatter strength and bathymetry data were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four types of sediment: i) very coarse sand, ii) coarse sand, iii) very coarse silt, and iv) coarse silt. The accuracy was reported as 92.31% overall accuracy with a kappa coefficient of 0.88.
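
    Both accuracy figures reported above are simple functions of a confusion matrix; the sketch below computes them, with class counts that are hypothetical rather than the study's validation data.

        import numpy as np

        def accuracy_and_kappa(confusion):
            """Overall accuracy and Cohen's kappa from a square confusion matrix
            (rows: reference classes, columns: predicted classes)."""
            c = np.asarray(confusion, dtype=float)
            total = c.sum()
            p_observed = np.trace(c) / total
            p_chance = (c.sum(axis=0) * c.sum(axis=1)).sum() / total ** 2
            return p_observed, (p_observed - p_chance) / (1 - p_chance)

        cm = [[10, 1, 0, 0],  # four sediment classes, hypothetical counts
              [1, 9, 1, 0],
              [0, 0, 8, 0],
              [0, 0, 0, 9]]
        print(accuracy_and_kappa(cm))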

  4. Integrative Analysis of Cancer Diagnosis Studies with Composite Penalization

    PubMed Central

    Liu, Jin; Huang, Jian; Ma, Shuangge

    2013-01-01

    Summary In cancer diagnosis studies, high-throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the “large d, small n” feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single datasets can be unsatisfactory. A cost-effective remedy is to conduct integrative analysis of multiple heterogeneous datasets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty. Under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer datasets, show satisfactory performance of the proposed methods. PMID:24578589
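
    The coordinate-descent algorithms mentioned above repeatedly solve a univariate MCP problem. A sketch of that inner "firm thresholding" operator, assuming a standardized design, is shown below; the paper's composite penalties additionally nest ridge, Lasso, or MCP penalties inside the outer MCP, which this sketch does not cover.

        import numpy as np

        def mcp_threshold(z, lam, gamma=3.0):
            """Univariate MCP update for a least-squares coordinate solution z:
            soft-threshold then rescale for |z| <= gamma*lam, identity beyond
            (requires gamma > 1)."""
            z = np.asarray(z, dtype=float)
            soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
            return np.where(np.abs(z) <= gamma * lam,
                            soft / (1.0 - 1.0 / gamma),  # shrunk less than Lasso
                            z)                           # large coefficients left unpenalized

        print(mcp_threshold([0.05, 0.4, 2.0], lam=0.2))  # -> [0.  0.3  2. ]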

  5. Analysis of Genetic Algorithm for Rule-Set Production (GARP) modeling approach for predicting distributions of fleas implicated as vectors of plague, Yersinia pestis, in California.

    PubMed

    Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E

    2006-01-01

    More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component to understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust: a sample size of six fleas or greater did not significantly impact the percentage of the in-state area where the flea was predicted to be found, or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, Matthias; Ovchinnikova, Olga S; Van Berkel, Gary J

    RATIONALE: Laser ablation provides for the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA) ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions, about 10% of laser ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot controlled pipette. The sampling spot size with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 μm x 160 μm) was approximately 50 times smaller in area than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The set-up was successfully applied to the analysis of ink on glass and paper as well as the endogenous components of Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface. CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent resistant materials.

  7. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

    This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and the US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller on sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The anthropometric characteristics of Korean male helicopter pilots were compared with those of Korean male civilians and US Army male personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.
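
    Step (3) of the process above, calculating required sample sizes for selected precision levels, is commonly done with the normal-approximation formula for estimating a mean; the sketch below uses illustrative numbers, not the study's actual precision targets or variances.

        import math

        def n_for_mean(sd, precision, z=1.96):
            """Sample size so that a dimension's mean is estimated within
            +/- precision (same units as sd) at roughly 95% confidence."""
            return math.ceil((z * sd / precision) ** 2)

        # e.g. stature with sd ~ 55 mm and a target precision of +/- 10 mm
        print(n_for_mean(55, 10))  # -> 117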

  8. Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit

    NASA Astrophysics Data System (ADS)

    Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.

    2017-12-01

    Results of the structural and elemental analysis of beryllium samples produced via different technologies, obtained on a Microzond-EGP-10 unit with the PIXE and RBS methods, are presented. The overall chemical composition and the nature of inclusions were determined. The mapping method made it possible to reveal the structural features of the beryllium samples: to distinguish grains of the main substance differing in size and chemical composition, to visualize the interfaces between regions of different composition, and to describe the features of the distribution of impurities in the samples.

  9. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Investigation of element distributions in Luna-16 regolith

    NASA Astrophysics Data System (ADS)

    Kuznetsov, R. A.; Lure, B. G.; Minevich, V. Ia.; Stiuf, V. I.; Pankratov, V. B.

    1981-03-01

    The concentrations of 32 elements in fractions of different grain sizes in the samples of the lunar regolith brought back by Luna-16 are determined by means of neutron activation analysis. Four groups of elements are distinguished on the basis of the variations of their concentration with grain size, and concentration variations of the various elements with sample depth are also noted. Chemical leaching of the samples combined with neutron activation also reveals differences in element concentrations in the water soluble, metallic, sulphide, phosphate, rare mineral and rock phases of the samples. In particular, the rare earth elements are observed to be depleted in the regolith with respect to chondritic values, and to be concentrated in the phase extracted with 14 M HNO3.

  11. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website (http://public.tgen.org/tamu/ofs/), which is meant to serve as a resource for those working with small-sample classification. Contact: e-dougherty@ee.tamu.edu.
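
    The fall-then-rise ("peaking") behavior described above is easy to reproduce on synthetic Gaussian data. The toy below uses linear discriminant analysis via scikit-learn with invented dimensions; it is far simpler than the paper's massively parallel simulation study.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n, d_max = 40, 60                     # small sample, many candidate features
        shift = np.linspace(1.0, 0.0, d_max)  # informative features first, pure noise last

        X = rng.normal(size=(2 * n, d_max))
        y = np.repeat([0, 1], n)
        X[y == 1] += shift                    # shift class-1 means on the early features

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
        for d in (2, 5, 10, 20, 40, 60):      # test error typically falls, then rises
            model = LinearDiscriminantAnalysis().fit(Xtr[:, :d], ytr)
            print(d, round(1 - model.score(Xte[:, :d], yte), 3))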

  12. A statistical analysis of seat belt effectiveness in 1973-1975 model cars involved in towaway crashes. Volume 1

    DOT National Transportation Integrated Search

    1976-09-01

    Standardized injury rates and seat belt effectiveness measures are derived from a probability sample of towaway accidents involving 1973-1975 model cars. The data were collected in five different geographic regions. Weighted sample size available for...

  13. Samples in applied psychology: over a decade of research in review.

    PubMed

    Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S

    2011-09-01

    This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analysis (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than those for working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved

  14. Structural elucidation and magnetic behavior evaluation of Cu-Cr doped BaCo-X hexagonal ferrites

    NASA Astrophysics Data System (ADS)

    Azhar Khan, Muhammad; Hussain, Farhat; Rashid, Muhammad; Mahmood, Asif; Ramay, Shahid M.; Majeed, Abdul

    2018-04-01

    Ba2-xCuxCo2CryFe28-yO46 (x = 0.0, 0.1, 0.2, 0.3, 0.4; y = 0.0, 0.2, 0.4, 0.6, 0.8) X-type hexagonal ferrites were synthesized via a micro-emulsion route. The prepared samples were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), dielectric measurements, and vibrating sample magnetometry (VSM). The structural parameters, i.e. lattice constants (a, c), cell volume (V), X-ray density, bulk density, and crystallite size of all the prepared samples were obtained from XRD analysis. The lattice parameters 'a' and 'c' increase from 5.875 Å to 5.934 Å and from 83.367 Å to 83.990 Å, respectively. The crystallite size of the investigated samples lies in the range of 28-32 nm. The magnetic properties of all samples were measured by vibrating sample magnetometer (VSM) analysis. An increase in coercivity (Hc) was observed with increasing doping content. It was observed that the coercivity (Hc) of all prepared samples is inversely related to the crystallite size, which suggests that all the materials are super-paramagnetic. The dielectric parameters, i.e. dielectric constant, dielectric loss, and tangent loss, were obtained in the frequency range of 1 MHz-3 GHz and followed the Maxwell-Wagner model. Significant variation of the dielectric parameters is observed with increasing frequency. The maximum Q value is obtained at ~2 GHz, which makes these materials attractive for high-frequency multilayer chip inductors.

  15. Conceptual data sampling for breast cancer histology image classification.

    PubMed

    Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir

    2017-10-01

    Data analytics has become increasingly complicated as the amount of data has increased. One technique used to enable data analytics on large datasets is data sampling, in which a portion of the data is selected that preserves the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied to classifying regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as reflected in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Occurrence and Characterization of Steroid Growth Promoters Associated with Particulate Matter Originating from Beef Cattle Feedyards.

    PubMed

    Blackwell, Brett R; Wooten, Kimberly J; Buser, Michael D; Johnson, Bradley J; Cobb, George P; Smith, Philip N

    2015-07-21

    Studies of steroid growth promoters from beef cattle feedyards have previously focused on effluent or surface runoff as the primary route of transport from animal feeding operations. There is potential for steroid transport via fugitive airborne particulate matter (PM) from cattle feedyards; therefore, the objective of this study was to characterize the occurrence and concentration of steroid growth promoters in PM from feedyards. Air sampling was conducted at commercial feedyards (n = 5) across the Southern Great Plains from 2010 to 2012. Total suspended particulates (TSP), PM10, and PM2.5 were collected for particle size analysis and steroid growth promoter analysis. Particle size distributions were generated from TSP samples only, while steroid analysis was conducted on extracts of PM samples using liquid chromatography mass spectrometry. Of seven targeted steroids, 17α-estradiol and estrone were the most commonly detected, identified in over 94% of samples at median concentrations of 20.6 and 10.8 ng/g, respectively. Melengestrol acetate and 17α-trenbolone were detected in 31% and 39% of all PM samples at median concentrations of 1.3 and 1.9 ng/g, respectively. Results demonstrate PM is a viable route of steroid transportation and may be a significant contributor to environmental steroid hormone loading from cattle feedyards.

  17. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
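
    A three-parameter (shifted) log-normal of the kind fitted above can be estimated directly with SciPy, whose location parameter plays the role of the shift; the data below are synthetic stand-ins, not radii from the Cosmic Void Catalog.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        radii = 2.0 + rng.lognormal(mean=1.0, sigma=0.5, size=2000)  # synthetic "void radii"

        s, loc, scale = stats.lognorm.fit(radii)  # shape, shift (loc), scale
        print(f"shape={s:.3f}, shift={loc:.3f}, scale={scale:.3f}")
        print("sample skewness:", stats.skew(radii))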

  18. Temporal change in the size distribution of airborne Radiocesium derived from the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Kaneyasu, Naoki; Ohashi, Hideo; Suzuki, Fumie; Okuda, Tomoaki; Ikemori, Fumikazu; Akata, Naofumi

    2013-04-01

    The accident at the Fukushima Dai-ichi nuclear power plant discharged a large amount of radioactive material into the environment. Forty days after the accident, we started to collect size-segregated aerosol samples at Tsukuba City, Japan, located 170 km south of the plant, using a low-pressure cascade impactor. Sampling continued from April 28 through October 26, 2011, yielding 8 sample sets in total. The radioactivity of 134Cs and 137Cs in the aerosols collected at each stage was determined by gamma-ray spectrometry with a high-sensitivity germanium detector. After the gamma-ray spectrometry, the chemical species in the aerosols were analyzed. The analyses of the first (April 28-May 12) and second (May 12-26) samples showed that the activity size distributions of 134Cs and 137Cs reside mostly in the accumulation-mode size range. These activity size distributions almost overlapped with the mass size distribution of non-sea-salt sulfate aerosol. From these results, we concluded that sulfate was the main transport medium of these radionuclides, and that re-suspended soil particles carrying radionuclides were not the major airborne radioactive substances by the end of May 2011 (Kaneyasu et al., 2012). We further conducted a successive extraction experiment of radiocesium from the aerosol deposits on the aluminum sheet substrate (8th stage of the first aerosol sample, 0.5-0.7 μm in aerodynamic diameter) with water and 0.1 M HCl. In contrast to the relatively insoluble Chernobyl radionuclides, those in the fine-mode aerosols collected at Tsukuba were completely water-soluble (100%). From the third aerosol sample onward, the activity size distributions started to change: the major peak in the accumulation-mode size range seen in the first and second samples became smaller, and an additional peak appeared in the coarse-mode size range. The comparison of the activity size distributions of radiocesium with the mass size distributions of major aerosol components collected by the end of August 2011 (i.e., sample No. 5), and its implications, will be discussed in the presentation. Reference: Kaneyasu et al., Environ. Sci. Technol. 46, 5720-5726 (2012).

  19. Gaps in sampling and limitations to tree biomass estimation: a review of past sampling efforts over the past 50 years

    Treesearch

    Aaron Weiskittel; Jereme Frank; James Westfall; David Walker; Phil Radtke; David Affleck; David Macfarlane

    2015-01-01

    Tree biomass models are widely used but differ due to variation in the quality and quantity of data used in their development. We reviewed over 250 biomass studies and categorized them by species, location, sampled diameter distribution, and sample size. Overall, less than half of the tree species in the Forest Inventory and Analysis database (FIADB) are without a...

  20. Analysis of Crystallographic Structure of a Japanese Sword by the Pulsed Neutron Transmission Method

    NASA Astrophysics Data System (ADS)

    Kino, K.; Ayukawa, N.; Kiyanagi, Y.; Uchida, T.; Uno, S.; Grazzi, F.; Scherillo, A.

    We measured two-dimensional transmission spectra of pulsed neutron beams for a Japanese sword sample. Atomic density, crystallite size, and preferred orientation of crystals were obtained using the RITS code. The position dependence of the atomic density is consistent with the shape of the sample. The crystallite size is very small and shows position dependence, which is explained by the unique structure of Japanese swords. The preferred orientation has strong position dependence. Our study shows the usefulness of the pulsed neutron transmission method for cultural metal artifacts.

  1. Alternatives to Three-Mode Factor Analysis: A Case Study with Data Evaluating Perceived Barriers to Medical School Training.

    ERIC Educational Resources Information Center

    Thomson, William A.; And Others

    While educational researchers frequently collect data from a sample of individuals on a sample of variables, they sometimes collect data involving samples of: (1) subjects; (2) variables; and (3) occasions of measurement. A multistage procedure for analyzing such three-mode data is presented, focusing on effect sizes and graphic confidence…

  2. Information for forest process models: a review of NRS-FIA vegetation measurements

    Treesearch

    Charles D. Canham; William H. McWilliams

    2012-01-01

    The Forest Inventory and Analysis Program of the Northern Research Station (NRS-FIA) has redesigned Phase 3 measurements and increased the sampling intensity following a study to balance costs, utility, and sample size. The sampling scheme consists of estimating canopy-cover percent for six vegetation growth habits on 24-foot-radius subplots in four height classes and as an...

  3. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  4. Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.

    NASA Astrophysics Data System (ADS)

    Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat

    2017-04-01

    Electron backscatter diffraction (EBSD) has significantly extended our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis, as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature, and it is important to assess whether the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus, extending EBSD analysis to larger sample sizes that include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion, and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system, which limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30 mm for the Otago cryo-EBSD facility), achieved by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory, a sample up to 100 mm perpendicular to the tilt axis by 150 mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50 mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm they will start to sublime and the quality of EBSD data will degrade. Large samples need to be relatively thin (5 mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility, samples of up to 40 mm by 40 mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60 mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.

  5. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
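    A hedged sketch contrasting ε-insensitive SVR with an MLP on a deliberately small training set, in the spirit of the comparison above; the features and targets are synthetic stand-ins for well logs and core porosity, not reservoir data.

```python
# Compare SVR (SRM principle) with an MLP (ERM principle) on a small sample.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                      # four synthetic log responses
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.normal(size=200)

X_train, y_train = X[:30], y[:30]                  # deliberately small training set
X_test, y_test = X[30:], y[30:]

svr = make_pipeline(StandardScaler(), SVR(kernel='rbf', epsilon=0.05, C=10.0))
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))

for name, model in [("SVR", svr), ("MLP", mlp)]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.4f}")
```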

  6. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559

  7. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95 % confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.
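    A minimal sketch of the probabilistic sensitivity analysis idea described in this record: each cost input is assigned a distribution built from the collected data, and the total is resampled to yield a mean and interval. The gamma distributions and dollar figures below are illustrative assumptions, not the study's data.

```python
# Monte Carlo propagation of cost-input distributions to a total-cost interval.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Hypothetical per-site cost components (dollars), each drawn from a
# distribution around its observed mean rather than a single imputed value.
staff_cost = rng.gamma(shape=20.0, scale=100.0, size=n_draws)   # ~ $2,000 mean
materials  = rng.gamma(shape=4.0, scale=100.0, size=n_draws)    # ~ $400 mean
travel     = rng.gamma(shape=2.0, scale=90.0, size=n_draws)     # ~ $180 mean

total = staff_cost + materials + travel
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"mean total cost = ${total.mean():,.0f}, 95% interval = (${lo:,.0f}, ${hi:,.0f})")
```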

  8. Mass size distribution of particle-bound water

    NASA Astrophysics Data System (ADS)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl-Fischer method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subjected to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components in the atmospheric aerosol. The application of the method to size-segregated samples revealed that water showed a typical mass size distribution in each of the three environmental situations considered. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, and road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that best characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is coherent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate, crustal species, secondary/combustion species - road dust components).

  9. Effect of Co doping on structural and mechanical properties of CeO2

    NASA Astrophysics Data System (ADS)

    Tiwari, Saurabh; Balasubramanian, Nivedha; Biring, Sajal; Sen, Somaditya

    2018-05-01

    Sol-gel synthesized nanocrystalline Co-doped CeO2 powders [(Ce1-xCoxO2; x=0, 0.03)] were pressed uniaxially into cylindrical discs and sintered at 1500°C for 24 h to measure mechanical properties. The pure phase formation of the undoped and Co-doped samples was confirmed by X-ray diffraction and Raman analysis. Scanning electron microscopy (SEM) was used to observe the microstructure of the sintered samples and to investigate density, porosity, and grain size. The grain size observed for the samples sintered at 1500°C was 5-8 µm. The Vickers indentation method was used to investigate the micro-hardness. For undoped CeO2 the micro-hardness was found to be 6.2 GPa, which decreased with Co doping. The samples were found to follow the indentation size effect (ISE) and to deform elastically rather than plastically. The enhanced ductile nature with Co doping makes CeO2 a more promising material for optoelectronic device applications.

  10. Measurements of cloud condensation nuclei activity and droplet activation kinetics of wet processed regional dust samples and minerals

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Sokolik, I. N.; Nenes, A.

    2011-04-01

    This study reports laboratory measurements of particle size distributions, cloud condensation nuclei (CCN) activity, and droplet activation kinetics of wet generated aerosols from clays, calcite, quartz, and desert soil samples from Northern Africa, East Asia/China, and Northern America. The dependence of critical supersaturation, sc, on particle dry diameter, Ddry, is used to characterize particle-water interactions and assess the ability of Frenkel-Halsey-Hill adsorption activation theory (FHH-AT) and Köhler theory (KT) to describe the CCN activity of the considered samples. Regional dust samples produce unimodal size distributions with particle sizes as small as 40 nm, CCN activation consistent with KT, and exhibit hygroscopicity similar to inorganic salts. Clays and minerals produce a bimodal size distribution; the CCN activity of the smaller mode is consistent with KT, while the larger mode is less hydrophilic, follows activation by FHH-AT, and displays almost identical CCN activity to dry generated dust. Ion Chromatography (IC) analysis performed on regional dust samples indicates a soluble fraction that cannot explain the CCN activity of dry or wet generated dust. A mass balance and hygroscopicity closure suggests that the small amount of ions (of low solubility compounds like calcite) present in the dry dust dissolve in the aqueous suspension during the wet generation process and give rise to the observed small hygroscopic mode. Overall these results identify an artifact that may question the atmospheric relevance of dust CCN activity studies using the wet generation method. Based on a threshold droplet growth analysis, wet generated mineral aerosols display similar activation kinetics compared to ammonium sulfate calibration aerosol. Finally, a unified CCN activity framework that accounts for concurrent effects of solute and adsorption is developed to describe the CCN activity of aged or hygroscopic dusts.

  11. Analysis of environmental microplastics by vibrational microspectroscopy: FTIR, Raman or both?

    PubMed

    Käppler, Andrea; Fischer, Dieter; Oberbeckmann, Sonja; Schernewski, Gerald; Labrenz, Matthias; Eichhorn, Klaus-Jochen; Voit, Brigitte

    2016-11-01

    The contamination of aquatic ecosystems with microplastics has recently been reported through many studies, and negative impacts on the aquatic biota have been described. For the chemical identification of microplastics, mainly Fourier transform infrared (FTIR) and Raman spectroscopy are used. But up to now, a critical comparison and validation of both spectroscopic methods with respect to microplastics analysis is missing. To close this knowledge gap, we investigated environmental samples by both Raman and FTIR spectroscopy. Firstly, particles and fibres >500 μm extracted from beach sediment samples were analysed by Raman and FTIR microspectroscopic single measurements. Our results illustrate that both methods are in principle suitable to identify microplastics from the environment. However, in some cases, especially for coloured particles, a combination of both spectroscopic methods is necessary for a complete and reliable characterisation of the chemical composition. Secondly, a marine sample containing particles <400 μm was investigated by Raman imaging and FTIR transmission imaging. The results were compared regarding number, size and type of detectable microplastics as well as spectra quality, measurement time and handling. We show that FTIR imaging leads to significant underestimation (about 35 %) of microplastics compared to Raman imaging, especially in the size range <20 μm. However, the measurement time of Raman imaging is considerably higher compared to FTIR imaging. In summary, we propose a further size division within the smaller microplastics fraction into 500-50 μm (rapid and reliable analysis by FTIR imaging) and into 50-1 μm (detailed and more time-consuming analysis by Raman imaging). Graphical Abstract Marine microplastic sample (fraction <400 μm) on a silicon filter (middle) with the corresponding Raman and IR images.

  12. Characterization of Extracellular Vesicles by Size-Exclusion High-Performance Liquid Chromatography (HPLC).

    PubMed

    Huang, Tao; He, Jiang

    2017-01-01

    Extracellular vesicles (EVs) have recently attracted substantial attention due to their potential diagnostic and therapeutic relevance. Although a variety of techniques have been used to isolate and analyze EVs, existing methods remain far from satisfactory. Size-exclusion chromatography (SEC), which separates analytes by size, has been widely applied in protein purification and analysis. The purpose of this chapter is to show how size-exclusion high-performance liquid chromatography (HPLC) can be used to characterize EVs, to detect small-sized impurities or contaminants, and thus to assay the purity of EV samples.

  13. Highly sensitive molecular diagnosis of prostate cancer using surplus material washed off from biopsy needles

    PubMed Central

    Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L

    2011-01-01

    Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
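    A hedged sketch of the discriminant-analysis step described above: training a linear discriminant classifier on a small gene panel to separate benign from tumoural samples. The expression matrix, group sizes, and mean shift below are synthetic illustrations, not the study's qRT-PCR data.

```python
# Linear discriminant analysis on a small gene-expression panel.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_benign, n_tumour, n_genes = 40, 44, 6          # a six-gene signature, as above
X = np.vstack([rng.normal(0.0, 1.0, (n_benign, n_genes)),
               rng.normal(1.0, 1.0, (n_tumour, n_genes))])  # shifted means for tumour
y = np.array([0] * n_benign + [1] * n_tumour)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()    # cross-validated accuracy
print(f"cross-validated accuracy: {acc:.1%}")
```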

  14. Methods for Investigating Mercury Speciation, Transport, Methylation, and Bioaccumulation in Watersheds Affected by Historical Mining

    NASA Astrophysics Data System (ADS)

    Alpers, C. N.; Marvin-DiPasquale, M. C.; Fleck, J.; Ackerman, J. T.; Eagles-Smith, C.; Stewart, A. R.; Windham-Myers, L.

    2016-12-01

    Many watersheds in the western U.S. have mercury (Hg) contamination from historical mining of Hg and precious metals (gold and silver), which were concentrated using Hg amalgamation (mid 1800's to early 1900's). Today, specialized sampling and analytical protocols for characterizing Hg and methylmercury (MeHg) in water, sediment, and biota generate high-quality data to inform management of land, water, and biological resources. Collection of vertically and horizontally integrated water samples in flowing streams and use of a Teflon churn splitter or cone splitter ensure that samples and subsamples are representative. Both dissolved and particulate components of Hg species in water are quantified because each responds to different hydrobiogeochemical processes. Suspended particles trapped on pre-combusted (Hg-free) glass- or quartz-fiber filters are analyzed for total mercury (THg), MeHg, and reactive divalent mercury. Filtrates are analyzed for THg and MeHg to approximate the dissolved fraction. The sum of concentrations in particulate and filtrate fractions represents whole water, equivalent to an unfiltered sample. This approach improves upon analysis of filtered and unfiltered samples and computation of particulate concentration by difference; volume filtered is adjusted based on suspended-sediment concentration to minimize particulate non-detects. Information from bed-sediment sampling is enhanced by sieving into multiple size fractions and determining detailed grain-size distribution. Wet sieving ensures particle disaggregation; sieve water is retained and fines are recovered by centrifugation. Speciation analysis by sequential extraction and examination of heavy mineral concentrates by scanning electron microscopy provide additional information regarding Hg mineralogy and geochemistry. Biomagnification of MeHg in food webs is tracked using phytoplankton, zooplankton, aquatic and emergent vegetation, invertebrates, fish, and birds. Analysis of zooplankton in multiple size fractions from multiple depths in reservoirs can provide insight into food-web dynamics. The presentation will highlight application of these methods in several Hg-contaminated watersheds, with emphasis on understanding seasonal variability in designing effective sampling strategies.

  15. HASA: Hypersonic Aerospace Sizing Analysis for the Preliminary Design of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Harloff, Gary J.; Berkowitz, Brian M.

    1988-01-01

    A review of the hypersonic literature indicated that a general weight and sizing analysis was not available for hypersonic orbital, transport, and fighter vehicles. The objective here is to develop such a method for the preliminary design of aerospace vehicles. This report describes the developed methodology and provides examples to illustrate the model, entitled the Hypersonic Aerospace Sizing Analysis (HASA). It can be used to predict the size and weight of hypersonic single-stage and two-stage-to-orbit vehicles and transports, and is also relevant for supersonic transports. HASA is a sizing analysis that determines vehicle length and volume, consistent with body, fuel, structural, and payload weights. The vehicle component weights are obtained from statistical equations for the body, wing, tail, thermal protection system, landing gear, thrust structure, engine, fuel tank, hydraulic system, avionics, electrical system, equipment, payload, and propellant. Sample size and weight predictions are given for the Space Shuttle orbiter and other proposed vehicles, including four hypersonic transports, a Mach 6 fighter, a supersonic transport (SST), a single-stage-to-orbit (SSTO) vehicle, a two-stage Space Shuttle with a booster and an orbiter, and two methane-fueled vehicles.

  16. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    NASA Astrophysics Data System (ADS)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during passive sampling with the depth samplers.
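    The fraction-modern blanks quoted above map onto the stated conventional ages through the standard relation age = -8033 · ln(F14C), with 8033 yr the Libby mean life; a quick check:

```python
# Convert fraction modern (F14C) to conventional radiocarbon age.
import math

for f14c in (0.004, 0.003):
    age = -8033.0 * math.log(f14c)
    print(f"F14C = {f14c}: {age:,.0f} yr BP")
# 0.004 -> ~44,000 yr BP and 0.003 -> ~47,000 yr BP, matching the blanks above.
```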

  17. Lack of association between ectoparasite intensities and rabies virus neutralizing antibody seroprevalence in wild big brown bats (Eptesicus fuscus), Fort Collins, Colorado

    USGS Publications Warehouse

    Pearce, R.D.; O'Shea, T.J.; Shankar, V.; Rupprecht, C.E.

    2007-01-01

    Recently, bat ectoparasites have been demonstrated to harbor pathogens of potential importance to humans. We evaluated antirabies antibody seroprevalence and the presence of ectoparasites in big brown bats (Eptesicus fuscus) sampled in 2002 and 2003 in Colorado to investigate whether an association existed between ectoparasite intensity and exposure to rabies virus (RV). We used logistic regression and Akaike's Information Criterion adjusted for sample size (AICc) in a post-hoc analysis to investigate the relative importance of three ectoparasite species, as well as bat colony size, year sampled, age class, and the colony size × year interaction, on the presence of rabies virus neutralizing antibodies (VNA) in serum of wild E. fuscus. We obtained serum samples and ectoparasite counts from big brown bats simultaneously in 2002 and 2003. Although the presence of two ectoparasites (Steatonyssus occidentalis and Spinturnix bakeri) was important in explaining VNA seroprevalence, their intensities were higher in seronegative bats than in seropositive bats, and the presence of a third ectoparasite (Cimex pilosellus) was inconsequential. Colony size and year sampled were the most important variables in these AICc models. These findings suggest that these ectoparasites do not enhance exposure of big brown bats to RV. © 2007 Mary Ann Liebert, Inc.
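    AICc, used above, adds a small-sample correction to AIC: AICc = AIC + 2k(k+1)/(n-k-1), with AIC = 2k - 2·lnL. A minimal sketch of the arithmetic with illustrative log-likelihoods:

```python
# Small-sample-corrected Akaike Information Criterion (AICc).
def aicc(log_likelihood: float, k: int, n: int) -> float:
    """k = number of estimated parameters, n = sample size."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Example: compare two hypothetical logistic-regression models on n = 100 bats.
print(aicc(log_likelihood=-52.3, k=3, n=100))  # ectoparasite-only model
print(aicc(log_likelihood=-49.8, k=5, n=100))  # + colony size and year
```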

  18. Design and analysis of three-arm trials with negative binomially distributed endpoints.

    PubMed

    Mütze, Tobias; Munk, Axel; Friede, Tim

    2016-02-20

    A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
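    The sample size and power formulas above are derived analytically in the paper (implemented in the R package ThreeArmedTrials); as a rough cross-check, power for such designs can also be estimated by simulation. A hedged sketch, assuming a simple unrestricted Wald statistic on the log rate ratio rather than the authors' restricted-MLE variance estimator, and omitting the placebo arm used to establish assay sensitivity:

```python
# Simulation-based power for a non-inferiority comparison with negative
# binomial endpoints (experimental vs. active control).
import numpy as np

rng = np.random.default_rng(3)

def estimated_power(rate_exp=0.8, rate_ref=0.8, shape=2.0, n_per_arm=120,
                    margin=1.3, n_sim=2000):
    """Power to conclude non-inferiority: rate_exp / rate_ref < margin."""
    rejections = 0
    for _ in range(n_sim):
        # numpy parameterizes NB(n, p) with mean n*(1-p)/p; choose p so the
        # mean equals the desired event rate
        x_exp = rng.negative_binomial(shape, shape / (shape + rate_exp), n_per_arm)
        x_ref = rng.negative_binomial(shape, shape / (shape + rate_ref), n_per_arm)
        m_exp, m_ref = x_exp.mean() + 1e-9, x_ref.mean() + 1e-9
        # delta-method standard error of the log rate ratio
        se = np.sqrt(x_exp.var(ddof=1) / (n_per_arm * m_exp**2)
                     + x_ref.var(ddof=1) / (n_per_arm * m_ref**2))
        z = (np.log(m_exp / m_ref) - np.log(margin)) / se
        rejections += z < -1.96          # one-sided alpha = 0.025
    return rejections / n_sim

print(f"estimated power: {estimated_power():.2f}")
```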

  19. Size variation in Middle Pleistocene humans.

    PubMed

    Arsuaga, J L; Carretero, J M; Lorenzo, C; Gracia, A; Martínez, I; Bermúdez de Castro, J M; Carbonell, E

    1997-08-22

    It has been suggested that European Middle Pleistocene humans, Neandertals, and prehistoric modern humans had a greater sexual dimorphism than modern humans. Analysis of body size variation and cranial capacity variation in the large sample from the Sima de los Huesos site in Spain showed instead that the sexual dimorphism is comparable in Middle Pleistocene and modern populations.

  20. Environmental DNA particle size distribution from Brook Trout (Salvelinus fontinalis)

    Treesearch

    Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Winsor H. Lowe; Michael K. Schwartz

    2015-01-01

    Environmental DNA (eDNA) sampling has become a widespread approach for detecting aquatic animals with high potential for improving conservation biology. However, little research has been done to determine the size of particles targeted by eDNA surveys. In this study, we conduct particle distribution analysis of eDNA from a captive Brook Trout (Salvelinus fontinalis) in...

  1. On the Post Hoc Power in Testing Mean Differences

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Maxwell, Scott

    2005-01-01

    Retrospective or post hoc power analysis is recommended by reviewers and editors of many journals. Little literature has been found that gave a serious study of the post hoc power. When the sample size is large, the observed effect size is a good estimator of the true power. This article studies whether such a power estimator provides valuable…

  2. Effects of plot size on forest-type algorithm accuracy

    Treesearch

    James A. Westfall

    2009-01-01

    The Forest Inventory and Analysis (FIA) program utilizes an algorithm to consistently determine the forest type for forested conditions on sample plots. Forest type is determined from tree size and species information. Thus, the accuracy of results is often dependent on the number of trees present, which is highly correlated with plot area. This research examines the...

  3. Overview of the Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Dillman, Robert; Corliss, James

    2008-01-01

    NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.

  4. A Review of ETS Differential Item Functioning Assessment Procedures: Flagging Rules, Minimum Sample Size Requirements, and Criterion Refinement. Research Report. ETS RR-12-08

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    2012-01-01

    Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…

  5. Development of a magnetic solid-phase extraction coupled with high-performance liquid chromatography method for the analysis of polyaromatic hydrocarbons.

    PubMed

    Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying

    2015-07-01

    A novel magnetic solid phase extraction coupled with high-performance liquid chromatography method was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and extraction time significantly influenced the extraction performance. The increase in the specific surface area, the enlargement of pore size, and the reduction of particle size could enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m(2) /g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limit of detection for phenanthrene and benzo[g,h,i]perylene in the developed analysis method was 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recovery of phenanthrene and benzo[g,h,i]perylene ranged from 89.5-98.6% and 82.9-89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
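    The G2 difference test referred to in the title reduces to a chi-square test on the deviance difference between the nested unidimensional and multidimensional models; a minimal sketch with hypothetical deviances:

```python
# Likelihood ratio (G2) difference test between nested IRT models.
from scipy import stats

def lr_difference_test(deviance_uni, deviance_multi, df_diff):
    """G2 difference between a unidimensional and a multidimensional model;
    df_diff = difference in the number of estimated parameters."""
    g2_diff = deviance_uni - deviance_multi
    p = stats.chi2.sf(g2_diff, df_diff)
    return g2_diff, p

# Hypothetical deviances for illustration only.
g2, p = lr_difference_test(deviance_uni=10523.4, deviance_multi=10511.8, df_diff=2)
print(f"G2 diff = {g2:.1f}, p = {p:.4f}")
```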

  7. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  8. The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods

    NASA Astrophysics Data System (ADS)

    Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad

    2018-04-01

    Morphometrics is the quantitative analysis of the shape and size of specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and related questions. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven independent variables and two dependent variables. Statistical modelling, such as analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA), was used in the analysis. The results showed significant differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.
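    A hedged sketch of the MANOVA step using statsmodels; the populations, sexes, and the two dependent morphometric measurements below are synthetic stand-ins for the study's data set.

```python
# MANOVA on two synthetic morphometric responses with population and sex factors.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(5)
n = 60
df = pd.DataFrame({
    "population": rng.choice(["rocky_mountain", "arctic"], size=n),
    "sex": rng.choice(["male", "female"], size=n),
})
# Two synthetic dependent measurements (e.g., skull length and width),
# with a shift for the arctic population so there is something to detect.
shift = (df["population"] == "arctic").to_numpy() * 0.8
df["dep1"] = rng.normal(10, 1, n) + shift
df["dep2"] = rng.normal(5, 0.5, n) + 0.5 * shift

mv = MANOVA.from_formula("dep1 + dep2 ~ population + sex", data=df)
print(mv.mv_test())   # Wilks' lambda, Pillai's trace, etc. per factor
```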

  9. Towards Monitoring Biodiversity in Amazonian Forests: How Regular Samples Capture Meso-Scale Altitudinal Variation in 25 km2 Plots

    PubMed Central

    Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.

    2014-01-01

    Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity-rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate whether the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long-term (LTER [“PELD” in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km2/3,232,500 ha (1293×25 km2 sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation, and root-mean-square error were used to measure sample representativeness, similarity, and accuracy, respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using Generalized Additive Models and conditional inference trees, respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894
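    A minimal sketch of the representativeness check described above: a regularly spaced subsample of elevation values is compared against the full plot with a two-sample Kolmogorov-Smirnov test. The elevation values are synthetic, not SRTM data.

```python
# Test whether regular subsamples of n = 4..120 represent the full elevation
# distribution of a plot.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
plot_elevations = rng.gamma(shape=5.0, scale=20.0, size=2500)  # stand-in values

for n in (4, 10, 30, 120):
    idx = np.linspace(0, plot_elevations.size - 1, n).astype(int)  # regular spacing
    ks, p = stats.ks_2samp(plot_elevations[idx], plot_elevations)
    print(f"n={n:>3}: KS={ks:.3f}, p={p:.3f}")
```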

  10. PIXE Analysis of Indoor Aerosols

    NASA Astrophysics Data System (ADS)

    Johnson, Christopher; Turley, Colin; Moore, Robert; Battaglia, Maria; Labrake, Scott; Vineyard, Michael

    2011-10-01

    We have performed a proton-induced X-ray emission (PIXE) analysis of aerosol samples collected in academic buildings at Union College to investigate the air quality in these buildings and the effectiveness of their air filtration systems. This is also the commissioning experiment for a new scattering chamber in the Union College Ion-Beam Analysis Laboratory. The aerosol samples were collected on Kapton foils using a nine-stage cascade impactor that separates particles according to their aerodynamic size. The foils were bombarded with beams of 2.2-MeV protons from the Union College 1.1-MV Pelletron Accelerator and the X-ray products were detected with an Amptek silicon drift detector. After subtracting the contribution from the Kapton foils, the X-ray energy spectra of the aerosol samples were analyzed using GUPIX software to determine the elemental concentrations of the samples. We will describe the collection of the aerosol samples, discuss the PIXE analysis, and present the results.

  11. Characterization of minerals in natural and manufactured sand in Cauvery River belt, Tamilnadu, India

    NASA Astrophysics Data System (ADS)

    Gnanasaravanan, S.; Rajkumar, P.

    2013-05-01

    The present study investigates the characterization of minerals in the River Sand (R - Sand) and the Manufactured sand (M-Sand) through FTIR spectroscopic studies. The R - Sand is collected from seven different locations in Cauvery River and M - Sand is collected from eight different manufactures around the Cauvery River belt in Salem, Erode, Tirupur and Namakkal districts of Tamilnadu, India. To extend the effectiveness of the analysis, the samples were subjected to grain size separation to classify the bulk samples into different grain sizes. All the samples were analyzed using FTIR spectrometer. The number of minerals identified with the help of FTIR spectra in overall (bulk) samples of R - Sand is 14 and of M - Sand is 13. The number has been increased while going for grain size separation, i.e., from 14 to 31 for R - Sand and from 13 to 20 for M - Sand. Among all minerals, quartz plays a major role. The relative distribution and the crystallinity nature of quartz have been discussed based on the extinction co-efficient and the crystallinity index values computed. There is no major variation found in M - Sand while going for grain size separation.

  12. Variation of phytoplankton assemblages along the Mozambique coast as revealed by HPLC and microscopy

    NASA Astrophysics Data System (ADS)

    Sá, C.; Leal, M. C.; Silva, A.; Nordez, S.; André, E.; Paula, J.; Brotas, V.

    2013-05-01

    This study is an integrated overview of pigment and microscopic analysis of phytoplankton communities throughout the Mozambican coast. Collected samples revealed notable patterns of phytoplankton occurrence and distribution, with community structure changing between regions and sample depth. Pigment data showed Delagoa Bight, Sofala Bank and Angoche as the most productive regions throughout the sampled area. In general, micro-sized phytoplankton, particularly diatoms, were important contributors to biomass both at surface and sub-surface maximum (SSM) samples, although were almost absent in the northern stations. In contrast, nano- and pico-sized phytoplankton revealed opposing patterns. Picophytoplankton were most abundant at surface, as opposed to nanophytoplankton, which were more abundant at the SSM. Microphytoplankton were associated with cooler southern water masses, while picophytoplankton were related to warmer northern water masses. Nanophytoplankton were found to increase their contribution to biomass with increasing SSM. Microscopy information on the genera and species level revealed the diatoms Chaetoceros spp., Proboscia alata, Pseudo-nitzschia spp., Cylindrotheca closterium and Hemiaulus haukii as the most abundant taxa of the micro-sized phytoplankton. Discosphaera tubifera and Emiliania huxleyi were the most abundant coccolithophores, nano-sized phytoplankton.

  13. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
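    A minimal sketch of the bandwidth comparison on a bimodal sample, one of the harder shapes noted above. scipy ships Scott's and Silverman's rules; the Sheather-Jones plug-in is not in scipy and would need a package such as KDEpy.

```python
# Kernel density estimation with two rule-of-thumb bandwidth selectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# A bimodal sample, a shape where rule-of-thumb bandwidths tend to oversmooth.
data = np.concatenate([rng.normal(-2, 0.5, 250), rng.normal(2, 1.0, 250)])

grid = np.linspace(-5, 6, 400)
for bw in ("scott", "silverman"):
    kde = stats.gaussian_kde(data, bw_method=bw)
    dens = kde(grid)
    print(f"{bw}: peak density = {dens.max():.3f} at x = {grid[dens.argmax()]:.2f}")
```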

  14. XRD analysis of undoped and Fe doped TiO{sub 2} nanoparticles by Williamson Hall method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharti, Bandna; Barman, P. B.; Kumar, Rajesh, E-mail: rajesh.kumar@juit.ac.in

    2015-08-28

    Undoped and Fe-doped titanium dioxide (TiO2) nanoparticles were synthesized by the sol-gel method at room temperature. The synthesized samples were annealed at 500°C. For structural analysis, the prepared samples were characterized by X-ray diffraction (XRD). The crystallite sizes of the TiO2 and Fe-doped TiO2 nanoparticles were calculated by Scherrer's formula and found to be 15 nm and 11 nm, respectively. A reduction in the crystallite size of TiO2 with Fe doping was observed. The anatase phase of the Fe-doped TiO2 nanoparticles was also confirmed by X-ray diffraction. Using the Williamson-Hall method, lattice strain and crystallite size were also calculated. The Williamson-Hall plot indicates the presence of compressive strain for TiO2 and tensile strain for Fe-TiO2 nanoparticles annealed at 500°C.
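    A hedged sketch of the two size estimates used above: Scherrer's formula applied per reflection, and a Williamson-Hall fit that separates size and strain contributions to peak broadening. The peak positions and widths below are illustrative, not the paper's data.

```python
# Scherrer and Williamson-Hall crystallite-size analysis from XRD peak widths.
import numpy as np

wavelength = 0.15406      # Cu K-alpha, nm
K = 0.9                   # Scherrer constant

two_theta = np.radians(np.array([25.3, 37.8, 48.0, 55.1]))   # anatase reflections
beta = np.radians(np.array([0.55, 0.60, 0.70, 0.75]))        # FWHM, instrument-corrected

theta = two_theta / 2
# Scherrer: D = K * lambda / (beta * cos(theta)), per reflection
print("Scherrer sizes (nm):", K * wavelength / (beta * np.cos(theta)))

# Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)
slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
print(f"W-H size D = {K * wavelength / intercept:.1f} nm, strain = {slope:.2e}")
```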

  15. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples consisting of quaternary bottom sediments were collected from the shallow marine zone off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain size distributions and statistical relationships. Basic statistical parameters (mean, standard deviation, skewness, and kurtosis) were calculated and used to differentiate the depositional environment of the sediments and to assess whether the deposits derived from a beach or river environment. The sediments varied in sorting, ranging from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift, and wave dynamics, which controlled the sediment distribution pattern in various ways.
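    A minimal sketch of the moment-based grain-size statistics named above (mean, sorting, skewness, kurtosis) computed on the phi scale; the size classes and weight percentages are illustrative, and the weight-expansion step is a simplification of full moment formulas.

```python
# Moment statistics of a grain-size distribution on the phi scale.
import numpy as np
from scipy import stats

phi = np.array([-1, 0, 1, 2, 3, 4])        # grain size class midpoints, phi units
weight = np.array([5, 12, 30, 28, 18, 7])  # weight percent retained per class

sample = np.repeat(phi, weight)            # expand classes by weight for moments
print(f"mean = {sample.mean():.2f} phi")
print(f"sorting (std dev) = {sample.std(ddof=1):.2f} phi")
print(f"skewness = {stats.skew(sample):.2f}")
print(f"kurtosis = {stats.kurtosis(sample, fisher=False):.2f}")
```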

  16. Identifying airborne metal particles sources near an optoelectronic and semiconductor industrial park

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Wen; Chen, Wei-Yea; Chang, Cheng-Nan; Chuang, Yen-Hsun; Lin, Yu-Hao

    2016-06-01

    The recently developed Central Taiwan Science Park (CTSP) in central Taiwan is home to an optoelectronic and semiconductor industrial cluster. Exploring the elemental compositions and size distributions of airborne particles emitted from the CTSP would therefore help to prevent pollution. This study analyzed size-fractionated metal-rich particle samples collected in upwind and downwind areas of the CTSP during January and October 2013 using a micro-orifice uniform deposit impactor (MOUDI). Correlation analysis, hierarchical cluster analysis, and particle mass-size distribution analysis were performed to identify the sources of metal-rich particles near the CTSP. Analyses of the elemental compositions and size distributions of particles emitted from the CTSP revealed that it emits some metals (V, As, In, Ga, Cd and Cu) in ultrafine particles (< 1 μm). Statistical analysis combined with particle mass-size distribution analysis can provide useful source identification information. In airborne particles of 0.32 μm, Ga could serve as a useful pollution index for optoelectronic and semiconductor emissions in the CTSP. Meanwhile, the As/Ga concentration ratio at a particle size of 0.32 μm demonstrates that humans near the CTSP would potentially be exposed to GaAs ultrafine particles. That is, metals such as Ga and As, and other metals that are not regulated in Taiwan, are potentially harmful to human health.

  17. [An investigation of the statistical power of the effect size in randomized controlled trials for the treatment of patients with type 2 diabetes mellitus using Chinese medicine].

    PubMed

    Ma, Li-Xin; Liu, Jian-Ping

    2012-01-01

    To investigate whether the power of the effect size was based on adequate sample sizes in randomized controlled trials (RCTs) for the treatment of patients with type 2 diabetes mellitus (T2DM) using Chinese medicine. The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms like "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. Trials were limited to an intervention course ≥3 months in order to identify the information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included: 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported with sample sizes > 150 in 9% and 12% of the RCTs, respectively. For the outcome HbA1c, only 10% of the RCTs had more than 80% power; for FPG, 23% of the RCTs had more than 80% power. (3) Among the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported with sample sizes > 150 in 31% and 36% of the RCTs, respectively. For HbA1c, only 36% of the RCTs had more than 80% power; for FPG, only 27% of the studies had more than 80% power. The sample sizes used for statistical analysis were distressingly low, and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.
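    The power assessment above rests on standard sample size calculation equations; a hedged sketch of the same kind of check using statsmodels' t-test power solver, with an illustrative standardized effect size rather than the review's trial-specific values:

```python
# Power achieved at given per-arm sample sizes, and the n needed for 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_arm in (30, 75, 150):
    power = analysis.power(effect_size=0.3, nobs1=n_per_arm, alpha=0.05, ratio=1.0)
    print(f"n per arm = {n_per_arm:>3}: power = {power:.2f}")

# Or solve for the sample size needed to reach 80% power:
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05, ratio=1.0)
print(f"n per arm for 80% power: {n_needed:.0f}")
```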

  18. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems exist when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For hypothesis tests that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the table might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated approach would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or multivariate analysis of variance.
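    A minimal sketch of the classification error matrix arithmetic described above: overall accuracy from the diagonal, and per-class commission and omission errors from the row and column sums. The matrix values are illustrative.

```python
# Accuracy statistics from a classification error (confusion) matrix.
import numpy as np

# rows = interpretation (map), columns = verification (reference)
error_matrix = np.array([[50,  3,  2],
                         [ 5, 40,  5],
                         [ 2,  4, 39]])

total = error_matrix.sum()
correct = np.trace(error_matrix)          # diagonal = correct classifications
print(f"overall accuracy = {correct / total:.1%}")

commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)  # per map class
omission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)    # per reference class
print("commission errors:", np.round(commission, 3))
print("omission errors:  ", np.round(omission, 3))
```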

  19. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles.

    PubMed

    Urey, Carlos; Weiss, Victor U; Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2016-11-20

    For drug delivery, characterization of liposomes regarding size, particle number concentration, occurrence of low-sized liposome artefacts, and drug encapsulation is important for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow Field-Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, these methods could not detect the bimodal size distribution owing to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization, as it could analyze both vesicle size and size distribution. Finally, nES GEMMA results were correlated with cell viability experiments to demonstrate the importance of liposome batch-to-batch control, as low-sized sample components can possibly impact cell viability. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background Sex differences in pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments of alcoholism, is not as effective in women as in men. This study combined two randomized placebo-controlled clinical trials that used similar methodologies and personnel, amalgamating the data to evaluate sex effects in a reasonably sized sample. Methods 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or chi-square tests for categorical variables. All initial outcome analyses were conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results The effect size of naltrexone over placebo was similar in men and women for the following outcome variables: % days abstinent (PDA), d=0.36; % heavy drinking days (PHDD), d=0.36; and total standard drinks (TSD), d=0.36. Only in men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). For a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) the naltrexone effect size was greater in men than in women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the reported sex differences in naltrexone response may have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological difference in response. PMID:18336635
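
    For reference, the effect size metric used above is Cohen's d with a pooled standard deviation; a minimal sketch, with hypothetical outcome data rather than the trial data, is:

```python
# Cohen's d for two independent groups using the pooled standard deviation.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
naltrexone = rng.normal(70, 20, size=100)  # hypothetical % days abstinent
placebo = rng.normal(62, 20, size=100)
print(f"d = {cohens_d(naltrexone, placebo):.2f}")
```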

  1. Textural and Mineralogical Analysis of Volcanic Rocks by µ-XRF Mapping.

    PubMed

    Germinario, Luigi; Cossio, Roberto; Maritan, Lara; Borghi, Alessandro; Mazzoli, Claudio

    2016-06-01

    In this study, µ-XRF was applied as a novel surface technique for quick acquisition of elemental X-ray maps of rocks, image analysis of which provides quantitative information on texture and rock-forming minerals. Bench-top µ-XRF is cost-effective, fast, and non-destructive, can be applied to both large (up to a few tens of cm) and fragile samples, and yields major and trace element analysis with good sensitivity. Here, X-ray mapping was performed with a resolution of 103.5 µm and a spot size of 30 µm over sample areas of about 5×4 cm of Euganean trachyte, a volcanic porphyritic rock from the Euganean Hills (NE Italy) traditionally used in cultural heritage. The relative abundance of phenocrysts and groundmass, as well as the size and shape of the various mineral phases, were obtained from image analysis of the elemental maps. The quantified petrographic features allowed identification of the various extraction sites, providing an objective method for archaeometric provenance studies exploiting µ-XRF imaging.

  2. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  3. Frequency Dependent Susceptibility Analysis of Magnetic Carriers: Application to Fe-Oxides on Mars surface

    NASA Astrophysics Data System (ADS)

    Adachi, T.; Kletetschka, G.; Mikula, V.

    2007-12-01

    On Mars, the Fe-oxide mineral phases (inferred/detected) are mainly magnetite, pyrrhotite, and hematite. Kletetschka et al. (2005) suggested that grain size dependent magnetization may contribute to the Mars surface magnetic anomaly; the grain size of Fe-oxides may thus play a role in the magnetic signature and anomaly on Mars. According to Kletetschka et al. (2005), the larger the grain size, the larger the magnetization (in this case hematite's TRM). Whether they are magnetite, pyrrhotite, or hematite, nano-phase or superparamagnetic grains may contribute to the absence of remanent magnetization on the surface of Mars. In this contribution we tackle how to resolve grain size variations by frequency dependent susceptibility measured on terrestrial hematite samples such as hemo-ilmenite from Allard Lake, Canada, Mars analogue concretions from Utah and the Czech Republic, and hematite aggregates from Hawaii. The magnetic characteristics of the hematite-goethite mineralogies of the Utah and Czech concretions suggested (Adachi et al., 2007) that they contain superparamagnetic (SP) to single domain (SD) magnetic states. Coercivity spectra analysis from acquisition of isothermal remanent magnetization (IRM) data showed the distinct behaviors of hematite, goethite, and mixed compositions of both. The estimated magnetic states are analyzed with the frequency-dependent susceptibility instrument (500-250,000 Hz). The frequency- and size-dependent susceptibilities for hematite, goethite, and magnetite are calibrated using powdered (commercial) samples of known size.

  4. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    PubMed

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate the versatility of the systems, real-time analysis and post-processing results for sample count and sample size are presented in both still images and videos of flowing samples.
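
    The abstract does not detail the counting and sizing pipeline, but a minimal sketch of one plausible approach (threshold, label connected components, estimate equivalent diameters) on a synthetic image is:

```python
# Count and size bright objects in an image via connected-component labeling.
import numpy as np
from scipy import ndimage

image = np.zeros((120, 120))
image[20:40, 20:45] = 1.0    # two synthetic "particles"
image[70:85, 60:90] = 1.0

binary = image > 0.5                        # intensity threshold
labels, count = ndimage.label(binary)       # connected components
areas = ndimage.sum(binary, labels, index=range(1, count + 1))  # pixel areas
diameters = 2 * np.sqrt(np.asarray(areas) / np.pi)  # equivalent circular diameter (px)

print(f"count: {count}, diameters (px): {np.round(diameters, 1)}")
```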

  5. Investigation of the annealing temperature effect on structural, morphology, dielectric and magnetic properties of BiFeO3 nanoparticles

    NASA Astrophysics Data System (ADS)

    Ranjbar, M.; Ghazi, M. E.; Izadifard, M.

    2018-06-01

    In this paper we investigate the effect of annealing temperature on the structure, morphology, and dielectric and magnetic properties of sol-gel synthesized multiferroic BiFeO3 nanoparticles. X-ray diffraction (XRD) revealed that all the samples have a rhombohedrally distorted perovskite structure and that the purest BFO phase is obtained for the sample annealed at 800 °C. Field emission scanning electron microscopy (FESEM) revealed that increasing the annealing temperature increases the particle size. A decrease in dielectric constant was also observed with increasing annealing temperature. Vibrating sample magnetometer (VSM) analysis confirmed that samples annealed at 500-700 °C, with particle sizes below the BFO spiral spin structure length, have well-saturated M-H curves and show ferromagnetic behavior.

  6. Comparative tests of ectoparasite species richness in seabirds

    PubMed Central

    Hughes, Joseph; Page, Roderic DM

    2007-01-01

    Background The diversity of parasites attacking a host varies substantially among different host species. Understanding the factors that explain these patterns of parasite diversity is critical to identifying the ecological principles underlying biodiversity. Seabirds (Charadriiformes, Pelecaniformes and Procellariiformes) and their ectoparasitic lice (Insecta: Phthiraptera) are ideal model groups in which to study correlates of parasite species richness. We evaluated the relative importance of morphological (body size, body weight, wingspan, bill length), life-history (longevity, clutch size), ecological (population size, geographical range) and behavioural (diving versus non-diving) variables as predictors of louse diversity on 413 seabird host species. Diversity was measured at the level of louse suborder, genus, and species, and uneven sampling of hosts was controlled for using literature citations as a proxy for sampling effort. Results The only variable consistently correlated with louse diversity was host population size and, to a lesser extent, geographical range. Other variables such as clutch size, longevity, and morphological and behavioural variables including body mass showed inconsistent patterns dependent on the method of analysis. Conclusion The comparative analysis presented herein is (to our knowledge) the first to test correlates of parasite species richness in seabirds. We believe that the comparative data and phylogeny provide a valuable framework for testing future evolutionary hypotheses relating to the diversity and distribution of parasites on seabirds. PMID:18005412

  7. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Astrophysics Data System (ADS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-05-01

    Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. We report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) that exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously; one has been analyzed by INAA only, and one by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

  8. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Technical Reports Server (NTRS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-01-01

    Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. We report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) that exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously; one has been analyzed by INAA only, and one by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

  9. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate approach to repeated measures' test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods with this minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Light scattering and transmission measurement using digital imaging for online analysis of constituents in milk

    NASA Astrophysics Data System (ADS)

    Jain, Pranay; Sarma, Sanjay E.

    2015-05-01

    Milk is an emulsion of fat globules and casein micelles dispersed in an aqueous medium with dissolved lactose, whey proteins and minerals. Quantification of constituents in milk is important at various stages of the dairy supply chain for proper process control and quality assurance. In field-level applications, spectrophotometric analysis is an economical option due to the low cost of silicon photodetectors, which are sensitive to UV/Vis radiation at wavelengths between 300 and 1100 nm. Both absorption and scattering occur as incident UV/Vis radiation interacts with the dissolved and dispersed constituents in milk. These effects can in turn be used to characterize the chemical and physical composition of a milk sample. However, in order to simplify analysis, most existing instruments require dilution of samples to avoid the effects of multiple scattering. The sample preparation steps are usually expensive, prone to human error, and unsuitable for field-level and online analysis. This paper introduces a novel digital-imaging-based method for online spectrophotometric measurements on raw milk without any sample preparation. Multiple LEDs of different emission spectra are used as discrete light sources and a digital CMOS camera is used as an image sensor. The extinction characteristic of samples is derived from the captured images. The dependence of multiple scattering on the power of incident radiation is exploited to quantify scattering. The method has been validated with experiments for response to varying fat concentrations and fat globule sizes. Despite the presence of multiple scattering, the method is able to unequivocally quantify the extinction of incident radiation and relate it to the fat concentrations and globule sizes of samples.
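
    A minimal sketch of the image-based extinction measurement, assuming extinction is computed as an optical density from mean pixel intensities of a blank reference and the sample (all values hypothetical):

```python
# Extinction (optical density) from mean image intensities: A = -log10(I / I0).
import numpy as np

reference_image = np.full((50, 50), 220.0)  # blank reference, mean pixel value
sample_image = np.full((50, 50), 35.0)      # raw milk, strongly scattering

I0 = reference_image.mean()
I = sample_image.mean()
extinction = -np.log10(I / I0)

print(f"extinction: {extinction:.2f}")
```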

  11. Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.

    PubMed

    Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Maria Fernandez-Pujals, Ana; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Scotland, Generation; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M

    2018-01-10

    Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex-stratified or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlations with other traits, variance explained by MDD polygenic scores, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts - Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD, and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype, and Bipolar Disorder). Whilst stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results in other MDD classifications suggest that phenotypic stratification using recurrence or sex is weakly justified at currently available sample sizes. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.
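
    The per-variant combination step behind such a genome-wide meta-analysis is typically a fixed-effect inverse-variance weighting of cohort-level estimates; a hedged sketch with hypothetical betas and standard errors for two cohorts is:

```python
# Fixed-effect inverse-variance meta-analysis of one variant across two cohorts.
import numpy as np

betas = np.array([0.041, 0.035])   # hypothetical per-cohort effect estimates
ses = np.array([0.012, 0.007])     # hypothetical standard errors

weights = 1.0 / ses**2
beta_meta = np.sum(weights * betas) / np.sum(weights)
se_meta = np.sqrt(1.0 / np.sum(weights))

print(f"meta beta = {beta_meta:.3f}, se = {se_meta:.3f}, z = {beta_meta / se_meta:.2f}")
```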

  12. Raman microscopy of size-segregated aerosol particles, collected at the Sonnblick Observatory in Austria

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Kasper-Giebl, Anneliese; Kistler, Magdalena; Matzl, Julia; Schauer, Gerhard; Hitzenberger, Regina; Lohninger, Johann; Lendl, Bernhard

    2014-05-01

    Size-classified aerosol samples were collected using low-pressure impactors in July 2013 at the high alpine background site Sonnblick. The Sonnblick Observatory is located in the Austrian Alps, at the summit of Sonnblick (3100 m asl). Sampling was performed in parallel on the platform of the Observatory and after the aerosol inlet. The inlet is constructed as a whole-air inlet, operated at an overall sampling flow of 137 lpm and heated to 30 °C. Size cuts of the eight-stage low-pressure impactors ranged from 0.1 to 12.8 µm aerodynamic diameter. Alumina foils were used as sample substrates for the impactor stages. In addition to the size-classified aerosol sampling, overall aerosol mass (Sharp Monitor 5030, Thermo Scientific) and number concentrations (TSI CPC 3022a; TCC-3, Klotz) were determined. A Horiba LabRam 800HR Raman microscope was used for vibrational mapping of an area of about 100 µm x 100 µm of the alumina foils at a resolution of about 0.5 µm. The Raman microscope is equipped with a laser with an excitation wavelength of 532 nm and a grating with 300 grooves/mm. The optical images and the related chemical images were combined, and a chemometric investigation of the combined images was performed using the software package Imagelab (Epina Software Labs). Based on the well-known environment, a basic assignment of Raman signals of single particles is possible with sufficient certainty. Main aerosol constituents such as sulfates, black carbon, and mineral particles could be identified. First results of the chemical imaging of size-segregated aerosol collected at the Sonnblick Observatory will be discussed with respect to standardized long-term measurements at the sampling station. Further, the advantages and disadvantages of chemical imaging with subsequent chemometric investigation of the single images will be discussed and compared to established methods of aerosol analysis. The chemometric analysis of the dataset is focused on the mixing and variation of single compounds at the different stages of the impactors.

  13. Thermal conductivity of nanocrystalline silicon: importance of grain size and frequency-dependent mean free paths.

    PubMed

    Wang, Zhaojie; Alaniz, Joseph E; Jang, Wanyoung; Garay, Javier E; Dames, Chris

    2011-06-08

    The thermal conductivity reduction due to grain boundary scattering is widely interpreted using a scattering length assumed equal to the grain size and independent of the phonon frequency (gray). To assess these assumptions and decouple the contributions of porosity and grain size, five samples of undoped nanocrystalline silicon were measured, with average grain sizes ranging from 550 to 64 nm and porosities from 17% to less than 1%, at temperatures from 310 to 16 K. The samples were prepared using current-activated, pressure-assisted densification (CAPAD). At low temperature the thermal conductivities of all samples show a T^2 dependence, which cannot be explained by any traditional gray model. The measurements are explained over the entire temperature range by a new frequency-dependent model in which the mean free path for grain boundary scattering is inversely proportional to the phonon frequency, which is shown to be consistent with asymptotic analysis of atomistic simulations from the literature. In all cases the recommended boundary scattering length is smaller than the average grain size. These results should prove useful for the integration of nanocrystalline materials in devices such as advanced thermoelectrics.

  14. Analysis of the Army’s Installation Support Modules with the Private Sector’s Open Information Systems

    DTIC Science & Technology

    1993-04-09

    according to Bryman, it is possible to use a small sample and still maintain a high degree of validity as long as the sampling size represents a ...

  15. Monitoring of bioaerosol inhalation risks in different environments using a six-stage Andersen sampler and the PCR-DGGE method.

    PubMed

    Xu, Zhenqiang; Yao, Maosheng

    2013-05-01

    Increasing evidence shows that inhalation of indoor bioaerosols causes numerous adverse health effects and diseases. However, the bioaerosol size distribution, composition, and concentration level, representing different inhalation risks, can vary across living environments. The six-stage Andersen sampler is designed to simulate sampling of the different regions of the human lung. Here, the sampler was used to investigate bioaerosol exposure in six different environments (student dorm, hospital, laboratory, hotel room, dining hall, and outdoor environment) in Beijing. During sampling, the Andersen sampler was operated for 30 min for each sample, and three independent experiments were performed for each environment. The air samples collected onto each of the six stages of the sampler were incubated on agar plates directly at 26 °C, and the colony forming units (CFU) were manually counted and statistically corrected. In addition, the developed CFUs were washed off the agar plates and subjected to polymerase chain reaction (PCR)-denaturing gradient gel electrophoresis (DGGE) for diversity analysis. Results revealed that for most environments investigated, culturable bacterial aerosol concentrations were higher than those of culturable fungal aerosols. The culturable bacterial and fungal aerosol fractions, concentrations, size distributions, and diversity were shown to vary significantly with the sampling environment. PCR-DGGE analysis indicated that different environments had different culturable bacterial aerosol compositions, as revealed by distinct gel band patterns. For most environments tested, larger (>3 μm) culturable bacterial aerosols with a skewed size distribution were shown to prevail, accounting for more than 60%, while for culturable fungal aerosols with a normal size distribution, those 2.1-4.7 μm dominated, accounting for 20-40%. Alternaria, Cladosporium, Chaetomium, and Aspergillus were found to be abundant in most environments studied here. Viable microbial load per unit of particulate matter was also shown to vary significantly with the sampling environment. The results from this study suggest that different environments, even with similar levels of total culturable microbial aerosol concentration, can present different inhalation risks due to different bioaerosol particle size distributions and compositions. This work fills literature gaps regarding bioaerosol size- and composition-based exposure risks in different human dwellings, in contrast to the vast body of work on total bioaerosol levels.

  16. Analysis of hard coal quality for narrow size fraction under 20 mm

    NASA Astrophysics Data System (ADS)

    Niedoba, Tomasz; Pięta, Paulina

    2018-01-01

    The paper presents the results of an analysis of hard coal quality variation in narrow size fractions using taxonomic methods. Raw material samples were collected in selected mines of the Upper Silesian Industrial Region and classified according to the Polish classification as types 31, 34.2 and 35. Each size fraction was then characterized in terms of the following properties: density, ash content, calorific value, volatile matter content, total sulfur content, and analytical moisture. The analysis showed that the best quality over the entire range of tested size fractions belonged to the 34.2 coking coal type. At the same time, in terms of price parameters, high raw material quality characterised the following size fractions: 0-6.3 mm of the type 31 energetic coal and 0-3.15 mm of the type 35 coking coal. The grouping (Ward's method) and agglomeration (k-means method) analyses, of the kind sketched below, showed that size fractions below 10 mm were characterized by higher quality in all the analyzed hard coal types. However, the selected taxonomic methods do not make it possible to identify individual size fractions or hard coal types based on the chosen parameters.
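
    A minimal sketch of the two clustering steps on a standardized parameter table (the data matrix below is random stand-in data, not the measured coal properties):

```python
# Ward hierarchical clustering and k-means on a samples x parameters matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))                  # 30 size fractions x 6 quality parameters
X = (X - X.mean(axis=0)) / X.std(axis=0)      # z-score standardization

ward_labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
_, kmeans_labels = kmeans2(X, 3, seed=1)

print("Ward clusters:   ", ward_labels)
print("k-means clusters:", kmeans_labels)
```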

  17. Preliminary assessment of an economical fugitive road dust sampler for the collection of bulk samples for geochemical analysis.

    PubMed

    Witt, Emitt C; Wronkiewicz, David J; Shi, Honglan

    2013-01-01

    Fugitive road dust collection for chemical analysis and interpretation has been limited by the quantity and representativeness of samples. Traditional methods of fugitive dust collection generally focus on point collections that limit data interpretation to a small area or require the investigator to make gross assumptions about the origin of the sample collected. These collection methods often produce a limited quantity of sample, which may hinder efforts to characterize the samples by multiple geochemical techniques, preserve a reference archive, and provide a spatially integrated characterization of the road dust health hazard. To achieve a "better sampling" for fugitive road dust studies, a cyclonic fugitive dust (CFD) sampler was constructed and tested. Through repeated and identical sample collection routes at two collection heights (50.8 and 88.9 cm above the road surface), the products of the CFD sampler were characterized using particle size and chemical analysis. The average particle size collected by the cyclone was 17.9 μm, whereas particles collected by a secondary filter averaged 0.625 μm. No significant difference was observed between the two sample heights tested or between duplicates collected at the same height; however, greater sample quantity was achieved at 50.8 cm above the road surface than at 88.9 cm. The cyclone effectively removed 94% of the particles >1 μm, which substantially reduced the loading on the secondary filter used to collect the finer particles; therefore, suction is maintained for longer periods of time, allowing an average sample collection rate of about 2 g mi-1. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  18. Is High Resolution Melting Analysis (HRMA) Accurate for Detection of Human Disease-Associated Mutations? A Meta Analysis

    PubMed Central

    Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao

    2011-01-01

    Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95% CI: 97.7–99.3; I2 = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95% CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the technique was influenced by sample size and instrument type but not by sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806
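
    The heterogeneity statistic quoted above (I2) is derived from Cochran's Q; a hedged sketch with hypothetical study-level estimates on the logit scale is:

```python
# Cochran's Q and I^2 from per-study estimates and variances.
import numpy as np

logit_sens = np.array([3.2, 3.7, 3.4, 4.1, 3.6])   # hypothetical logit-sensitivities
variances = np.array([0.20, 0.15, 0.25, 0.30, 0.18])

w = 1.0 / variances
pooled = np.sum(w * logit_sens) / np.sum(w)
Q = np.sum(w * (logit_sens - pooled) ** 2)
df = len(logit_sens) - 1
I2 = max(0.0, (Q - df) / Q) * 100   # percent of variation beyond chance

print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
```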

  19. The albatross plot: A novel graphical tool for presenting results of diversely reported studies in a systematic review

    PubMed Central

    Jones, Hayley E.; Martin, Richard M.; Lewis, Sarah J.; Higgins, Julian P.T.

    2017-01-01

    Abstract Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect, and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from a setting where a meta-analysis was not possible. PMID:28453179
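
    The contour calculation underlying the plot can be sketched as follows: under a simple two-group design with equal arms, a two-sided P value, direction, and total sample size imply an approximate standardized effect. This is an illustrative back-calculation, not the authors' code.

```python
# Approximate standardized effect implied by (p, direction, n) for equal arms.
from math import sqrt
from scipy.stats import norm

def implied_effect(p_two_sided, direction, n_total):
    z = norm.ppf(1 - p_two_sided / 2) * direction
    return 2 * z / sqrt(n_total)   # approx. Cohen's d

print(f"d ~ {implied_effect(0.01, +1, 200):.2f}")  # p=0.01, n=200 -> d ~ 0.36
```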

  20. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    PubMed

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rate, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most species of UK hardwood, tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.

  1. Impedance analysis and dielectric response of anatase TiO2 nanoparticles codoped with Mn and Co ions

    NASA Astrophysics Data System (ADS)

    Kumar, Anand; Kashyap, Manish K.; Sabharwal, Namita; Kumar, Sarvesh; Kumar, Ashok; Kumar, Parmod; Asokan, K.

    2017-11-01

    In order to elucidate the effect of transition metal (TM) doping, the impedance and dielectric responses of Co- and/or Mn-doped TiO2 nanocrystalline powder samples with 3% doping concentration, synthesized via the sol-gel technique, have been analyzed. X-ray diffraction (XRD) analysis confirms the formation of the tetragonal TiO2 anatase phase for all studied samples without any extra impurity phase peaks. The variation in grain size measured by field emission scanning electron microscopy (FESEM) for all the samples is in accordance with the change in crystallite size obtained from XRD. The DC resistivity of pure TiO2 nanoparticles is the highest, while the codoped samples exhibit low resistivity. The temperature-dependent dielectric constant and dielectric loss show a step-like enhancement and relaxation behavior. At room temperature, the dielectric function and dielectric loss decrease rapidly with increasing frequency and become almost constant at higher frequencies. Such a decrease in dielectric loss is suitable for energy storage devices.

  2. Nondestructive ultrasonic characterization of armor grade silicon carbide

    NASA Astrophysics Data System (ADS)

    Portune, Andrew Richard

    Ceramic materials have traditionally been chosen for armor applications for their superior mechanical properties and low densities. At the high strain rates seen during ballistic events, the behavior of these materials relies upon the total volumetric flaw concentration more so than any single anomalous flaw. In this context flaws can be defined as any microstructural feature which degrades the performance of the material, potentially including secondary phases, pores, or unreacted sintering additives. Predicting the performance of armor grade ceramic materials depends on knowledge of the absolute and relative concentration and size distribution of bulk heterogeneities. Ultrasound was chosen as a nondestructive technique for characterizing the microstructure of dense silicon carbide ceramics. Acoustic waves interact elastically with grains and inclusions in large sample volumes, and were well suited to determining concentration and size distribution variations for solid inclusions. Methodology was developed for rapid acquisition and analysis of attenuation coefficient spectra. Measurements were conducted at individual points and over large sample areas using a novel technique called scanning acoustic spectroscopy. Loss spectra were split into absorption- and scattering-dominant frequency regimes to simplify analysis. The primary absorption mechanism in polycrystalline silicon carbide was identified as thermoelastic in nature. Correlations between microstructural conditions and parameters within the absorption equation were established through study of commercial and custom-engineered SiC materials. Nonlinear least squares regression analysis was used to estimate the size distributions of boron carbide and carbon inclusions within commercial SiC materials. This technique was shown to additionally be capable of approximating grain size distributions in engineered SiC materials which did not contain solid inclusions. Comparisons to results from electron microscopy exhibited favorable agreement between predicted and observed distributions. The developed techniques were applied to large sample areas using scanning acoustic spectroscopy to map variations in the size distribution and concentration of grains and solid inclusions within the bulk microstructure. The experiments performed in this thesis form the foundation of a novel characterization technique capable of mapping variations in sample composition which could be extended to a wide range of dense polycrystalline heterogeneous materials.

  3. ELEMENTAL COMPOSITION OF FRESHLY NUCLEATED PARTICLES

    EPA Science Inventory

    The main objective of this work is to develop a method for real-time sampling and analysis of individual airborne nanoparticles in the 5 - 20 nm diameter range. The size range covered by this method is much smaller than existing single particle methods for chemical analysis. S...

  4. EVALUATION OF COMPUTER-CONTROLLED SCANNING ELECTRON MICROSCOPY APPLIED TO AN AMBIENT URBAN AEROSOL SAMPLE

    EPA Science Inventory


    Recent interest in monitoring and speciation of particulate matter has led to increased application of scanning electron microscopy (SEM) coupled with energy-dispersive x-ray analysis (EDX) to individual particle analysis. SEM/EDX provides information on the size, shape, co...

  5. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations, between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition, we find that the percentage of voids with nonzero central density in the data sets is of critical importance. If the fraction of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
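
    Fitting a three-parameter log-normal (shape, shift/location, scale) to a sample of void radii is straightforward with SciPy; the sketch below uses synthetic data, not the catalog itself.

```python
# Three-parameter log-normal fit: scipy's lognorm has shape, loc (shift), scale.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(42)
radii = lognorm.rvs(s=0.5, loc=2.0, scale=10.0, size=2000, random_state=rng)

shape, loc, scale = lognorm.fit(radii)   # loc is the third (shift) parameter
print(f"shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```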

  6. VNIR reflectance spectroscopy of natural carbonate rocks: implication for remote sensing identification of fault damage zones

    NASA Astrophysics Data System (ADS)

    Traforti, Anna; Mari, Giovanna; Carli, Cristian; Demurtas, Matteo; Massironi, Matteo; Di Toro, Giulio

    2017-04-01

    Reflectance spectroscopy in the visible and near-infrared (VNIR) is a common technique used to study the mineral composition of Solar System bodies from remote sensing and in-situ robotic exploration. In the VNIR spectral range, both crystal field and vibrational overtone absorptions can be present, with spectral characteristics (i.e. albedo, slopes, absorption bands with different positions and depths) that vary depending on the composition and texture (e.g. grain size, roughness) of the sensed materials. Characterization of the spectral variability related to rock texture, especially in terms of grain size (i.e., both the size of rock components and the size of particulates), commonly allows a wide range of information to be obtained about the different geological processes modifying planetary surfaces. This work is aimed at characterizing how the grain size reduction associated with fault zone development produces reflectance variations in rock and mineral spectral signatures. To achieve this goal we present VNIR reflectance analysis of a set of fifteen rock samples collected at increasing distances from the fault core of the Vado di Corno fault zone (Campo Imperatore Fault System - Italian Central Apennines). The selected samples had similar contents of calcite and dolomite but different grain size (X-Ray Powder Diffraction, optical and scanning electron microscope analysis). Consequently, differences in the spectral signature of the fault rocks should not be ascribed to mineralogical composition. For each sample, bidirectional reflectance spectra were acquired with a Field-Pro Spectrometer mounted on a goniometer, on crushed rock slabs reduced to grain sizes <800, <200, <63, and <10 μm and on intact fault zone rock slabs. The spectra were acquired on dry samples, at room temperature and normal atmospheric pressure. The source used was a tungsten halogen lamp with an illuminated spot area of ca. 0.5 cm2 and incidence and emission angles of 30˚ and 0˚, respectively. The spectral analysis of the crushed and intact rock slabs in the VNIR spectral range revealed that in both cases, with increasing grain size: (i) the reflectance decreases, while (ii) the VNIR spectrum slopes (calculated between wavelengths of 0.425-0.605 μm and 2.205-2.33 μm, respectively) and (iii) the depth of the main carbonate absorption band (the vibrational absorption band at a wavelength of ~2.3 μm) increase. In conclusion, grain size variations resulting from fault zone evolution (e.g., cumulated slip or development of thick damage zones) produce reflectance variations in rock and mineral spectral signatures. Remote sensing analysis in the VNIR spectral range can be applied to identify the spatial distribution and extent of fault core and damage zone domains for industrial and seismic hazard applications. Moreover, the spectral characterization of carbonate-built rocks can be of great interest for the surface investigation of the inner planets (e.g. Earth and Mars) and outer bodies (e.g. the Galilean icy satellites). On these surfaces, carbonate minerals at different grain sizes are common and usually related to water and carbon distribution, with direct implications for potential life outside Earth (e.g. Mars).

  7. Specific-age group sex estimation of infants through geometric morphometrics analysis of pubis and ischium.

    PubMed

    Estévez Campo, Enrique José; López-Lázaro, Sandra; López-Morago Rodríguez, Claudia; Alemán Aguilera, Inmaculada; Botella López, Miguel Cecilio

    2018-05-01

    Sex determination of unknown individuals is one of the primary goals of Physical and Forensic Anthropology. The adult skeleton can be sexed using both morphological and metric traits on a large number of bones, and the human pelvis is often used as an important element of adult sex determination. However, studies of the pelvic bone in subadult individuals face several limitations due to the absence of sexually dimorphic characteristics. In this study, we analyse the sexual dimorphism of the immature pubis and ischium, attending to their shape (Procrustes residuals) and size (centroid size), using an identified sample of subadult individuals composed of 58 individuals for the pubis and 83 for the ischium, aged between birth and 1 year of life, from the Granada osteological collection of identified infants (Granada, Spain). Geometric morphometric methods and discriminant analysis were applied. The results of intra- and inter-observer error showed good and excellent agreement in the location of the coordinates of landmarks and semilandmarks, respectively. Principal component analysis performed on shape and size variables showed superposition of the two sexes, suggesting a low degree of sexual dimorphism. Canonical variate analysis did not show significant differences between the male and female shapes. As a consequence, discriminant analysis with leave-one-out cross-validation provided low classification accuracy. The results suggested a low degree of sexual dimorphism in the subadult sample, consistent with the poor cross-validated classification accuracy. The inclusion of centroid size as a discriminant variable does not imply a significant improvement in the results of the analysis. The similarities found between the sexes prevent consideration of pubic and ischial morphology as a sex estimator in early stages of development. The authors suggest extending this study by analysing the different trajectories of shape and size in later ontogeny between males and females. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Erosion of an ancient mountain range, the Great Smoky Mountains, North Carolina and Tennessee

    USGS Publications Warehouse

    Matmon, A.; Bierman, P.R.; Larsen, J.; Southworth, S.; Pavich, M.; Finkel, R.; Caffee, M.

    2003-01-01

    Analysis of 10Be and 26Al in bedrock (n=10), colluvium (n=5, including grain size splits), and alluvial sediments (n=59, including grain size splits), coupled with field observations and GIS analysis, suggests that erosion rates in the Great Smoky Mountains are controlled by subsurface bedrock erosion and diffusive slope processes. The results indicate rapid alluvial transport and minimal alluvial storage, and suggest that most of the cosmogenic nuclide inventory in sediments is accumulated while they are eroding from bedrock and traveling down hill slopes. Spatially homogeneous erosion rates of 25-30 mm Ky-1 are calculated throughout the Great Smoky Mountains using measured concentrations of cosmogenic 10Be and 26Al in quartz separated from alluvial sediment. 10Be and 26Al concentrations in sediments collected from headwater tributaries that have no upstream samples (n=18) are consistent with an average erosion rate of 28 ± 8 mm Ky-1, similar to that of the outlet rivers (n=16, 24 ± 6 mm Ky-1), which carry most of the sediment out of the mountain range. Grain-size-specific analysis of 6 alluvial sediment samples shows higher nuclide concentrations in smaller grain sizes than in larger ones. The difference in concentrations arises from the large elevation distribution of the source of the smaller grains compared with the narrow and relatively low source elevation of the large grains. Large sandstone clasts disaggregate into sand-size grains rapidly during weathering and downslope transport; thus, only clasts from the lower parts of slopes reach the streams. 26Al/10Be ratios do not suggest significant burial periods for our samples. However, alluvial samples have lower 26Al/10Be ratios than bedrock and colluvial samples, a trend consistent with a longer integrated cosmic ray exposure history that includes periods of burial during down-slope transport. The results confirm some of the basic ideas embedded in Davis' geographic cycle model, such as the reduction of relief through slope processes, and of Hack's dynamic equilibrium model, such as the similarity of erosion rates across different lithologies. Comparing cosmogenic nuclide data with other measured and calculated erosion rates for the Appalachians, we conclude that rates of erosion, integrated over varying time periods from decades to a hundred million years, are similar, the result of equilibrium between erosion and isostatic uplift in the southern Appalachian Mountains.
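
    The standard steady-state relation behind such estimates is N = P * Lambda / (rho * eps + lambda * Lambda), which can be inverted for the erosion rate eps; the sketch below uses illustrative parameter values, not the paper's measured data.

```python
# Steady-state erosion rate from a cosmogenic 10Be concentration.
import math

P = 5.0                        # surface production rate, atoms g^-1 yr^-1 (assumed)
N = 1.05e5                     # measured concentration, atoms g^-1 (hypothetical)
lam = math.log(2) / 1.387e6    # 10Be decay constant, yr^-1
Lambda = 160.0                 # attenuation mass length, g cm^-2
rho = 2.7                      # rock density, g cm^-3

eps_cm_per_yr = (Lambda / rho) * (P / N - lam)
print(f"erosion rate ~ {eps_cm_per_yr * 1e4:.0f} mm/Ky")   # ~28 mm/Ky
```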

  9. Should particle size analysis data be combined with EPA approved sampling method data in the development of AP-42 emission factors?

    USDA-ARS?s Scientific Manuscript database

    A cotton ginning industry-supported project was initiated in 2008 and completed in 2013 to collect additional data for U.S. Environmental Protection Agency’s (EPA) Compilation of Air Pollution Emission Factors (AP-42) for PM10 and PM2.5. Stack emissions were collected using particle size distributio...

  10. Adjusting for multiple prognostic factors in the analysis of randomised trials

    PubMed Central

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal number of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes, however treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
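
    A hedged sketch of the recommended "strata as random effects" analysis, simplified to a continuous outcome with a random intercept per stratum (the data frame and column names are hypothetical):

```python
# Random-intercept-per-stratum adjustment with statsmodels MixedLM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "stratum": rng.integers(0, 6, n),   # strata formed by prognostic factors
})
stratum_effects = rng.normal(0, 0.5, 6)
df["outcome"] = 0.5 * df["treatment"] + stratum_effects[df["stratum"]] + rng.normal(0, 1, n)

fit = smf.mixedlm("outcome ~ treatment", df, groups=df["stratum"]).fit()
print(fit.params)
```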

  11. Detection of Organic Constituents Including Chloromethylpropene in the Analyses of the ROCKNEST Drift by Sample Analysis at Mars (SAM)

    NASA Technical Reports Server (NTRS)

    Eigenbrode, J. L.; Glavin, D.; Coll, P.; Summons, R. E.; Mahaffy, P.; Archer, D.; Brunner, A.; Conrad, P.; Freissinet, C.; Martin, M.; hide

    2013-01-01

    A key challenge in assessing the habitability of martian environments is the detection of organic matter - a requirement of all life as we know it. The Curiosity rover, which landed on August 6, 2012 in Gale Crater on Mars, includes the Sample Analysis at Mars (SAM) instrument suite, capable of in situ analysis of gaseous organic components thermally evolved from sediment samples collected, sieved, and delivered by the MSL rover. On Sol 94, SAM received its first solid sample: scooped sediment from Rocknest that was sieved to <150 µm particle size. Multiple 10-40 mg portions of the scoop #5 sample were delivered to SAM for analyses. Prior to their introduction, a blank (empty cup) analysis was performed. This blank served (1) to clean the analytical instrument of SAM-internal materials that accumulated in the gas processing system since integration into the rover, and (2) to characterize the background signatures of SAM. Both the blank and the Rocknest samples showed the presence of hydrocarbon components.

  12. The multiple imputation method: a case study involving secondary data analysis.

    PubMed

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate the use of the multiple imputation method to replace missing data, using the example of a secondary data analysis study. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to apply this technique to large datasets. The authors recommend that nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
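
    A minimal sketch of chained-equation imputation in the spirit of this study, using statsmodels' MICE implementation on synthetic wage-style data (the variable names and data are invented, not taken from the 2004 survey):

```python
# Sketch of multiple imputation by chained equations (MICE) on synthetic
# wage-style data; variable names are illustrative, not from the NSSRN.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 1000
intl = rng.integers(0, 2, n)                         # internationally educated?
exper = rng.normal(10, 4, n)                         # years of experience
logwage = 3.0 + 0.05 * exper + 0.08 * intl + rng.normal(0, 0.3, n)
df = pd.DataFrame({"logwage": logwage, "exper": exper, "intl": intl})
df.loc[rng.random(n) < 0.2, "exper"] = np.nan        # inject 20% missingness

imp = mice.MICEData(df)                              # chained-equation engine
fit = mice.MICE("logwage ~ exper + intl", sm.OLS, imp).fit(
    n_burnin=10, n_imputations=5)                    # 5 imputed datasets
print(fit.summary())                                 # Rubin-combined estimates
```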

  13. Nanostructured zirconium phosphate as ion exchanger: Synthesis, size dependent property and analytical application in radiochemical separation.

    PubMed

    Chakraborty, Rajesh; Bhattacharaya, Koustava; Chattopadhyay, Pabitra

    2014-02-01

    Nanostructured zirconium phosphates (ZPs) of different sizes were synthesized using the surfactant Triton X-100 (polyethylene glycol-p-isooctylphenyl ether). The materials were characterized by FTIR and powder X-ray diffraction (XRD). The structural and morphological details of the material were established by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The SEM study was followed by energy dispersive spectroscopic analysis (EDS) for elemental analysis of the sample. The particle sizes were determined by the dynamic light scattering (DLS) method. The ion exchange capacity of these nanomaterials towards different metal ions was measured, and the size-dependent ion exchange property of the materials was investigated thoroughly. The nanomaterial of the smallest size (ca. 21.04 nm) was employed to separate carrier-free (137m)Ba from (137)Cs by a column chromatographic technique using 1.0 M HNO3 as the eluting agent at pH=5. © 2013 Elsevier Ltd. All rights reserved.

  14. Terahertz Spectroscopy for Proximal Soil Sensing: An Approach to Particle Size Analysis

    PubMed Central

    Dworak, Volker; Mahns, Benjamin; Selbeck, Jörn; Weltzien, Cornelia

    2017-01-01

    Spatially resolved soil parameters are some of the most important pieces of information for precision agriculture. These parameters, especially the particle size distribution (texture), are costly to measure by conventional laboratory methods, and thus, in situ assessment has become the focus of a new discipline called proximal soil sensing. Terahertz (THz) radiation is a promising method for nondestructive in situ measurements. The THz frequency range from 258 gigahertz (GHz) to 350 GHz provides a good compromise between soil penetration and the interaction of the electromagnetic waves with soil compounds. In particular, soil physical parameters influence THz measurements. This paper presents investigations of the spectral transmission signals from samples of different particle size fractions relevant for soil characterization. The sample thickness ranged from 5 to 17 mm. The transmission of THz waves was affected by the main mineral particle fractions: sand, silt, and clay. The resulting signal changes systematically for particle sizes larger than half the wavelength. It can be concluded that THz spectroscopic measurements provide information about soil texture and penetrate samples with thicknesses in the cm range. PMID:29048392

  15. Passive vs. Parachute System Architecture for Robotic Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; Henning, Allen B.; Samareh, Jamshid A.

    2016-01-01

    The Multi-Mission Earth Entry Vehicle (MMEEV) is a flexible vehicle concept based on the Mars Sample Return (MSR) EEV design which can be used in the preliminary sample return mission study phase to parametrically investigate any trade space of interest to determine the best entry vehicle design approach for that particular mission concept. In addition to the trade space dimensions often considered (e.g. entry conditions, payload size and mass, vehicle size, etc.), the MMEEV trade space considers whether it might be more beneficial for the vehicle to utilize a parachute system during descent/landing or to be fully passive (i.e. not use a parachute). In order to evaluate this trade space dimension, a simplified parachute system model has been developed based on inputs such as vehicle size/mass, payload size/mass and landing requirements. This model works in conjunction with analytical approximations of a mission trade space dataset provided by the MMEEV System Analysis for Planetary EDL (M-SAPE) tool to help quantify the differences between an active (with parachute) and a passive (no parachute) vehicle concept.

  16. Surface-sediment grain-size distribution and sediment transport in the subaqueous Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stattegger, K.; Nittrouer, C.; Phung, P. V.; Liu, P.; DeMaster, D. J.; Bui, D. V.; Le, A. D.; Nguyen, T. N.

    2016-02-01

    Surface-sediment samples collected in the coastal waters around the Mekong Delta (from the distributary channels to the Ca Mau Peninsula) were analyzed to determine the surface-sediment grain-size distribution and sediment-transport trend in the subaqueous Mekong Delta. The grain-size data set of 238 samples was obtained using the laser instruments Mastersizer 2000 and LS Particle Size Analyzer. Fourteen samples were selected for geochemical analysis (total-organic and carbonate content). These geochemical results were used to assist in interpreting variations of granulometric parameters along the cross-shore transects. Nine transects were examined from the Cung Hau river mouth to the Ca Mau Peninsula, and six thematic maps of the whole study area were made. The research results indicate that: (1) generally, the sediment becomes finer from the delta front downward to the prodelta, and becomes coarser and more poorly sorted again on the adjacent inner shelf due to different sources of sediment; (2) sediment-granulometry parameters vary among the sedimentary sub-environments of the underwater part of the Mekong Delta, with the distance from the sediment source and the hydrodynamic regime controlling each region; (3) the net sediment transport is southwest toward the Ca Mau Peninsula.

  17. Facile Synthesis of Calcium Borate Nanoparticles and the Annealing Effect on Their Structure and Size

    PubMed Central

    Erfani, Maryam; Saion, Elias; Soltani, Nayereh; Hashim, Mansor; Wan Abdullah, Wan Saffiey B.; Navasery, Manizheh

    2012-01-01

    Calcium borate nanoparticles have been synthesized by a thermal treatment method via facile co-precipitation. The effects of annealing temperature and annealing time on the crystal structure, particle size, size distribution, and thermal stability of the nanoparticles were investigated. The formation of the calcium borate compound was characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), and thermogravimetry (TGA). The XRD patterns revealed that the co-precipitated samples annealed at 700 °C for 3 h formed an amorphous structure, and the transformation into a crystalline structure only occurred after 5 h of annealing. It was found that the samples annealed at 900 °C are mostly metaborate (CaB2O4) nanoparticles, while tetraborate (CaB4O7) nanoparticles were only observed at 970 °C, as confirmed by FTIR. The TEM images indicated that the average particle size increases with increasing annealing time and temperature. TGA analysis confirmed the thermal stability of the annealed samples at higher temperatures. PMID:23203073

  18. A simple method for the analysis of particle sizes of forage and total mixed rations.

    PubMed

    Lammers, B P; Buckmaster, D R; Heinrichs, A J

    1996-05-01

    A simple separator was developed to determine the particle sizes of forage and TMR; it allows easy separation of wet forage into three fractions and also allows plotting of the particle size distribution. The device was designed to mimic the laboratory-scale separator for forage particle sizes specified by Standard S424 of the American Society of Agricultural Engineers. A comparison of results using the standard device and the newly developed separator indicated no difference in the ability to predict fractions of particles with maximum lengths of less than 8 and 19 mm. The separator requires a small quantity of sample (1.4 L) and is manually operated. The materials on the screens and bottom pan were weighed to obtain the cumulative percentage of sample that was undersize for the two fractions. The results were then plotted using the Weibull distribution, which proved to be the best fit for the data. Convenience samples of haycrop silage, corn silage, and TMR from farms in the northeastern US were analyzed using the forage and TMR separator, and the ranges of observed values are given.
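
    The Weibull plotting step can be illustrated with a small curve fit of cumulative percent-undersize points to a two-parameter Weibull CDF; the screen sizes and fractions below are invented, not the paper's data:

```python
# Sketch: fitting a two-parameter Weibull CDF to cumulative percent-undersize
# data from a particle separator (numbers below are made up for illustration).
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, scale, shape):
    return 1.0 - np.exp(-(x / scale) ** shape)

sieve_mm = np.array([8.0, 19.0, 40.0])          # screen openings
frac_under = np.array([0.35, 0.70, 0.95])       # cumulative fraction undersize

(scale, shape), _ = curve_fit(weibull_cdf, sieve_mm, frac_under, p0=[15, 1.5])
median = scale * np.log(2) ** (1 / shape)       # x50 from the fitted CDF
print(f"scale = {scale:.1f} mm, shape = {shape:.2f}, x50 = {median:.1f} mm")
```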

  19. Compositional and Microtextural Analysis of Basaltic Feedstock Materials Used for the 2010 ISRU Field Tests, Mauna Kea, Hawaii

    NASA Astrophysics Data System (ADS)

    Marin, N.; Farmer, J. D.; Zacny, K.; Sellar, R. G.; Nunez, J.

    2011-12-01

    This study seeks to understand variations in the composition and texture of basaltic pyroclastic materials used in the 2010 International Lunar Surface Operation-In-Situ Resource Utilization Analogue Test (ILSO-ISRU) held on the slopes of Mauna Kea Volcano, Hawaii (1). The quantity and quality of resources delivered by ISRU depend upon the nature of the materials processed (2). We obtained a one-meter-deep auger cuttings sample of basaltic regolith at the primary site for the feedstock materials being mined for the ISRU field test. The auger sample was subdivided into six ~16 cm depth increments, and each interval was sampled and characterized in the field using the Multispectral Microscopic Imager (MMI; 3) and a portable X-ray diffractometer (Terra, InXitu Instruments, Inc.). Splits from each sampled interval were returned to the lab and analyzed using more definitive methods, including high-resolution powder X-ray diffraction and thermal infrared (TIR) spectroscopy. The mineralogy and microtexture (grain size, sorting, roundness and sphericity) of the auger samples were determined using petrographic point count measurements obtained from grain-mount thin sections. NIH ImageJ (http://rsb.info.nih.gov/ij/) was applied to digital images of thin sections to document changes in particle size with depth. Results from TIR showed a general predominance of volcanic glass, along with plagioclase, olivine, and clinopyroxene. In addition, thin section and XRPD analyses showed a down-core increase in the abundance of hydrated iron oxides (as in situ weathering products). Quantitative point count analyses confirmed the abundance of volcanic glass in the samples, but also revealed olivine and pyroxene to be minor components that decreased in abundance with depth. Furthermore, point count and XRD analyses showed a decrease in magnetite and ilmenite with depth, accompanied by an increase in Fe3+ phases, including hematite and ferrihydrite. ImageJ particle analysis showed that the average grain size decreased down the depth profile. This decrease in average grain size and increase in hydrated iron oxides downhole suggest that the most favorable ISRU feedstock materials occurred in the lower half-meter of the mined section.

  20. Synthesis and characterization of polyvinylimidazole-grafted superparamagnetic iron oxide nanoparticles (Si-PVIm-grafted SPION)

    NASA Astrophysics Data System (ADS)

    Erdemi, H.; Sözeri, H.; Şenel, M.; Baykal, A.

    2012-08-01

    Polyvinylimidazole (PVIm)-grafted superparamagnetic iron oxide nanoparticles (SPION) (Si-PVIm-grafted Fe3O4 NPs) were prepared by grafting a telomere of PVIm onto the SPION. The product was identified as magnetite, with an average crystallite size of 9 ± 2 nm as estimated from X-ray line profile fitting. The particle size was estimated as 10.0 ± 0.5 nm from TEM micrographs, and the mean particle size was found to be 8.4 ± 1.0 nm, which agrees well with the value calculated from the XRD patterns (9 ± 2 nm). Vibrating sample magnetometer (VSM) analysis confirmed the superparamagnetic nature of the nanocomposite. Thermogravimetric analysis showed that the Si-Imi makes up 25% of the Si-PVIm-grafted SPION, which means the inorganic content is about 75%. Detailed electrical and dielectric properties of the product are also presented. The conductivity of the sample increases significantly with temperature, with values in the range of 1.14 × 10^-7 to 1.78 × 10^-4 S cm^-1. Analysis of the real and imaginary parts of the permittivities indicated temperature and frequency dependence, representing interfacial polarization and temperature-assisted reorganization effects.

  1. The Effect of Hypnosis on Anxiety in Patients With Cancer: A Meta-Analysis.

    PubMed

    Chen, Pei-Ying; Liu, Ying-Mei; Chen, Mei-Ling

    2017-06-01

    Anxiety is a common form of psychological distress in patients with cancer. One recognized nonpharmacological intervention to reduce anxiety in various populations is hypnotherapy or hypnosis. However, its effect in reducing anxiety in cancer patients has not been systematically evaluated. This meta-analysis was designed to synthesize the immediate and sustained effects of hypnosis on the anxiety of cancer patients and to identify moderators of these hypnosis effects. Qualified studies, including randomized controlled trials (RCT) and pre-post design studies, were identified by searching seven electronic databases: Scopus, Medline Ovidsp, PubMed, PsycInfo-Ovid, Academic Search Premier, CINAHL Plus with FT-EBSCO, and SDOL. An effect size (Hedges' g) was computed for each study. Random-effects modeling was used to combine effect sizes across studies. All statistical analyses were conducted with Comprehensive Meta-Analysis, version 2 (Biostat, Inc., Englewood, NJ, USA). Our meta-analysis of 20 studies found that hypnosis had a significant immediate effect on anxiety in cancer patients (Hedges' g: 0.70-1.41, p < .01) and the effect was sustained (Hedges' g: 0.61-2.77, p < .01). The adjusted mean effect size (determined by Duval and Tweedie's trim-and-fill method) was 0.46. RCTs had a significantly higher effect size than non-RCT studies. Higher mean effect sizes were also found with pediatric study samples, hematological malignancy, studies on procedure-related stressors, and with mixed-gender samples. Hypnosis delivered by a therapist was significantly more effective than self-hypnosis. Hypnosis can reduce the anxiety of cancer patients, especially pediatric cancer patients who experience procedure-related stress. We recommend that therapist-delivered hypnosis be preferred until more effective self-hypnosis strategies are developed. © 2017 Sigma Theta Tau International.
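
    For background, the two computational steps named in the abstract - Hedges' g for each study and random-effects pooling - look like this in outline (the DerSimonian-Laird estimator is shown as one common choice; all numbers are toy values, not the meta-analysis data):

```python
# Sketch: Hedges' g per study and DerSimonian-Laird random-effects pooling.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    g = (1 - 3 / (4 * (n1 + n2) - 9)) * (m1 - m2) / sp    # bias-corrected d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approximate variance
    return g, var

def random_effects(g, v):
    w = 1 / v
    mu_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - mu_fixed) ** 2)                   # heterogeneity Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)               # between-study variance
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * g) / np.sum(w_re)
    return mu, np.sqrt(1 / np.sum(w_re))

g1, v1 = hedges_g(12.0, 18.0, 8.0, 9.0, 30, 32)           # toy anxiety scores
g2, v2 = hedges_g(10.5, 16.0, 7.5, 8.5, 25, 27)
mu, se = random_effects(np.array([g1, g2]), np.array([v1, v2]))
print(f"pooled g = {mu:.2f} (SE {se:.2f})")
```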

  2. Variations in the Summer Phytoplankton Community Structure in Atlantic sub-Arctic and Arctic Waters

    NASA Astrophysics Data System (ADS)

    Small, A.; Hughes, C.; Bouman, H. A.

    2016-02-01

    Shifts in phytoplankton community structure serve not only as indicators of environmental change but also have implications for food-web interactions and biogeochemical cycles. The community structure of marine phytoplankton in sub-Arctic and Arctic waters was examined using 159 samples collected in the summer of 2013 along a latitudinal gradient spanning 61.1 to 83.1 degrees N along the east coast of Greenland. Accessory pigment concentrations were used to infer information about the phytoplankton taxa present using CHEMTAX (CHEMical TAXonomy), an iterative MATLAB subroutine. The main algal classes found within the study region were diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes and prasinophytes. Diatoms were present at nearly all stations and depths and were large contributors to the total pigment biomass for both ice and open-water stations. Deeper samples were mainly dominated by diatoms and haptophytes. Surface sample communities were characterised by mixed assemblages, including dinoflagellates and chlorophytes, although diatoms and haptophytes still comprised a significant portion of the pigment biomass. The differences in community structure were investigated in relation to the environmental conditions through multivariate statistical analysis (cluster and principal component analyses) in order to understand the factors influencing the spatial distribution of the various algal classes. Diagnostic pigment indices were also used to calculate the concentration of Chl-a attributed to three size classes (picophytoplankton 0.2-2 µm, nanophytoplankton 2-20 µm and microphytoplankton >20 µm). These data were compared to a similar dataset from the same cruise where size-fractionated Chl-a was separated by sequential filtration and quantified by fluorometric analysis. Size-fractionated Chl-a as measured directly by sequential filtration suggested a primarily mixed community across the study region. In contrast, pigment-based analysis suggested a strong dominance of larger cells and indicated the complete absence of picophytoplankton in some samples. These results suggest that diagnostic pigment indices may not be an accurate method of determining size classes in this region.

  3. Prospects and difficulties in TiO₂ nanoparticles analysis in cosmetic and food products using asymmetrical flow field-flow fractionation hyphenated to inductively coupled plasma mass spectrometry.

    PubMed

    López-Heras, Isabel; Madrid, Yolanda; Cámara, Carmen

    2014-06-01

    In this work, we propose an analytical approach based on asymmetrical flow field-flow fractionation coupled to inductively coupled plasma mass spectrometry (AsFlFFF-ICP-MS) for the characterization and quantification of rutile titanium dioxide nanoparticles (TiO2NPs) in cosmetic and food products. AsFlFFF-ICP-MS separation of TiO2NPs was performed using 0.2% (w/v) SDS, 6% (v/v) methanol at pH 8.7 as the carrier solution. Two problems were addressed during TiO2NPs analysis by AsFlFFF-ICP-MS: size distribution determination and element quantification of the NPs. Two approaches were used for size determination: size calibration using polystyrene latex standards of known sizes, and transmission electron microscopy (TEM). A method based on focused sonication for preparing NP dispersions, followed by an on-line external calibration strategy based on AsFlFFF-ICP-MS using rutile TiO2NPs as standards, is presented here for the first time. The developed method suppressed non-specific interactions between the NPs and the membrane, and overcame the possibly erroneous results obtained when quantification is performed using ionic Ti solutions. The applicability of the quantification method was tested on cosmetic products (moisturizing cream). Regarding validation, at the 95% confidence level, no significant differences were detected between the titanium concentrations in the moisturizing cream determined after sample mineralization (3865±139 mg Ti/kg sample), by FIA-ICP-MS analysis prior to NP extraction (3770±24 mg Ti/kg sample), and after using the optimized on-line calibration approach (3699±145 mg Ti/kg sample). Despite the high Ti content found in the studied food products (sugar glass and coffee cream), TiO2NPs were not detected. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, evidently due to the (unconscious) exclusion of small and large nuclei. When a series of cut-off points was tested for prognostic value, the MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
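
    The contrast between the two sampling schemes is easy to reproduce numerically. The sketch below draws synthetic nuclear areas, takes a systematic random sample, and mimics a convenience sampler that unconsciously favours typical-looking nuclei; the data and the convenience rule are both invented for illustration:

```python
# Sketch contrasting 'at convenience' selection with systematic random
# sampling of nuclei from a measurement field (sizes are synthetic).
import numpy as np

rng = np.random.default_rng(7)
areas = rng.lognormal(mean=4.0, sigma=0.5, size=2000)   # all nuclear areas

def systematic_random_sample(x, k):
    step = len(x) // k
    start = rng.integers(0, step)            # random start, fixed interval
    return x[start::step][:k]

srs50 = systematic_random_sample(areas, 50)
# A convenience sampler that avoids extremes might pick nuclei nearest the
# median; this is a hypothetical model of the unconscious bias described.
acs50 = areas[np.abs(areas - np.median(areas)).argsort()[:50]]
print(f"SRS-50  MNA={srs50.mean():.1f}  SDNA={srs50.std(ddof=1):.1f}")
print(f"ACS-50  MNA={acs50.mean():.1f}  SDNA={acs50.std(ddof=1):.1f}")
```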

  5. Tunable microwave absorbing nano-material for X-band applications

    NASA Astrophysics Data System (ADS)

    Sadiq, Imran; Naseem, Shahzad; Ashiq, Muhammad Naeem; Khan, M. A.; Niaz, Shanawer; Rana, M. U.

    2016-03-01

    The effect of rare-earth element substitution in Sr1.96RE0.04Co2Fe27.80Mn0.2O46 (RE=Ce, Gd, Nd, La and Sm) X-type hexagonal ferrites prepared by the sol-gel auto-combustion method was studied. The XRD and FTIR analyses show that the prepared material is single phase. The lattice constants a (Å) and c (Å) vary with the additives. The particle size calculated from the Scherrer formula varies in the range of 54-100 nm for all the samples and was confirmed by TEM analysis. The average grain size measured by SEM analysis lies in the range of 0.672-1.01 μm for all the samples. The Gd-substituted ferrite has the highest coercivity (526.06 G) among all the samples, which could make it a good material for longitudinal recording media. The results also indicate that the Gd-substituted sample has a maximum reflection loss of -25.2 dB at 11.878 GHz and exhibits the best microwave absorption properties among all the substituted samples. Furthermore, the minimum of the reflection loss shifts towards lower and higher frequencies with the substitution of rare-earth elements, which confirms that the microwave absorption properties can be tuned by substituting rare-earth elements into pure ferrites. The peak value of the attenuation constant at higher frequency agrees well with the reflection loss data.
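
    As background, the crystallite size quoted from XRD line broadening follows the Scherrer equation, with K ≈ 0.9 a typical shape factor for roughly equiaxed grains:

        $$ D = \frac{K\lambda}{\beta\cos\theta} $$

    where D is the crystallite size, λ the X-ray wavelength, β the peak full width at half maximum in radians, and θ the Bragg angle.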

  6. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample sizes. The obtained results are based on computational methodologies which involve combining coalescence time-scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for the analysis of a large human mitochondrial DNA dataset. PMID:28170404
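
    As a reference point for the time-scale changes mentioned, recall the constant-size Kingman coalescent: while k lineages remain, the waiting time T_k is exponential with rate C(k,2) (time in units of 2N_e generations), so

        $$ \mathbb{E}[T_k] = \binom{k}{2}^{-1} = \frac{2}{k(k-1)}, \qquad \mathbb{E}[T_{\mathrm{MRCA}}] = \sum_{k=2}^{n}\frac{2}{k(k-1)} = 2\left(1-\frac{1}{n}\right). $$

    A time-varying population size enters through a deterministic rescaling of this clock, which is what makes the integral-transform machinery described in the abstract applicable.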

  7. Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin

    NASA Astrophysics Data System (ADS)

    He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu

    2017-06-01

    This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of a tin sample with a V-shaped groove etched into the free surface are collected by a soft-recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on the fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
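
    The "linear combination of exponential distributions" that such a Poisson-mixture derivation produces can be written generically as follows; the weights w_i and length scales s_i here are placeholders for the model's fitted parameters, not values from the paper:

        $$ f(s) = \sum_{i} \frac{w_i}{s_i}\, e^{-s/s_i}, \qquad \sum_i w_i = 1,\; w_i \ge 0, $$

    so each component is an exponential fragment-size law and the mixture captures heterogeneity in the fragmentation process.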

  8. Nonsyndromic cleft palate: An association study at GWAS candidate loci in a multiethnic sample.

    PubMed

    Ishorst, Nina; Francheschelli, Paola; Böhmer, Anne C; Khan, Mohammad Faisal J; Heilmann-Heimbach, Stefanie; Fricker, Nadine; Little, Julian; Steegers-Theunissen, Regine P M; Peterlin, Borut; Nowak, Stefanie; Martini, Markus; Kruse, Teresa; Dunsche, Anton; Kreusch, Thomas; Gölz, Lina; Aldhorae, Khalid; Halboub, Esam; Reutter, Heiko; Mossey, Peter; Nöthen, Markus M; Rubini, Michele; Ludwig, Kerstin U; Knapp, Michael; Mangold, Elisabeth

    2018-06-01

    Nonsyndromic cleft palate only (nsCPO) is a common and multifactorial form of orofacial clefting. In contrast to successes achieved for the other common form of orofacial clefting, that is, nonsyndromic cleft lip with/without cleft palate (nsCL/P), genome wide association studies (GWAS) of nsCPO have identified only one genome wide significant locus. The aim of the present study was to investigate whether common variants contribute to nsCPO and, if so, to identify novel risk loci. We genotyped 33 SNPs at 27 candidate loci from 2 previously published nsCPO GWAS in an independent multiethnic sample. It included: (i) a family-based sample of European ancestry (n = 212); and (ii) two case/control samples of Central European (n = 94/339) and Arabian ancestry (n = 38/231), respectively. A separate association analysis was performed for each genotyped dataset, and meta-analyses were performed. After association analysis and meta-analyses, none of the 33 SNPs showed genome-wide significance. Two variants showed nominally significant association in the imputed GWAS dataset and exhibited a further decrease in p-value in a European and an overall meta-analysis including imputed GWAS data, respectively (rs395572: P_MetaEU = 3.16 × 10^-4; rs6809420: P_MetaAll = 2.80 × 10^-4). Our findings suggest that there is a limited contribution of common variants to nsCPO. However, the individual effect sizes might be too small for detection of further associations at the present sample sizes. Rare variants may play a more substantial role in nsCPO than in nsCL/P, for which GWAS of smaller sample sizes have identified genome-wide significant loci. Whole-exome/genome sequencing studies of nsCPO are now warranted. © 2018 Wiley Periodicals, Inc.

  9. Water Operations Technical Support Program: Techniques for Evaluating Aquatic Habitats in Rivers, Streams, and Reservoirs. Proceedings of a Workshop Held in Vicksburg, Mississippi on 8-10 August 1989.

    DTIC Science & Technology

    1991-08-01

    [Fragmented OCR excerpt; only partial content is recoverable: the abundance of individuals greater than 20 mm SL and the complexity of size demography indicated a longevity of 2 to 3 years for a substantial portion of the population; the proceedings include sections on data analysis and interpretation and on measurement of size demography, and cite Miller, A. C., and Payne, B. S. 1988, "The Need for Quantitative Sampling to Characterize Size Demography and Density."]

  10. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low; the difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The difference is smaller between samples collected with grab field sampling and analyzed for TSS and the concentration of fines in SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  11. ROC curves in clinical chemistry: uses, misuses, and possible solutions.

    PubMed

    Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H

    2004-07-01

    ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded the phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
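
    One of the fixable problems the review identifies, missing confidence intervals for diagnostic accuracy, can be addressed with a simple percentile bootstrap. A sketch on synthetic data (not data from the reviewed studies):

```python
# Sketch: reporting an ROC area with a bootstrap confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)                       # disease status (toy)
score = y * 0.8 + rng.normal(size=200)           # diagnostic test value (toy)

auc = roc_auc_score(y, score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))        # resample cases with replacement
    if len(np.unique(y[idx])) < 2:               # need both classes present
        continue
    boot.append(roc_auc_score(y[idx], score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```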

  12. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, together with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, along with error matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
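
    The discrete multivariate statistics referred to here are typically error-matrix measures such as overall accuracy and the KHAT (kappa) coefficient. A minimal worked example with an invented 3-class error matrix:

```python
# Sketch: overall accuracy and KHAT (kappa) from an error matrix
# (matrix values are invented for illustration).
import numpy as np

m = np.array([[65,  4,  2],      # rows: map classes, cols: reference classes
              [ 6, 81,  5],
              [ 3,  7, 60]])
n = m.sum()
po = np.trace(m) / n                           # observed agreement
pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall accuracy = {po:.3f}, KHAT = {kappa:.3f}")
```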

  13. 'Mitominis': multiplex PCR analysis of reduced size amplicons for compound sequence analysis of the entire mtDNA control region in highly degraded samples.

    PubMed

    Eichmann, Cordula; Parson, Walther

    2008-09-01

    The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments, HVS-I and HVS-II, of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have been successfully applied to ancient and forensic samples, such as bones and teeth, that showed a high degree of degradation.

  14. Aggregation in organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Meyer, Abigail

    Organic light emitting diode (OLED) technology has great potential as a solid-state lighting source. However, there are inefficiencies in OLED devices that need to be understood. Since these inefficiencies occur on a nanometer scale, there is a need for structural data on this length scale in three dimensions, which has been unattainable until now. The Local Electrode Atom Probe (LEAP), a specific implementation of Atom Probe Tomography (APT), is used in this work to acquire morphology data in three dimensions on a nanometer scale with much better chemical resolution than previously available. Before analyzing LEAP data, simulations were used to investigate how detector efficiency, sample size, and cluster size affect data analysis, which is done using radial distribution functions (RDFs). Data are reconstructed using the LEAP software, which provides mass and position data. Two samples were then analyzed: 3% DCM2 in C60 and 2% DCM2 in Alq3. Analysis of both samples indicated that little to no clustering was present in this system.

  15. Split-plot microarray experiments: issues of design, power and sample size.

    PubMed

    Tsai, Pi-Wen; Lee, Mei-Ling Ting

    2005-01-01

    This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.

  16. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    NASA Astrophysics Data System (ADS)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to measure the effect of a fixed covariate on right-censored data using a parametric survival model. The scale and shape parameters were modified to differentiate the analyses of the parametric regression survival model. Biases, mean biases, and coverage probabilities were used in this analysis. Different sample sizes (50, 100, 150 and 200) were employed to distinguish the impact of the parametric regression model on right-censored data. R statistical software was used to develop the simulation code for right-censored data. The final simulated right-censored model was then compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
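
    A sketch of the simulation loop described - Weibull event times, administrative right censoring, fits at several sample sizes - using the lifelines package. The study used R, so this Python version is only illustrative, and all parameter values are invented:

```python
# Sketch: Weibull event times with fixed right censoring, fitted at
# several sample sizes (shape/scale/censoring values are invented).
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(3)
shape, scale, cens_time = 1.5, 24.0, 30.0       # months, illustrative

for n in (50, 100, 150, 200):
    t_event = scale * rng.weibull(shape, n)     # latent event times
    observed = t_event <= cens_time             # event seen before censoring?
    t = np.minimum(t_event, cens_time)          # right-censored durations
    wf = WeibullFitter().fit(t, observed)
    bias = wf.rho_ - shape                      # lifelines calls the shape rho_
    print(f"n={n:3d}  shape-hat={wf.rho_:.2f}  bias={bias:+.2f}")
```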

  17. Cognitive Behavioral Therapy: A Meta-Analysis of Race and Substance Use Outcomes

    PubMed Central

    Windsor, Liliane Cambraia; Jemal, Alexis; Alessi, Edward

    2015-01-01

    Cognitive behavioral therapy (CBT) is an effective intervention for reducing substance use. However, because CBT trials have included predominantly White samples, caution must be used when generalizing these effects to Blacks and Hispanics. This meta-analysis compared the impact of CBT in reducing substance use between studies with a predominantly non-Hispanic White sample (hereafter NHW studies) and studies with a predominantly Black and/or Hispanic sample (hereafter BH studies). From 322 manuscripts identified in the literature, 17 met criteria for inclusion. Posttest effect sizes between the CBT and comparison groups were similar for substance abuse across NHW and BH studies. However, when pre-posttest effect sizes from groups receiving CBT were compared between NHW and BH studies, CBT’s impact was significantly stronger in NHW studies. T-test comparisons indicated reduced retention/engagement in BH studies, albeit failing to reach statistical significance. Results highlight the need for further research testing CBT’s impact on substance use among Blacks and Hispanics. PMID:25285527

  18. Quantitative x-ray phase-contrast imaging using a single grating of comparable pitch to sample feature size.

    PubMed

    Morgan, Kaye S; Paganin, David M; Siu, Karen K W

    2011-01-01

    The ability to quantitatively retrieve transverse phase maps during imaging by using coherent x rays often requires a precise grating or analyzer-crystal-based setup. Imaging of live animals presents further challenges when these methods require multiple exposures for image reconstruction. We present a simple method of single-exposure, single-grating quantitative phase contrast for a regime in which the grating period is much greater than the effective pixel size. A grating is used to create a high-visibility reference pattern incident on the sample, which is distorted according to the complex refractive index and thickness of the sample. The resolution, along a line parallel to the grating, is not restricted by the grating spacing, and the detector resolution becomes the primary determinant of the spatial resolution. We present a method of analysis that maps the displacement of interrogation windows in order to retrieve a quantitative phase map. Application of this analysis to the imaging of known phantoms shows excellent correspondence.

  19. Effect of Microstructural Interfaces on the Mechanical Response of Crystalline Metallic Materials

    NASA Astrophysics Data System (ADS)

    Aitken, Zachary H.

    Advances in nano-scale mechanical testing have brought about progress in the understanding of physical phenomena in materials and a measure of control in the fabrication of novel materials. In contrast to bulk materials that display size-invariant mechanical properties, sub-micron metallic samples show a critical dependence on sample size. The strength of nano-scale single crystalline metals is well described by a power-law function, sigma ∝ D^(-n), where D is a critical sample size and n is an experimentally fitted positive exponent. This relationship is attributed to source-driven plasticity and demonstrates a strengthening as the decreasing sample size begins to limit the size and number of dislocation sources. A full understanding of this size dependence is complicated by the presence of microstructural features such as interfaces that can compete with the dominant dislocation-based deformation mechanisms. In this thesis, the effects of microstructural features such as grain boundaries and anisotropic crystallinity on nano-scale metals are investigated through uniaxial compression testing. We find that nano-sized Cu covered by a hard coating displays a Bauschinger effect, and the emergence of this behavior can be explained through a simple dislocation-based analytic model. Al nano-pillars containing a single vertically oriented coincident site lattice grain boundary are found to show deformation similar to single-crystalline nano-pillars, with slip traces passing through the grain boundary. With increasing tilt angle of the grain boundary from the pillar axis, we observe a transition from dislocation-dominated deformation to grain boundary sliding. Crystallites are observed to shear along the grain boundary, and molecular dynamics simulations reveal a mechanism of atomic migration that accommodates boundary sliding. We conclude with an analysis of the effects of inherent crystal anisotropy and alloying on the mechanical behavior of the Mg alloy AZ31. Through comparison to pure Mg, we show that the size effect dominates the strength of samples below 10 μm and that differences in the size effect between hexagonal slip systems are due to the inherent crystal anisotropy, suggesting that the fundamental mechanism of the size effect in these slip systems is the same.
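
    The exponent n in the power law sigma ∝ D^(-n) is usually extracted by a log-log linear fit; a minimal sketch with invented pillar-compression data:

```python
# Sketch: extracting the size-effect exponent n in sigma = A * D^(-n)
# by a log-log least-squares fit (all data values are made up).
import numpy as np

d = np.array([0.2, 0.4, 0.8, 1.6, 3.2])        # pillar diameter, um
sigma = np.array([900, 610, 420, 280, 190])    # flow stress, MPa

slope, intercept = np.polyfit(np.log(d), np.log(sigma), 1)
n = -slope                                     # log sigma = log A - n * log D
A = np.exp(intercept)
print(f"n = {n:.2f}, A = {A:.0f} MPa·um^n")
```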

  20. Bed-Sediment Sampling and Analysis for Physical and Chemical Properties of the Lower Mississippi River near Memphis, Tennessee

    USGS Publications Warehouse

    Blanchard, Robert A.; Wagner, Daniel M.; Evans, Dennis A.

    2010-01-01

    In February 2010, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Memphis District, investigated the presence of inorganic elements and organic compounds in bed sediments of the lower Mississippi River. Selected sites were located in the navigation channel near river miles 737, 773, and 790 near Memphis, Tennessee. Bed-sediment samples were collected using a Shipek grab sampler mounted to a boom crane with a motorized winch. Samples then were processed and shipped to the U.S. Geological Survey Sediment Laboratory in Rolla, Missouri, the USGS National Water Quality Laboratory in Denver, Colorado, and to TestAmerica Laboratory, Inc. in West Sacramento, California. Samples were analyzed for grain size, inorganic elements (including mercury), and organic compounds. Chemical results were tabulated and listed with sediment-quality guidelines and presented with the physical property results. All of the bed material samples collected during this investigation yielded concentrations that were less than the Consensus-Based Probable Effect Concentration guidelines. The physical properties were tabulated and listed using a standard U.S. Geological Survey scale of sizes by class for sediment analysis. All of the samples collected during this investigation were composed mostly of sand, with particle sizes ranging from less than 0.125 millimeters to less than 2 millimeters.

  1. A Note on Cluster Effects in Latent Class Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Keller, Bryan

    2011-01-01

    This article examines the effects of clustering in latent class analysis. A comprehensive simulation study is conducted, which begins by specifying a true multilevel latent class model with varying within- and between-cluster sample sizes, varying latent class proportions, and varying intraclass correlations. These models are then estimated under…

  2. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  3. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    PubMed Central

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra

    2015-01-01

    The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818

  4. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    DOE PAGES

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...

    2015-08-19

    We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.

  5. Socioeconomic status, urbanicity and risk behaviors in Mexican youth: an analysis of three cross-sectional surveys

    PubMed Central

    2011-01-01

    Background: The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods: This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results: The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions: The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses--which are also heterogeneous--required to address this situation. PMID:22129110

  6. Characterization of the enhancement effect of Na2CO3 on the sulfur capture capacity of limestones.

    PubMed

    Laursen, Karin; Kern, Arnt A; Grace, John R; Lim, C Jim

    2003-08-15

    It has been known for a long time that certain additives (e.g., NaCl, CaCl2, Na2CO3, Fe2O3) can increase the sulfur dioxide capture capacity of limestones. In a recent study we demonstrated that very small amounts of Na2CO3 can be beneficial for producing sorbents of high sorption capacity. This paper explores what contributes to these significant increases. Mercury porosimetry measurements of calcined limestone samples reveal a change in the pore size from 0.04-0.2 μm in untreated samples to 2-10 μm in samples treated with Na2CO3 - a pore size more favorable for the penetration of sulfur into the particles. The change in pore size facilitates reaction with lime grains throughout the whole particle without rapid plugging of pores, avoiding a premature change from a fast chemical reaction to a slow solid-state diffusion-controlled process, as seen for untreated samples. Calcination in a thermogravimetric reactor showed that Na2CO3 increased the rate of calcination of CaCO3 to CaO, an effect which was slightly larger at 825 °C than at 900 °C. Peak-broadening analysis of powder X-ray diffraction data of the raw, calcined, and sulfated samples revealed an unaffected calcite crystallite size (approximately 125-170 nm) but a significant increase in the crystallite size of lime (from approximately 60-90 nm to approximately 250-300 nm) and a smaller increase for anhydrite (from approximately 125-150 nm to approximately 225-250 nm). The increase in the crystallite and pore sizes of the treated limestones is attributed to an increase in ionic mobility in the crystal lattice due to the formation of vacancies in the crystals when Ca is partly replaced by Na.

  7. Hindlimb muscle architecture in non-human great apes and a comparison of methods for analysing inter-species variation

    PubMed Central

    Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S

    2011-01-01

    By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often composed of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM), to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased, it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area, between the different species. This may reflect the relatively close evolutionary relationships of the non-human great apes, a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000

  7. Probing defects in chemically synthesized ZnO nanostructures by positron annihilation and photoluminescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Chaudhuri, S. K.; Ghosh, Manoranjan; Das, D.; Raychaudhuri, A. K.

    2010-09-01

    The present article describes size-induced changes in the structural arrangement of intrinsic defects present in chemically synthesized ZnO nanoparticles of various sizes. Routine x-ray diffraction and transmission electron microscopy have been performed to determine the shapes and sizes of the nanocrystalline ZnO samples. Detailed studies using positron annihilation spectroscopy reveal the presence of zinc vacancies, whereas analysis of the photoluminescence results points to the signature of charged oxygen vacancies. The size-induced changes in the positron parameters, as well as in the photoluminescence properties, show contrasting or nonmonotonic trends as size varies from 4 to 85 nm. Small spherical particles below a critical size (~23 nm) acquire more positive surface charge due to the higher occupancy of the doubly charged oxygen vacancy, as compared to the bigger nanostructures, where the singly charged oxygen vacancy predominates. This electronic alteration has been seen to trigger yet another interesting phenomenon, described as positron confinement inside the nanoparticles. Finally, based on all the results, a model of the structural arrangement of the intrinsic defects in the present samples is proposed.

  9. The grain size(s) of Black Hills Quartzite deformed in the dislocation creep regime

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Rüdiger

    2017-10-01

    General shear experiments on Black Hills Quartzite (BHQ) deformed in the dislocation creep regimes 1 to 3 have been previously analyzed using the CIP method (Heilbronner and Tullis, 2002, 2006). They are reexamined using the higher spatial and orientational resolution of EBSD. Criteria for coherent segmentations based on c-axis orientation and on full crystallographic orientations are determined. Texture domains of preferred c-axis orientation (Y and B domains) are extracted and analyzed separately. Subdomains are recognized, and their shape and size are related to the kinematic framework and the original grains in the BHQ. Grain size analysis is carried out for all samples, high- and low-strain samples, and separately for a number of texture domains. When comparing the results to the recrystallized quartz piezometer of Stipp and Tullis (2003), it is found that grain sizes are consistently larger for a given flow stress. It is therefore suggested that the recrystallized grain size also depends on texture, grain-scale deformation intensity, and the kinematic framework (of axial vs. general shear experiments).

  10. Using e-mail recruitment and an online questionnaire to establish effect size: A worked example.

    PubMed

    Kirkby, Helen M; Wilson, Sue; Calvert, Melanie; Draper, Heather

    2011-06-09

    Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviations may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size to use in a sample size calculation. Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increased by, based on 90%, 70%, 50% and 30% baseline rates (in a hypothetical study), before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, and 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor. Depending on the baseline recruitment rates presented in the questionnaire, participants wanted the recruitment rate to increase by between 6.9% and 28.9% before they would consider using the intervention. This paper has shown that in situations where effect size estimations cannot be drawn from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results collected from the survey were successfully used in sample size calculations for a PhD research study protocol.
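
    For context, once an expert-elicited effect (baseline versus minimally interesting recruitment rate) is in hand, it feeds a standard two-proportion sample size formula. A hedged sketch with illustrative numbers (the baseline/target pair below is not from the paper):

```python
# Sketch: turning an elicited effect (baseline vs. target recruitment rate)
# into a two-proportion sample size. Numbers are illustrative, not the paper's.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per arm for comparing two independent proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# e.g. baseline recruitment rate 30%, experts want to see 37% before adopting
print(round(n_per_group(0.30, 0.37)))  # -> roughly 710 per arm
```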

  11. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity values for classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
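
    The three-way decision logic described above is the hallmark of Wald's SPRT: accumulate a log-likelihood ratio and compare it with two thresholds derived from the desired error rates. The sketch below uses a generic Bernoulli SPRT with made-up probabilities; the paper's actual likelihoods are built on TDT transmission counts:

```python
# Sketch: a generic Wald SPRT for Bernoulli data with the three-way outcome
# described above (associated / not associated / keep sampling). The paper's
# likelihoods are TDT-based; p0, p1, and the data here are hypothetical.
import math

def sprt(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # accept H1 when LLR exceeds this
    lower = math.log(beta / (1 - alpha))   # accept H0 when LLR falls below this
    llr = 0.0
    for x in observations:                 # x is 1 (transmitted) or 0
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "associated"
        if llr <= lower:
            return "not associated"
    return "keep sampling"                 # evidence still inconclusive

print(sprt([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))
```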

  12. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy

    PubMed Central

    Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate the versatility of the systems, real-time analysis and post-processing results for sample count and sample size are presented, in both still images and videos of flowing samples. PMID:29509786
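
    A count-and-size analysis of the kind reported above can be sketched as thresholding followed by connected-component labelling. This is a generic illustration, not the authors' pipeline; the threshold and micron-per-pixel calibration are placeholders:

```python
# Sketch: count and size bright objects in a grayscale frame by thresholding
# and connected-component labelling (scipy.ndimage). The threshold and the
# micron-per-pixel calibration are hypothetical; this is not the authors' code.
import numpy as np
from scipy import ndimage

def count_and_size(frame, threshold=120, um_per_px=1.4):
    binary = frame > threshold
    labels, n_objects = ndimage.label(binary)          # connected components
    areas_px = np.bincount(labels.ravel())[1:]         # drop background (0)
    diameters_um = 2.0 * np.sqrt(areas_px / np.pi) * um_per_px
    return n_objects, diameters_um

# a random array stands in for a camera frame here
frame = (np.random.default_rng(0).random((480, 640)) * 255).astype(np.uint8)
count, sizes = count_and_size(frame)
print(count, sizes[:5])
```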

  13. Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    PubMed Central

    Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862

  14. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
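
    As background to such calculations: after a log transformation, a relative (percentage) treatment difference becomes an approximately additive shift, so a standard two-sample formula applies. A sketch with placeholder variance values (the paper's own estimates are in its tables):

```python
# Sketch: n per arm to detect a relative difference in C-peptide AUC when the
# analysis is done on the log(x+1) scale. sigma and the effect size here are
# hypothetical placeholders, not the paper's estimates.
from math import log
from scipy.stats import norm

def n_per_arm(sigma, rel_diff, alpha=0.05, power=0.85):
    """Two-sample z-approximation; a relative effect of `rel_diff` on the
    original scale is roughly an additive shift of log(1 + rel_diff) after a
    log transformation (exactly so for log(x), approximate for log(x+1))."""
    delta = log(1 + rel_diff)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

print(round(n_per_arm(sigma=0.45, rel_diff=0.50)))   # 50% higher mean AUC
```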

  15. Pituitary gland volumes in bipolar disorder.

    PubMed

    Clark, Ian A; Mackay, Clare E; Goodwin, Guy M

    2014-12-01

    Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV), and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³, or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: 0.23; CI: -0.14 to 0.59). While the results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses, with sample sizes greater even than those used in the current meta-analysis, are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. The results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders.

  16. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, k_eq, of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine k_eq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant k_eq REV with increasing sample size, k_eq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for k_eq anisotropy is larger than that for k_eq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of k_eq with sample size as postulated in some earlier studies, our results highlight a strong k_eq anisotropy that influences scale dependence.

  17. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
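
    The transfer-function (angular spectrum) method discussed above is compact to write down, and the ideal-sampling condition for the chirp can be computed directly. A generic textbook-style sketch, not code from the paper, with illustrative grid parameters:

```python
# Sketch: FFT-based Fresnel propagation via the transfer-function (angular
# spectrum) approach, with the critical distance at which the quadratic phase
# chirp is ideally sampled. A generic textbook-style form, not code from the
# paper; grid size, wavelength, and distances are illustrative.
import numpy as np

def prop_tf(u1, L, wavelength, z):
    """Propagate field u1 (N x N, side length L) a distance z."""
    N = u1.shape[0]
    fx = np.fft.fftfreq(N, d=L / N)               # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u1) * H)

N, L, wl = 512, 5e-3, 0.5e-6                       # samples, 5 mm side, 500 nm
z_crit = (L / N) * L / wl                          # ideal sampling: dx = wl*z/L
print(f"critically sampled at z = {z_crit:.2f} m")

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
u1 = (np.abs(X) < 5e-4) * (np.abs(Y) < 5e-4)       # 1 mm square aperture
u2 = prop_tf(u1.astype(complex), L, wl, z_crit)
print(np.abs(u2).max())
```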

  18. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  19. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
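
    For reference, the basic scale-up estimator that the generalized version builds on divides the total number of reported ties to the hidden population by the total personal network size, then scales by the frame population size. A sketch with made-up data:

```python
# Sketch: the *basic* scale-up estimator discussed above. y[i] counts the
# people respondent i knows in the hidden population, d[i] is respondent i's
# total personal network size, and N_F is the frame population size. Data are
# made up; the paper's generalized estimator adds further adjustment factors.
import numpy as np

y = np.array([0, 2, 1, 0, 3, 1, 0, 0, 2, 1])      # reported hidden-pop ties
d = np.array([150, 320, 210, 90, 400, 260, 120, 180, 310, 220])
N_F = 1_000_000                                   # frame population size

N_hidden = N_F * y.sum() / d.sum()                # basic scale-up estimate
print(f"estimated hidden population size: {N_hidden:,.0f}")
```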

  20. Survival analysis and classification methods for forest fire size

    PubMed Central

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at “being held” (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at “being held” exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497

  1. Survival analysis and classification methods for forest fire size.

    PubMed

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.
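
    A Cox proportional hazards analysis of the kind preferred here is a few lines with the lifelines package. The columns below are hypothetical stand-ins for the three selected covariates, not the Alberta data:

```python
# Sketch: a Cox proportional hazards fit of the kind preferred in the paper,
# using the lifelines package. Column names and values are hypothetical stand-
# ins for the three covariates (fire weather index, fuel type, attack method).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "size_growth_ha": [1.2, 30.0, 0.4, 210.0, 5.5, 75.0, 2.1, 14.0, 3.3, 48.0],
    "held":           [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 1 = "being held" seen
    "fwi":            [8.0, 21.5, 4.2, 30.1, 12.3, 25.0, 9.8, 18.7, 6.1, 15.4],
    "fuel_conifer":   [0, 1, 0, 1, 0, 1, 1, 0, 0, 1],  # dummy-coded fuel type
    "air_attack":     [0, 1, 0, 1, 1, 0, 0, 0, 1, 1],  # dummy-coded method
})

cph = CoxPHFitter()
cph.fit(df, duration_col="size_growth_ha", event_col="held")
cph.print_summary()   # hazard ratios for each covariate
```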

  2. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recently published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize statistical significance without considering effect size and whether differences could be considered clinically meaningful.
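
    The sample size bias discussed above is easy to demonstrate: the p-value attached to a fixed correlation shrinks with n, even when the effect stays trivially small. A short sketch using the standard t-test for a Pearson r:

```python
# Sketch: the same correlation can be "significant" or not depending only on
# sample size, which is why effect size matters. Standard t-test for r.
import numpy as np
from scipy import stats

def p_for_r(r, n):
    """Two-sided p-value for Pearson r with n observations."""
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (20, 200, 2000):
    print(f"r = 0.10, n = {n:5d}  ->  p = {p_for_r(0.10, n):.4f}")
```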

  3. Evaluation of Hydrodynamic Chromatography Coupled with UV-Visible, Fluorescence and Inductively Coupled Plasma Mass Spectrometry Detectors for Sizing and Quantifying Colloids in Environmental Media

    PubMed Central

    Philippe, Allan; Schaumann, Gabriele E.

    2014-01-01

    In this study, we evaluated hydrodynamic chromatography (HDC) coupled with inductively coupled plasma mass spectrometry (ICP-MS) for the analysis of nanoparticles in environmental samples. Using two commercially available columns (Polymer Labs-PDSA type 1 and 2), a set of well characterised calibrants and a new external time marking method, we showed that flow rate and eluent composition have little influence on the size resolution and, therefore, can be adapted to the particularities of the sample. Monitoring the agglomeration of polystyrene nanoparticles over time succeeded without observable disagglomeration, suggesting that even weak agglomerates can be measured using HDC. Simultaneous determination of gold colloid concentration and size using ICP-MS detection was validated for elemental concentrations in the ppb range. HDC-ICP-MS was successfully applied to samples containing a high organic and ionic background. Indeed, online combination of UV-visible, fluorescence and ICP-MS detectors allowed distinguishing between organic molecules and inorganic colloids during the analysis of Ag nanoparticles in synthetic surface waters and TiO2 and ZnO nanoparticles in commercial sunscreens. Taken together, our results demonstrate that HDC-ICP-MS is a flexible, sensitive and reliable method to measure the size and the concentration of inorganic colloids in complex media and suggest that there may be a promising future for the application of HDC in environmental science. Nonetheless, the rigorous measurement of agglomerates and of matrices containing natural colloids still needs to be studied in detail. PMID:24587393

  4. Evaluation of hydrodynamic chromatography coupled with UV-visible, fluorescence and inductively coupled plasma mass spectrometry detectors for sizing and quantifying colloids in environmental media.

    PubMed

    Philippe, Allan; Schaumann, Gabriele E

    2014-01-01

    In this study, we evaluated hydrodynamic chromatography (HDC) coupled with inductively coupled plasma mass spectrometry (ICP-MS) for the analysis of nanoparticles in environmental samples. Using two commercially available columns (Polymer Labs-PDSA type 1 and 2), a set of well characterised calibrants and a new external time marking method, we showed that flow rate and eluent composition have little influence on the size resolution and, therefore, can be adapted to the particularities of the sample. Monitoring the agglomeration of polystyrene nanoparticles over time succeeded without observable disagglomeration, suggesting that even weak agglomerates can be measured using HDC. Simultaneous determination of gold colloid concentration and size using ICP-MS detection was validated for elemental concentrations in the ppb range. HDC-ICP-MS was successfully applied to samples containing a high organic and ionic background. Indeed, online combination of UV-visible, fluorescence and ICP-MS detectors allowed distinguishing between organic molecules and inorganic colloids during the analysis of Ag nanoparticles in synthetic surface waters and TiO₂ and ZnO nanoparticles in commercial sunscreens. Taken together, our results demonstrate that HDC-ICP-MS is a flexible, sensitive and reliable method to measure the size and the concentration of inorganic colloids in complex media and suggest that there may be a promising future for the application of HDC in environmental science. Nonetheless, the rigorous measurement of agglomerates and of matrices containing natural colloids still needs to be studied in detail.

  5. Phase-contrast x-ray computed tomography for biological imaging

    NASA Astrophysics Data System (ADS)

    Momose, Atsushi; Takeda, Tohoru; Itai, Yuji

    1997-10-01

    We have shown so far that 3D structures in biological soft tissues such as cancer can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, a field of view larger than a few centimeters is desired. Therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy, from the standpoint of dose, as a function of sample size. Moreover, the spatial resolution required of an image sensor is discussed as a function of x-ray energy and sample size, based on the requirements of the interference-fringe analysis.

  6. Some radiation effects on organic binders in X-ray fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    Novosel-Radović, Vj.; Maljković, Da.; Nenadić, N.

    The paper deals with diminished wear resistance of standard samples in X-ray fluorescence spectrometry. The effect of X-ray irradiation on pellet samples, pressed with starch as organic binder, was investigated by sieve analysis and scanning electron microscopy. A change in the starch grain size was found as a result of swelling and cracking.

  7. The Relation of Economic Status to Subjective Well-Being in Developing Countries: A Meta-Analysis

    ERIC Educational Resources Information Center

    Howell, Ryan T.; Howell, Colleen J.

    2008-01-01

    The current research synthesis integrates the findings of 111 independent samples from 54 economically developing countries that examined the relation between economic status and subjective well-being (SWB). The average economic status-SWB effect size was strongest among low-income developing economies (r = 0.28) and for samples that were least…

  8. Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests

    ERIC Educational Resources Information Center

    Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine

    2012-01-01

    Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…

  9. Evaluation of errors in quantitative determination of asbestos in rock

    NASA Astrophysics Data System (ADS)

    Baietto, Oliviero; Marini, Paola; Vitaliti, Martina

    2016-04-01

    The quantitative determination of the asbestos content of rock matrices is a complex operation susceptible to large errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including better representativeness of the analyzed sample, more effective recognition of chrysotile, and lower cost. The DIATI LAA internal methodology for PCOM analysis is based on mild grinding of a rock sample, its subdivision into 5-6 grain-size classes smaller than 2 mm, and subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of the liquids of known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution can be applied, which theoretically defines the variation in the fiber count resulting from the observation of analysis fields chosen randomly on the filter. The analysis of rock matrices, instead, cannot rely on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and fiber bundles observed, and the resulting ratio between the weight of the fibrous component and that of the granular one. The error estimates generally provided by public and private institutions vary between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or link it to the asbestos content. Our work aims to provide a reliable estimate of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. The error assessment must be made through repetition of the same analysis on the same sample, to estimate both the error associated with the representativeness of the sample and the error related to the sensitivity of the operator, in order to provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to define more specifically the error of the methodology.

  10. Effect of modulation of the particle size distributions in the direct solid analysis by total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.

    2018-02-01

    The main goal of this work was to investigate, in a systematic way, the influence of controlled modulation of the particle size distribution of a representative solid sample on the more relevant analytical parameters of the Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) quantitative method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particle shapes toward smaller average sizes, together with minimizing the width of the particle size distributions, produces a marked increase in accuracy and a reduction of uncertainties and detection limits for the DSA-TXRF methodology. These achievements allow the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.

  11. Micrometer-scale particle sizing by laser diffraction: critical impact of the imaginary component of refractive index.

    PubMed

    Beekman, Alice; Shan, Daxian; Ali, Alana; Dai, Weiguo; Ward-Smith, Stephen; Goldenberg, Merrill

    2005-04-01

    This study evaluated the effect of the imaginary component of the refractive index on laser diffraction particle size data for pharmaceutical samples. Excipient particles 1-5 µm in diameter (irregular morphology) were measured by laser diffraction. Optical parameters were obtained and verified based on comparison of calculated vs. actual particle volume fraction. Inappropriate imaginary components of the refractive index can lead to inaccurate results, including false peaks in the size distribution. For laser diffraction measurements, obtaining appropriate or "effective" imaginary components of the refractive index was not always straightforward. When the recommended criteria such as the concentration match and the fit of the scattering data gave similar results for very different calculated size distributions, a supplemental technique, microscopy with image analysis, was used to decide between the alternatives. Use of effective optical parameters produced a good match between laser diffraction data and microscopy/image analysis data. The imaginary component of the refractive index can have a major impact on particle size results calculated from laser diffraction data. When performed properly, laser diffraction and microscopy with image analysis can yield comparable results.

  12. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, i.e. Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses; hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, demonstrating the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial.
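
    The size computation described can be illustrated on a small scale: enumerate all two-by-two tables for equal group sizes, mark where FET rejects, and maximize the rejection probability over the common success probability. A simplified sketch (small n for speed; the paper's algorithm handles n up to 125):

```python
# Sketch: the attained size of Fisher's exact test for equal group sizes n,
# computed by enumerating all outcomes and maximizing the rejection
# probability over the common success probability. A simplified version of
# the kind of computation described above, not the paper's algorithm.
import numpy as np
from scipy.stats import binom, fisher_exact

def fet_size(n, alpha=0.05):
    """Attained size of two-sided FET with n subjects per group."""
    reject = np.zeros((n + 1, n + 1))          # 1.0 where FET rejects
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            _, p = fisher_exact([[x1, n - x1], [x2, n - x2]])
            reject[x1, x2] = p <= alpha
    probs = []
    for p0 in np.linspace(0.01, 0.99, 99):     # common success probability
        pmf = binom.pmf(np.arange(n + 1), n, p0)
        probs.append(pmf @ reject @ pmf)       # P(reject | p1 = p2 = p0)
    return max(probs)

print(f"size of FET at nominal 0.05, n = 15/group: {fet_size(15):.4f}")
```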

  13. A Systematic Review of Surgical Randomized Controlled Trials: Part 2. Funding Source, Conflict of Interest, and Sample Size in Plastic Surgery.

    PubMed

    Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit

    2016-02-01

    The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and that randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.

  14. Synthesis, structural, optical and morphological characterization of hematite through the precipitation method: Effect of varying the nature of the base

    NASA Astrophysics Data System (ADS)

    Lassoued, Abdelmajid; Lassoued, Mohamed Saber; Dkhil, Brahim; Gadri, Abdellatif; Ammar, Salah

    2017-08-01

    Iron oxide (α-Fe2O3) nanoparticles were synthesized using the precipitation method, with (FeCl3·6H2O), NaOH, KOH and NH4OH as the only raw materials. The impact of varying the nature of the base on the crystalline phase, size and morphology of the α-Fe2O3 products was explored. XRD spectra revealed that the samples crystallize in the rhombohedral (hexagonal) system at 800 °C. Transmission Electron Microscopy (TEM) and Scanning Electron Microscopy (SEM) were used to examine the morphology of the synthesized nanoparticles and determine their sizes, while Fourier Transform Infra-Red (FT-IR) spectroscopy permitted observation of the Fe-O vibration band. Raman spectroscopy was used not only to confirm that hematite had been synthesized but also to identify its phonon modes. The Thermo Gravimetric Analysis (TGA) findings allow determination of the thermal cycle of the samples, whereas the Differential Thermal Analysis (DTA) findings allow identification of the phase transition temperature. Besides, the optical investigation revealed that the samples have an optical gap of about 2.1 eV. The findings highlight that the nature of the precipitating agent plays a significant role in the morphology of the products and the formation of the crystalline phase. Hematite synthesis with the base NH4OH brought about much stronger, sharper and wider diffraction peaks of α-Fe2O3. The samples are spherical in morphology with a size of about 61 nm, while the sizes of the hematite nanoparticles synthesized with NaOH and KOH are of the order of 82 and 79 nm, respectively.

  15. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    PubMed

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values at small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
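
    The first approach can be sketched as follows: resample patients with replacement and refit the parametric distribution each time, so that the spread of fitted parameters carries into probabilistic sensitivity analysis. The Weibull choice and the synthetic data below are assumptions for illustration only:

```python
# Sketch of approach 1 (non-parametric bootstrap): refit the parametric
# time-to-event distribution on resampled patient-level data so that parameter
# uncertainty propagates into the PSA. A Weibull fit is assumed here purely
# for illustration; the data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
times = stats.weibull_min.rvs(c=1.4, scale=20.0, size=100, random_state=rng)

boot_params = []
for _ in range(1000):
    resample = rng.choice(times, size=times.size, replace=True)
    c, _, scale = stats.weibull_min.fit(resample, floc=0)  # fix location at 0
    boot_params.append((c, scale))

boot = np.array(boot_params)
print("shape  mean {:.2f}, 95% CI ({:.2f}, {:.2f})".format(
    boot[:, 0].mean(), *np.percentile(boot[:, 0], [2.5, 97.5])))
print("scale  mean {:.1f}, 95% CI ({:.1f}, {:.1f})".format(
    boot[:, 1].mean(), *np.percentile(boot[:, 1], [2.5, 97.5])))
```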

  16. Selective Laser Melting of Metal Powder of Steel 316L

    NASA Astrophysics Data System (ADS)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Tomilina, T. M.

    2016-08-01

    In this article, the results of an experimental study of the structure and mechanical properties of materials obtained by selective laser melting (SLM) of 316L steel metal powder are presented. Before growing the samples, as part of input control, the morphology of the surface of the powder particles was studied and particle size analysis was carried out. In addition, 3D X-ray quality control of the grown samples was carried out in order to detect hidden defects and assess them qualitatively and quantitatively. To determine the strength characteristics of the samples synthesized by the SLM method, static tensile tests were conducted. To determine residual stresses in the material of the samples, X-ray diffraction analysis was carried out.

  17. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using a k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. It also finds more enriched pathways than the other tested metrics, which may lead to new biological discoveries.
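
    Ranking metrics of the kind compared here are one-liners over a gene-by-sample matrix. The sketch below computes the signal-to-noise ratio in its common GSEA form and ranks genes by its absolute value; the data are simulated:

```python
# Sketch: one of the ranking metrics compared above, computed per gene for a
# two-group expression matrix. The signal-to-noise form follows the common
# GSEA definition; the data and group sizes are made up.
import numpy as np

rng = np.random.default_rng(7)
expr = rng.normal(size=(1000, 12))          # 1000 genes x 12 samples
groups = np.array([0] * 6 + [1] * 6)        # two conditions, 6 samples each

a, b = expr[:, groups == 0], expr[:, groups == 1]
s2n = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1, ddof=1)
                                           + b.std(axis=1, ddof=1))

# rank genes by |S2N| (the best-performing metrics above used absolute forms)
ranking = np.argsort(-np.abs(s2n))
print(ranking[:10], np.abs(s2n)[ranking[:10]].round(2))
```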

  18. DRME: Count-based differential RNA methylation analysis at small sample size scenario.

    PubMed

    Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia

    2016-04-15

    Differential methylation, which concerns differences in the degree of epigenetic regulation via methylation between two conditions, has been formulated with a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample scenarios with discrete read counts in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code, together with a MeRIP-Seq dataset, is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article.
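
    The modeling issue described can be illustrated with a toy beta-binomial likelihood-ratio test on methylated/total read counts. This is not the DRME algorithm, whose dispersion handling and transcriptional-regulation adjustment are more involved; the counts and the fixed overdispersion below are placeholders:

```python
# Sketch: a simple beta-binomial likelihood-ratio test on methylated/total
# read counts (not the DRME algorithm itself; counts and the fixed
# overdispersion are illustrative placeholders).
import numpy as np
from scipy.stats import betabinom, chi2
from scipy.optimize import minimize_scalar

meth = {"ctrl": np.array([14, 9, 11]), "case": np.array([25, 30, 22])}
total = {"ctrl": np.array([40, 30, 35]), "case": np.array([42, 50, 38])}
PHI = 20.0  # fixed overdispersion (a+b); DRME estimates dispersion from data

def nll(p, ks, ns):
    a, b = p * PHI, (1 - p) * PHI
    return -betabinom.logpmf(ks, ns, a, b).sum()

def fit(ks, ns):
    res = minimize_scalar(nll, bounds=(1e-4, 1 - 1e-4), args=(ks, ns),
                          method="bounded")
    return res.fun  # minimized negative log-likelihood

ll_alt = -(fit(meth["ctrl"], total["ctrl"]) + fit(meth["case"], total["case"]))
ll_null = -fit(np.concatenate([meth["ctrl"], meth["case"]]),
               np.concatenate([total["ctrl"], total["case"]]))
lrt = 2 * (ll_alt - ll_null)
print(f"LRT = {lrt:.2f}, p = {chi2.sf(lrt, df=1):.4f}")
```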

  19. New Insights into the Composition and Texture of Lunar Regolith Using Ultrafast Automated Electron-Beam Analysis

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Wentworth, Susan J.; Schrader, Christian M.; Stoeser, Doug; Botha, Pieter WSK; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul; McKay, David

    2008-01-01

    Sieved grain mounts of Apollo 16 drive tube samples have been examined using QEMSCAN - an innovative electron beam technology. By combining multiple energy-dispersive X-ray detectors, fully automated control, and off-line image processing, to produce digital mineral maps of particles exposed on polished surfaces, the result is an unprecedented quantity of mineralogical and petrographic data, on a particle-by-particle basis. Experimental analysis of four size fractions (500-250 microns, 150-90 microns, 75-45 microns and < 20 microns), prepared from two samples (64002,374 and 64002,262), has produced a robust and uniform dataset which allows for the quantification of mineralogy; texture; particle shape, size and density; and the digital classification of distinct particle types in each measured sample. These preliminary data show that there is a decrease in plagioclase modal content and an opposing increase in glass modal content, with decreasing particle size. These findings, together with data on trace phases (metals, sulphides, phosphates, and oxides), provide not only new insights into the make-up of lunar regolith at the Apollo 16 landing site, but also key physical parameters which can be used to design lunar simulants, and compute Figures of Merit for each material produced.

  20. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
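
    A simulation of the sort reported can be sketched by drawing cluster-level prevalences from a beta distribution (which induces intracluster correlation) and applying an LQAS decision rule to the summed cases. The thresholds and ICC below are illustrative, not the study's parameters:

```python
# Sketch: estimating the classification behaviour of a clustered LQAS design
# by simulation, using a beta-binomial model for intracluster correlation
# (for which icc = 1/(a+b+1)). Thresholds and icc are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def p_classify_high(true_p, icc=0.1, clusters=67, m=3, rule=0.10, reps=20000):
    s = (1 - icc) / icc                       # a + b of the beta distribution
    cluster_p = rng.beta(true_p * s, (1 - true_p) * s, size=(reps, clusters))
    cases = rng.binomial(m, cluster_p).sum(axis=1)
    d = int(rule * clusters * m)              # decision rule: "high" if > d
    return (cases > d).mean()

# error rates for the 67x3 design at prevalences below/above a 10% threshold
print("P(classify high | p = 0.05):", p_classify_high(0.05))
print("P(classify high | p = 0.15):", p_classify_high(0.15))
```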
