Sample records for current sample size

  1. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

    NASA Astrophysics Data System (ADS)

    Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

    Polymer insulators have been used worldwide because of several properties superior to those of porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz A/D board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current, and dry band arc discharge current) using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry band arc discharge increased remarkably with increasing gap length.

  2. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  3. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
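
    The key modeling idea here (sampling times as an inhomogeneous Poisson process whose intensity tracks effective population size) can be sketched in a few lines. In the Python sketch below, the seasonal Ne(t) and the proportionality constant beta are illustrative assumptions, not estimates from the paper; the thinning algorithm is the standard way to simulate such a process.

    ```python
    # Draw sequence-sampling times from an inhomogeneous Poisson process with
    # intensity beta * Ne(t), via thinning. Ne(t) and beta are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def ne(t):
        # illustrative seasonal effective population size (maximum value 180)
        return 100 + 80 * np.sin(2 * np.pi * t)

    def sample_times(beta, t_max):
        lam_max = beta * 180.0  # upper bound on the intensity, for thinning
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_max:
                return np.array(times)
            if rng.random() < beta * ne(t) / lam_max:  # accept with prob lam(t)/lam_max
                times.append(t)

    print(len(sample_times(beta=0.05, t_max=5.0)))  # expected ~ 0.05 * 100 * 5 = 25
    ```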

  4. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the sample sizes required of the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and by 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.

  5. Substantial Expansion of Detectable Size Range in Ionic Current Sensing through Pores by Using a Microfluidic Bridge Circuit.

    PubMed

    Yasaki, Hirotoshi; Yasui, Takao; Yanagida, Takeshi; Kaji, Noritada; Kanai, Masaki; Nagashima, Kazuki; Kawai, Tomoji; Baba, Yoshinobu

    2017-10-11

    Measuring ionic currents passing through nano- or micropores has shown great promise for the electrical discrimination of various biomolecules, cells, bacteria, and viruses. However, conventional measurements have shown there is an inherent limitation to the detectable particle volume (1% of the pore volume), which critically hinders applications to real mixtures of biomolecule samples with a wide size range of suspended particles. Here we propose a rational methodology that can detect samples with a detectable particle volume of 0.01% of the pore volume by measuring a transient current generated from the potential differences in a microfluidic bridge circuit. Our method substantially suppresses the background ionic current from the μA level to the pA level, which essentially lowers the detectable particle volume limit even for relatively large pore structures. Indeed, utilizing a microscale long pore structure (volume of 5.6 × 10⁴ aL; height and width of 2.0 × 2.0 μm; length of 14 μm), we successfully detected various samples including polystyrene nanoparticles (volume: 4 aL), bacteria, cancer cells, and DNA molecules. Our method will expand the applicability of ionic current sensing systems for various mixed biomolecule samples with a wide size range, which have been difficult to measure by previously existing pore technologies.
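
    For scale, the classic Coulter-principle approximation (a textbook result, not this paper's bridge-circuit analysis) says the relative current blockade is roughly the particle-to-pore volume ratio; plugging in the pore and particle volumes quoted above shows why the reported detection sits near the 0.01% level.

    ```python
    # Relative blockade ~ particle volume / pore volume (small-particle limit).
    pore_volume_aL = 2.0 * 2.0 * 14 * 1000  # 2 x 2 x 14 um channel = 5.6e4 aL
    particle_volume_aL = 4.0                # polystyrene nanoparticle from the abstract
    print(particle_volume_aL / pore_volume_aL)  # ~7.1e-5, i.e. ~0.007% of pore volume
    ```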

  6. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
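
    The two rules are easy to evaluate numerically. The sketch below assumes a hypothetical cost model in which per-subject costs escalate as recruitment gets harder; none of the cost parameters come from the paper.

    ```python
    # Rule 1: choose n minimizing average cost per subject, C(n)/n.
    # Rule 2: choose n minimizing total cost over sqrt(n), C(n)/sqrt(n).
    import numpy as np

    def total_cost(n, fixed=50_000.0, per_subject=500.0, escalation=2.0):
        # hypothetical cost curve: fixed overhead + linear + escalating recruitment
        return fixed + per_subject * n + escalation * n**2

    n = np.arange(10, 2001)
    rule1 = n[np.argmin(total_cost(n) / n)]
    rule2 = n[np.argmin(total_cost(n) / np.sqrt(n))]
    print(rule1, rule2)  # ~158 and ~59 for these hypothetical costs
    ```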

  7. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  8. State Estimates of Disability in America. Disability Statistics Report 3.

    ERIC Educational Resources Information Center

    LaPlante, Mitchell P.

    This study presents and discusses existing data on disability by state, from the 1980 and 1990 censuses, the Current Population Survey (CPS), and the National Health Interview Survey (NHIS). The study used direct methods for states with large sample sizes and synthetic estimates for states with low sample sizes. The study's highlighted findings…

  9. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
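
    For contrast, the standard closed-form approach the abstract alludes to can be sketched as follows. This is the familiar Hsieh-style normal-approximation formula for a one-standard-deviation change in a continuous covariate, with the usual 1/(1-R²) variance inflation for multiple regression; it is not the authors' logit-normal method, and the inputs are hypothetical.

    ```python
    # Standard approximate sample size for logistic regression (Hsieh-style).
    import math
    from scipy.stats import norm

    def n_simple_logistic(p_at_mean, log_or_per_sd, alpha=0.05, power=0.80):
        z_a, z_b = norm.isf(alpha / 2), norm.ppf(power)
        return math.ceil((z_a + z_b) ** 2 / (p_at_mean * (1 - p_at_mean) * log_or_per_sd**2))

    def n_multiple_logistic(n_simple, r2):
        # r2 = coefficient of determination of the covariate of interest with the
        # other covariates -- the quantity the abstract notes is often unavailable
        return math.ceil(n_simple / (1 - r2))

    n1 = n_simple_logistic(p_at_mean=0.3, log_or_per_sd=math.log(1.5))
    print(n1, n_multiple_logistic(n1, r2=0.2))  # ~228 and ~285
    ```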

  10. Obesity and Body Size Preferences of Jordanian Women

    ERIC Educational Resources Information Center

    Madanat, Hala; Hawks, Steven R.; Angeles, Heidi N.

    2011-01-01

    The nutrition transition is associated with increased obesity rates and increased desire to be thin. This study evaluates the relationship between actual body size and desired body size among a representative sample of 800 Jordanian women. Using Stunkard's body silhouettes, women were asked to identify their current and ideal body sizes, healthy…

  11. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  12. Thoracic and respirable particle definitions for human health risk assessment.

    PubMed

    Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman

    2013-04-10

    Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fractions of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, which may be used in the design of experimental studies and the interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, an adult female, and a 10-year-old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sampling criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract, to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.

  13. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fractions of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, which may be used in the design of experimental studies and the interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, an adult female, and a 10-year-old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sampling criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract, to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  14. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
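
    A generic way to budget for unequal cluster sizes (not the paper's PQL-specific derivation) is the design-effect approximation of Eldridge and colleagues, which uses the coefficient of variation (CV) of cluster size; the ~18% inflation in this hypothetical example is of the same order as the 14% figure quoted above.

    ```python
    # Design effect for a cluster randomized trial with varying cluster sizes:
    # DE = 1 + ((CV^2 + 1) * m_bar - 1) * ICC   (Eldridge et al.-style approximation)
    def design_effect(mean_cluster_size, cv_cluster_size, icc):
        return 1 + ((cv_cluster_size**2 + 1) * mean_cluster_size - 1) * icc

    equal = design_effect(mean_cluster_size=20, cv_cluster_size=0.0, icc=0.05)
    unequal = design_effect(mean_cluster_size=20, cv_cluster_size=0.6, icc=0.05)
    print(unequal / equal)  # ~1.18: ~18% more observations needed in this example
    ```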

  15. Classifying plant series-level forest potential types: methods for subbasins sampled in the midscale assessment of the interior Columbia basin.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Scott D. Kreiter; Craig A. Miller; Cecilia H. McNicoll; Michele. Wasienko-Holland

    2000-01-01

    In the interior Columbia River basin midscale ecological assessment, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and composition, and landscape vulnerability to wildfires...

  16. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333

  17. The impact of accelerating faster than exponential population growth on genetic variation.

    PubMed

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.

  18. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and calculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  19. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
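
    The simulate-and-refit logic behind such a calculator can be sketched directly. The sketch below makes simplifying assumptions that are not the paper's (no seroreversion, a uniform age distribution, a known change point, and a likelihood-ratio test in place of the logistic power-curve approximation), but it shows how power is estimated for a given sample size.

    ```python
    # Simulation-based power for detecting a drop in seroconversion rate (SCR)
    # at a known change point tau, under a reverse catalytic model with no
    # seroreversion: P(seropositive | age a) = 1 - exp(-cumulative SCR over life).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    rng = np.random.default_rng(1)

    def p_seropos(age, lam_old, lam_new, tau):
        cum = np.where(age >= tau, lam_old * (age - tau) + lam_new * tau, lam_new * age)
        return 1.0 - np.exp(-cum)

    def negloglik(params, age, y, tau, single):
        lam_old, lam_new = (params[0], params[0]) if single else params
        p = np.clip(p_seropos(age, lam_old, lam_new, tau), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    def power(n, lam_old=0.05, lam_new=0.02, tau=10.0, nsim=200, alpha=0.05):
        hits = 0
        for _ in range(nsim):
            age = rng.uniform(1, 60, n)
            y = rng.random(n) < p_seropos(age, lam_old, lam_new, tau)
            ll1 = minimize(negloglik, [0.03], args=(age, y, tau, True),
                           bounds=[(1e-6, 2.0)]).fun          # stable-SCR fit
            ll2 = minimize(negloglik, [0.03, 0.03], args=(age, y, tau, False),
                           bounds=[(1e-6, 2.0)] * 2).fun      # reduced-SCR fit
            hits += 2 * (ll1 - ll2) > chi2.ppf(1 - alpha, df=1)
        return hits / nsim

    for n in (250, 500, 1000):
        print(n, power(n))
    ```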

  20. Synthesis of thoria nano-particles at low temperature through base electrogeneration on steel 316L surface: Effect of current density

    NASA Astrophysics Data System (ADS)

    Yousefi, Taher; Torab-Mostaedi, Meisam; Mobtaker, Hossein Ghasemi; Keshtkar, Ali Reza

    2016-10-01

    The strategy developed in this study offers significant advantages (simplicity and cleanness of the method, product purity, and a new product morphology) over the conventional routes for the synthesis of ThO2 nanostructures. The effect of current density on morphology was studied. The synthesized powder was characterized by means of Powder X-ray Diffraction (PXRD), Transmission Electron Microscopy (TEM, Phillips EM 2085), Brunauer-Emmett-Teller (BET) analysis, and Fourier Transform Infrared (FT-IR) spectroscopy. The results show that the current density has a great effect on the morphology of the samples. The average particle size decreases as the applied current density increases, falling from 50 to 15 nm as the current density increases from 2 to 5 mA cm⁻².

  1. Investigation on the structural characterization of pulsed p-type porous silicon

    NASA Astrophysics Data System (ADS)

    Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.

    2017-08-01

    P-type porous silicon (PS) was successfully formed using electrochemical pulse etching (PC) and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF) based solution at a current density of J = 10 mA/cm² for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), comprising an on time (Ton) of 10 ms and a pause time (Toff) of 4 ms. FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produces more uniform circular structures, with an estimated average pore size of 42.14 nm, compared to an estimated average size of 16.37 nm for the DC porous (PDC) sample. The EDX spectra for both samples showed high Si content with minimal presence of oxide.

  2. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
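
    The quantity itself is compact enough to sketch. The version below leans on the identity that the sum of squared distances to a centroid equals the sum of squared interpoint dissimilarities divided by n; the published R functions remain the authoritative implementation, and the Bray-Curtis choice and toy data here are illustrative.

    ```python
    # Pseudo multivariate dissimilarity-based standard error (MultSE) for one sample.
    import numpy as np
    from scipy.spatial.distance import pdist

    def mult_se(data, metric="braycurtis"):
        n = data.shape[0]
        ss = np.sum(pdist(data, metric=metric) ** 2) / n  # pseudo SS about the centroid
        v = ss / (n - 1)                                  # pseudo multivariate variance
        return np.sqrt(v / n)

    abund = np.random.default_rng(0).poisson(3, size=(20, 8))  # 20 samples x 8 taxa
    print(mult_se(abund))
    ```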

  3. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands, which can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
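
    The two worked examples in the abstract can be checked with a Bonferroni-corrected two-sided z-test, where the required n scales as (z_alpha' + z_power)²; this reproduces the quoted 13% and 70% figures, though the paper's own calculator may use a different formulation.

    ```python
    # Sample size inflation when moving from m_old to m_new hypothesis tests.
    from scipy.stats import norm

    def n_inflation(m_new, m_old, alpha=0.05, power=0.80):
        z_b = norm.ppf(power)
        z_new, z_old = norm.isf(alpha / (2 * m_new)), norm.isf(alpha / (2 * m_old))
        return ((z_new + z_b) / (z_old + z_b)) ** 2

    print(n_inflation(1e7, 1e6))  # ~1.13: ~13% more subjects for ten million tests
    print(n_inflation(10, 1))     # ~1.70: ~70% more for 10 tests vs a single test
    ```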

  4. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.

    PubMed

    Kiermeier, Andreas; Sumner, John; Jenson, Ian

    2015-07-01

    Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
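
    The qualitative point that sampling detects only some contaminated lots follows from a simple binomial argument. The sketch below assumes, purely for illustration, that each sampled unit is independently positive with probability p; the paper's full risk assessment is far more detailed.

    ```python
    # Probability that a plan testing n units detects at least one positive.
    def p_detect(n, p):
        return 1 - (1 - p) ** n

    for p in (0.001, 0.01, 0.05):
        print(p, p_detect(60, p), p_detect(120, p))  # N-60 vs N-120 style plans
    ```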

  6. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method for testing cytostatic cancer treatments, using correlated bivariate binomial random variables to assess two primary outcomes simultaneously. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  7. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  8. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
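
    The zero-acceptance plan the abstract describes reduces to finding the smallest n whose probability of drawing no nonconforming units is at or below the consumer's risk. The sketch below reproduces the quoted example (a sample of 273 from a lot of 400 at 1% nonconforming and 99% confidence); it is a re-implementation of the idea, not HYPERSAMP's spreadsheet code.

    ```python
    # Minimum hypergeometric sample size with acceptance number zero.
    from scipy.stats import hypergeom

    def min_sample_size(lot_size, defectives, consumer_risk):
        for n in range(1, lot_size + 1):
            # P(zero nonconforming units in a sample of n)
            if hypergeom.pmf(0, lot_size, defectives, n) <= consumer_risk:
                return n
        return lot_size

    # Abstract's example: lot of 400, 1% nonconforming (4 units), 99% confidence.
    print(min_sample_size(400, 4, 0.01))  # 273
    ```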

  9. Modeling change in potential landscape vulnerability to forest insect and pathogen disturbances: methods for forested subwatersheds sampled in the midscale interior Columbia River basin assessment.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Craig A. Miller; Scott D. Kreiter; R. Brion Salter

    1999-01-01

    In the interior Columbia River basin midscale ecological assessment, including portions of the Klamath and Great Basins, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and...

  10. Single and simultaneous binary mergers in Wright-Fisher genealogies.

    PubMed

    Melfi, Andrew; Viswanath, Divakar

    2018-05-01

    The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ε), ε>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. A single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ε), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large-N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.
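
    The sample-size thresholds can be motivated with standard birthday-problem scaling (a back-of-envelope check, not the paper's proof): per generation, the expected number of binary mergers among k lineages is about C(k,2)/N and of triple mergers about C(k,3)/N².

    ```python
    # Expected per-generation binary and triple mergers for sample size k.
    from math import comb

    N = 10_000  # haploid population size (illustrative)
    for k in (round(N ** (1 / 3)), round(N ** 0.5)):
        print(k, comb(k, 2) / N, comb(k, 3) / N**2)
    # At k ~ N^(1/3) both are rare; at k ~ N^(1/2) binary mergers become common
    # per generation while triple mergers stay negligible, matching the abstract.
    ```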

  11. 50 CFR 648.90 - NE multispecies assessment, framework procedures and specifications, and flexible area action...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; survey results; stock status; current estimates of fishing mortality and overfishing levels; social and... survey data or, if sea sampling data are unavailable, length frequency information from trawl surveys... size; sea sampling, port sampling, and survey data or, if sea sampling data are unavailable, length...

  12. MSeq-CNV: accurate detection of Copy Number Variation from Sequencing of Multiple samples.

    PubMed

    Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi

    2018-03-05

    Currently only a few tools are capable of detecting genome-wide Copy Number Variations (CNVs) based on sequencing of multiple samples. Although aberrations in mate pair insertion sizes provide additional hints for CNV detection based on multiple samples, the majority of current tools rely only on the depth of coverage. Here, we propose a new algorithm (MSeq-CNV) for detecting common CNVs across multiple samples. MSeq-CNV applies a mixture density for modeling aberrations in depth of coverage and abnormalities in mate pair insertion sizes. Each component in this mixture density applies a Binomial distribution for modeling the number of mate pairs with an aberration in insertion size, and a Poisson distribution for emitting the read counts, at each genomic position. MSeq-CNV is applied to simulated data and also to real data from six HapMap individuals with high-coverage sequencing in the 1000 Genomes Project. These individuals include a CEU trio of European ancestry and a YRI trio of Nigerian ethnicity. The ancestry of these individuals is studied by clustering the identified CNVs. MSeq-CNV is also applied to detect CNVs in two samples with low-coverage sequencing in the 1000 Genomes Project and six samples from the Simons Genome Diversity Project.
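
    The per-position emission model described above (Poisson read depth plus Binomial counts of insert-size-aberrant mate pairs) can be written down directly for one mixture component; the parameter names and values here are illustrative, not MSeq-CNV's.

    ```python
    # Log-likelihood of one genomic position under a single mixture component.
    from scipy.stats import binom, poisson

    def component_loglik(depth, n_aberrant, n_pairs, lam, q):
        # lam: expected read depth for this copy-number state (e.g. 1.5x average
        # for a heterozygous duplication); q: P(aberrant insert size) in the state
        return poisson.logpmf(depth, lam) + binom.logpmf(n_aberrant, n_pairs, q)

    print(component_loglik(depth=45, n_aberrant=7, n_pairs=40, lam=45.0, q=0.2))
    ```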

  13. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  14. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Spatio-temporal Evolution of Velocity Structure, Concentration and Grain-size Stratification within Experimental Particulate Gravity Flows: Potential Input Parameters for Numerical Models

    NASA Astrophysics Data System (ADS)

    McCaffrey, W.; Choux, C.; Baas, J.; Haughton, P.

    2001-12-01

    Little is known about the combined spatio-temporal evolution of velocity structure, concentration and grain-size stratification within particulate gravity currents. Yet these data are of primary importance for numerical model validation prior to application to natural flows, such as pyroclastic density currents and turbidity currents. A comprehensive study was carried out on a series of experimental particulate gravity flows of 5% by volume initial concentration. The sediment analogue was polydisperse silica flour (mean grain size ~8 microns). A uniform 30 liter suspension was prepared in an overhead reservoir, then allowed to drain (in about one minute) into a flume 10 m long and 0.3 m wide, water-filled to a depth of 0.3 m. Each flow was siphoned continuously for 52 s at 5 different heights (spaced evenly from 0.6 to 4.6 cm), with samples collected at a frequency of 0.25 Hz, generating 325 samples for grain-size and concentration analysis. Simultaneously, six 4-MHz UDVP (Ultrasonic Doppler Velocity Profiling) probes recorded the horizontal component of flow velocity. All but the highest probe were positioned at the same heights as the siphons. The sampling location was shifted 1.32 m down-current for each of five nominally identical flows, yielding sample locations at 1.32, 2.64, 3.96, 5.28 and 6.60 m from the inlet point. These data can be combined to give both the temporal and spatial evolution of a single idealised flow. The concentration data can be used to define the structure of the flow. The flow first propagated as a jet, then became stratified. The length of the head increased with increasing distance from the reservoir (although the head propagation velocity was uniform). The maximum concentration was located at the base of the flow towards the rear of the head. Grain-size analysis showed that the head was enriched in coarse particles even at the most distal sampling location. Distinct flow stratification developed at a distance between 1.3 m and 2.6 m from the reservoir. In the body of the current, the suspended sediment was normally graded, whereas the tail exhibited inverse grading. This inverse grading may be linked to coarse particles in the head being swept upwards and backwards, then falling back into the body of the current. Alternatively, body turbulence may inhibit the settling of coarse particles. Turbulence may also explain the presence of coarse particles in the flow's head, with turbulence intensity apparently correlated with the flow competence.

  16. Free flux flow in two single crystals of V3Si with slightly different pinning strengths

    NASA Astrophysics Data System (ADS)

    Gafarov, O.; Gapud, A. A.; Moraes, S.; Thompson, J. R.; Christen, D. K.; Reyes, A. P.

    2010-10-01

    Results of recent measurements on two very clean, single-crystal samples of the A15 superconductor V3Si are presented. Magnetization and transport data have already confirmed the "clean" quality of both samples, as manifested by: (i) a high residual resistivity ratio, (ii) very low critical current densities, and (iii) a "peak" effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted in the slightly "dirtier" sample, which consequently also has a higher critical current density Jc(H). High-current Lorentz forces are applied to mixed-state vortices in order to induce the highly ordered free flux flow (FFF) phase, using the same methods as in previous work. A traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of the flux flow resistivity ρf(H), presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes core size into account and predicts a clear deviation from BS. In this study, ρf(H) is confirmed to be consistent with the predictions of KZ, as will be discussed.
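
    For reference, the BS prediction mentioned above is the standard linear law (a textbook result, quoted here for context rather than taken from this abstract):

    ```latex
    % Bardeen-Stephen free-flux-flow resistivity; rho_n = normal-state resistivity
    \rho_f(H) \approx \rho_n \,\frac{H}{H_{c2}}
    ```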

  17. Association Between Smoking and Size of Anal Warts in HIV-infected Women

    PubMed Central

    Luu, HN; Amirian, ES; Beasley, RP; Piller, L; Chan, W; Scheurer, ME

    2015-01-01

    While the association between smoking and HPV infection, cervical cancer, and anal cancer has been well studied, evidence on the association between cigarette smoking and anal warts is limited. The purpose of this study was to investigate whether cigarette smoking status influences the size of anal warts over time in a sample of 976 HIV-infected women from the Women’s Interagency HIV Study (WIHS). A linear mixed model was used to determine the effect of smoking on anal wart size. Even though current smokers had larger anal warts at baseline and a slower growth rate of anal wart size between visits than women who were not current smokers, there was no association between anal wart size and current smoking status over time. Further studies on the role of smoking and the interaction between smoking and other risk factors should, however, be explored. PMID:23155099

  18. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.

  19. EPICS Controlled Collimator for Controlling Beam Sizes in HIPPO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napolitano, Arthur Soriano; Vogel, Sven C.

    2017-08-03

    Controlling the beam spot size and shape in a diffraction experiment determines the probed sample volume. The HIPPO (High-Pressure-Preferred Orientation) neutron time-of-flight diffractometer is located at the Lujan Neutron Scattering Center at Los Alamos National Laboratory. HIPPO characterizes microstructural parameters, such as phase composition, strains, grain size, or texture, of bulk (cm-sized) samples. In the current setup, the beam spot has a 10 mm diameter. Using a collimator consisting of two pairs of neutron-absorbing boron-nitride slabs, the horizontal and vertical dimensions of a rectangular beam spot can be defined. Using the HIPPO robotic sample changer for sample motion, the collimator would enable scanning of e.g. cylindrical samples along the cylinder axis by probing slices of such samples. The project presented here describes the implementation of such a collimator, in particular the motion control software. We utilized the EPICS (Experimental Physics and Industrial Control System) software interface to integrate the collimator control into the HIPPO instrument control system. Using EPICS, commands are sent to commercial stepper motors that move the beam windows.

  20. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
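
    The closing recommendation translates into a one-line calculation: size the study as a design effect of about 2 times the simple-random-sampling requirement. The proportion formula and inputs below are the usual textbook ones, given only for illustration.

    ```python
    # RDS sample size = DEFF * (z^2 p(1-p) / e^2), with DEFF ~ 2 per the paper.
    import math
    from scipy.stats import norm

    def n_rds(p_expected, margin, deff=2.0, conf=0.95):
        z = norm.isf((1 - conf) / 2)
        n_srs = z**2 * p_expected * (1 - p_expected) / margin**2
        return math.ceil(deff * n_srs)

    print(n_rds(0.10, 0.03))  # ~769 for a 10% trait estimated to within +/-3 points
    ```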

  1. Maturation and sexual ontogeny in the spangled emperor Lethrinus nebulosus.

    PubMed

    Marriott, R J; Jarvis, N D C; Adams, D J; Gallash, A E; Norriss, J; Newman, S J

    2010-04-01

    The reproductive development and sexual ontogeny of spangled emperor Lethrinus nebulosus populations in the Ningaloo Marine Park (NMP) were investigated to obtain an improved understanding of its evolved reproductive strategy and data for fisheries management. Evidence derived from (1) analyses of histological data and sampled sex ratios with size and age, (2) the identification of residual previtellogenic oocytes in immature and mature testes sampled during the spawning season and (3) observed changes in testis internal structure with increasing fish size and age demonstrated a non-functional protogynous hermaphroditic strategy (or functional gonochorism). All of the smallest and youngest fish sampled were female until they either changed sex to male at a mean total length (L_T) of 277.5 mm and age of 2.3 years, or remained female and matured at a larger mean L_T (392.1 mm) and older age (3.5 years). Gonad masses were similar for males and females over the size range sampled and throughout long reproductive lives (up to a maximum estimated age of c. 31 years), which was another correlate of functional gonochorism. That the mean L_T at sex change and female maturity were below the current minimum legal size (MLS) limit (410 mm) demonstrates that the current MLS limit is effective in preventing recreational fishers in the NMP from retaining at least half of the juvenile males and females in their landed catches.

  2. 77 FR 60671 - Notice of Intent To Request Revision and Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-04

    ... approved information collection, the List Sampling Frame Surveys. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length... Agriculture, (202) 720-4333. SUPPLEMENTARY INFORMATION: Title: List Sampling Frame Surveys. OMB Control Number...

  3. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
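
    For readers unfamiliar with design effects (DEs), the sketch below contrasts the standard equal-cluster-size DE with the unequal-cluster-size adjustment proposed for parallel CRTs (Eldridge-style). It illustrates the kind of inflation the study evaluated; it is not necessarily either of the paper's two candidate adjustments.

    ```python
    # Illustrative design effects for unequal cluster sizes, using the
    # adjustment proposed for parallel CRTs; shown only as an example of
    # the inflation evaluated in the paper.
    import numpy as np

    def deff_equal(m_bar: float, icc: float) -> float:
        """Standard design effect with equal cluster sizes."""
        return 1 + (m_bar - 1) * icc

    def deff_unequal(sizes: np.ndarray, icc: float) -> float:
        """Design effect allowing for variation in cluster size."""
        m_bar = sizes.mean()
        cv = sizes.std(ddof=1) / m_bar      # coefficient of variation
        return 1 + ((cv**2 + 1) * m_bar - 1) * icc

    sizes = np.array([10, 20, 30, 60, 80])   # hypothetical cluster sizes
    print(deff_equal(sizes.mean(), icc=0.05))   # 2.95
    print(deff_unequal(sizes, icc=0.05))        # larger, reflecting imbalance
    ```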

  4. Relation of sortable silt grain-size to deep-sea current speeds: Calibration of the 'Mud Current Meter'

    NASA Astrophysics Data System (ADS)

    McCave, I. N.; Thornalley, D. J. R.; Hall, I. R.

    2017-09-01

    Fine grain-size parameters have been used to infer palaeoflow speeds of near-bottom currents in the deep sea. The basic idea stems from observations of varying sediment size parameters on a continental margin with a gradient from slower flow speeds at shallower depths to faster speeds at greater depths. In the deep sea, size-sorting occurs during deposition after benthic storm resuspension events. At flow speeds below 10-15 cm s-1, mean grain-size in the terrigenous non-cohesive 'sortable silt' range (denoted SS-bar, the mean of the 10-63 μm fraction) is controlled by selective deposition, whereas above that range removal of finer material by winnowing is also argued to play a role. A calibration of the SS-bar grain-size flow speed proxy, based on sediment samples taken adjacent to sites of long-term current meters set within 100 m of the sea bed for more than a year, is presented here. Grain-size has been measured by either Sedigraph or Coulter Counter, in some cases both, between which there is an excellent correlation for SS-bar (r = 0.96). Size-speed data indicate calibration relationships with an overall sensitivity of 1.36 ± 0.19 cm s-1/μm. A calibration line comprising 12 points, including 9 from the Iceland overflow region, is well defined, but at least two other smaller groups (Weddell/Scotia Sea and NW Atlantic continental rise/Rockall Trough) are fitted by sub-parallel lines with a smaller constant. This suggests a possible influence of the calibre of material supplied to the site of deposition (not the initial source supply) which, if depleted in very coarse silt (31-63 μm), would limit SS-bar to smaller values for a given speed than with a broader size-spectrum supply. Local calibrations, or a core-top grain-size and local flow speed, are thus necessary to infer absolute speeds from grain-size. The trend of the calibrations diverges markedly from the slope of experimental critical erosion and deposition flow speeds versus grain-size, making it unlikely that SS-bar (or any deposit size, for that matter) is simply predicted by the deposition threshold. A more probable control is the rate of deposition of the different size fractions under changing flows over several tens of years (the typical averaging period of a centimetre of deposited sediment). This suggestion is supported by a simple depositional model for which the deposited SS-bar is calculated from measured currents with a size-varying depositional threshold. More surficial sediment samples taken near long-term current meter sites are needed to make calibrations more robust and explore regional differences.
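
    The core of such a calibration is an ordinary linear fit of measured flow speed against SS-bar, inverted to infer palaeoflow speed down-core. A minimal sketch with invented data points whose slope is set near the reported 1.36 cm s-1/μm sensitivity:

    ```python
    # Sketch of the calibration idea: linear fit of near-bottom flow speed
    # against mean sortable-silt size, then inversion for a core sample.
    # Data points are invented for illustration only.
    import numpy as np

    ss_bar = np.array([18.0, 20.5, 23.0, 26.5, 30.0])   # mean SS size (um)
    speed  = np.array([5.0,  8.5, 11.8, 16.6, 21.3])    # current speed (cm/s)

    slope, intercept = np.polyfit(ss_bar, speed, deg=1)
    print("sensitivity: %.2f cm/s per um" % slope)

    # Invert the calibration for a new down-core measurement.
    print("inferred speed: %.1f cm/s" % (slope * 24.0 + intercept))
    ```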

  5. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using retest fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
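
    A minimal sketch of the study design, using synthetic features in place of resting-state functional connectivity and scikit-learn implementations of five of the six algorithms (relevance vector regression has no scikit-learn implementation and is omitted):

    ```python
    # Sketch: compare cross-validated accuracy of several linear ML
    # regression algorithms as a function of sample size, on synthetic data.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
    from sklearn.svm import LinearSVR
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=700, n_features=300, noise=20.0,
                           random_state=0)
    models = {"OLS": LinearRegression(), "LASSO": Lasso(alpha=1.0),
              "ridge": Ridge(alpha=1.0), "elastic-net": ElasticNet(alpha=1.0),
              "LSVR": LinearSVR(max_iter=10000)}

    rng = np.random.default_rng(0)
    for n in (50, 100, 200, 400, 700):          # nested sub-samples
        idx = rng.choice(700, size=n, replace=False)
        scores = {name: cross_val_score(m, X[idx], y[idx], cv=5).mean()
                  for name, m in models.items()}
        print(n, {k: round(v, 2) for k, v in scores.items()})
    ```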

  6. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as mere mathematical devices for analyzing repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in theoretical formulations of personalized medicine, because they have parameters that represent not only average patients but also individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to present the statistical modeling of data from bioequivalence studies in a way that highlights this modern view, with special emphasis on power analyses and sample-size computations.

  7. Self-objectification and disordered eating: A meta-analysis.

    PubMed

    Schaefer, Lauren M; Thompson, J Kevin

    2018-06-01

    Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research. © 2018 Wiley Periodicals, Inc.
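
    The pooled correlation behind such a result is typically obtained by Fisher z-transforming each study's r and pooling with random-effects (DerSimonian-Laird) weights. A minimal sketch with invented study data:

    ```python
    # Sketch of the meta-analytic machinery behind a pooled correlation:
    # Fisher z-transform per-study r, pool with DerSimonian-Laird
    # random-effects weights, back-transform. Study data are invented.
    import numpy as np

    r = np.array([0.45, 0.31, 0.52, 0.38, 0.29])   # per-study correlations
    n = np.array([120,  340,   85,  210,  150])    # per-study sample sizes

    z = np.arctanh(r)                 # Fisher z-transform
    v = 1.0 / (n - 3)                 # sampling variance of z
    w = 1.0 / v

    # DerSimonian-Laird estimate of between-study variance tau^2.
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(r) - 1)) / C)

    w_star = 1.0 / (v + tau2)         # random-effects weights
    z_re = np.sum(w_star * z) / np.sum(w_star)
    print("pooled r = %.2f" % np.tanh(z_re))
    ```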

  8. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-silicon (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes using the TSMC 0.25-µm standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated with 550-nm light at constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results for the mean and variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 msec. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a dark current about 200 fA lower than that of a pixel with four rectangular corners for our pixel size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.
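
    The mean-variance method mentioned above rests on shot-noise statistics: for a Poisson-limited pixel, output variance grows linearly with output mean, and the slope is the reciprocal of the conversion gain. A minimal sketch with simulated photo-electrons in place of real pixel data:

    ```python
    # Sketch of the mean-variance (photon transfer) method for estimating
    # conversion gain: var(DN) = mean(DN) / gain for shot-noise-limited
    # pixels. A Poisson photo-electron signal stands in for real pixel data.
    import numpy as np

    true_gain = 4.0                        # electrons per digital number (e-/DN)
    rng = np.random.default_rng(1)

    means, variances = [], []
    for electrons in (200, 500, 1000, 2000, 4000):   # illumination levels
        samples_dn = rng.poisson(electrons, size=300) / true_gain
        means.append(samples_dn.mean())
        variances.append(samples_dn.var(ddof=1))

    slope = np.polyfit(means, variances, 1)[0]       # = 1/gain
    print("estimated conversion gain: %.2f e-/DN" % (1.0 / slope))
    ```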

  9. Electrical conductivity and magnetic field dependent current-voltage characteristics of nanocrystalline nickel ferrite

    NASA Astrophysics Data System (ADS)

    Ghosh, P.; Bhowmik, R. N.; Das, M. R.; Mitra, P.

    2017-04-01

    We have studied the grain size dependent electrical conductivity, dielectric relaxation and magnetic field dependent current-voltage (I-V) characteristics of nickel ferrite (NiFe2O4). The material was synthesized by a sol-gel self-combustion technique, followed by ball milling at room temperature in air to control the grain size. The material was characterized using X-ray diffraction (refined with MAUD software) and transmission electron microscopy. Impedance spectroscopy and I-V characteristics in the presence of variable magnetic fields confirmed the increase of resistivity for the fine powdered samples (grain size 5.17±0.6 nm) that resulted from ball milling of the chemically routed sample. The activation energy of the material for the electrical charge hopping process increased with decreasing grain size upon mechanical milling of the chemically routed sample. The I-V curves showed several highly non-linear and irreversible electrical features, e.g., I-V loops and bi-stable electronic states (low resistance state, LRS, and high resistance state, HRS) on cycling the direction of the electrical bias voltage during I-V measurement. The dc resistance in the HRS at 20 V in a 10 kOe magnetic field increased from ∼3.4876×10^4 Ω for the chemically routed (unmilled) sample to ∼3.4152×10^5 Ω for the 10 h milled sample. The samples exhibited an unusual negative differential resistance (NDR) effect that gradually decreased on decreasing the grain size of the material. The magneto-resistance of the samples at room temperature was found to be substantially large (∼25-65%). The control of electrical charge transport properties under magnetic field, as observed in the present ferrimagnetic material, indicates magneto-electric coupling in the material, and the results could be useful in spintronics applications.

  10. A LDR-PCR approach for multiplex polymorphisms genotyping of severely degraded DNA with fragment sizes <100 bp.

    PubMed

    Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng

    2009-11-01

    Reducing amplicon sizes has become a major strategy for analyzing degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat-polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. In this study, we present a multiplex typing method that couples ligase detection reaction with PCR that can be used to identify single nucleotide polymorphisms and small-scale insertion/deletions in a sample of severely fragmented DNA. This method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. In this study, four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded sample of DNA with fragment sizes <100 bp. Our results showed clear allelic discrimination of single or multiple loci, suggesting that this method might aid in the analysis of extremely degraded samples in which allelic drop out of larger fragments is observed.

  11. Comparison of free flux flow in two single crystals of V3Si with slightly different pinning strengths

    NASA Astrophysics Data System (ADS)

    Gafarov, Ozarfar; Gapud, Albert A.; Moraes, Sunhee; Thompson, James R.; Christen, David K.; Reyes, Arneil P.

    2011-03-01

    Results of recent measurements on two very clean, single-crystal samples of the A15 superconductor V3Si are presented. Magnetization and transport data confirm the ``clean'' quality of both samples, as manifested by: (i) high residual resistivity ratio, (ii) low critical current densities, and (iii) a ``peak'' effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted in the slightly ``dirtier'' sample, which also has higher critical current density Jc(H). High-current Lorentz forces are applied on mixed-state vortices in order to induce the highly ordered free flux flow (FFF) phase, using the same methods as in previous work. A traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of flux flow resistivity ρf(H), presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes core size into account, and predicts a deviation from BS. In this study, ρf(H) is confirmed to be consistent with predictions of KZ, as will be discussed. Funded by Research Corporation and the National Science Foundation.

  12. Free flux flow in two single crystals of V3Si with differing pinning strengths

    NASA Astrophysics Data System (ADS)

    Gafarov, O.; Gapud, A. A.; Moraes, S.; Thompson, J. R.; Christen, D. K.; Reyes, A. P.

    2011-10-01

    Results of measurements on two very clean, single-crystal samples of the A15 superconductor V3Si are presented. Magnetization and transport data have confirmed the ``clean'' quality of both samples, as manifested by: (i) high residual electrical resistivity ratio, (ii) very low critical current densities Jc, and (iii) a ``peak'' effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted down for the slightly ``dirtier'' sample, which consequently also has higher critical current density Jc(H). Large Lorentz forces are applied on mixed-state vortices via large currents, in order to induce the highly ordered free flux flow (FFF) phase, using experimental methods developed previously. The traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of flux flow resistivity ρf(H) ∼ H/Hc2, presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes into account the effects of magnetic field on core size, and predicts a clear deviation from the linear BS dependence. In this study, ρf(H) is confirmed to be consistent with predictions of KZ.

  13. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce the seizure burden in a neonate. Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913
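
    The power calculations underlying such sample size comparisons follow the usual pattern: fix alpha and power, and solve for n given the standardized effect of the chosen outcome measure. A minimal sketch with illustrative effect sizes, not the paper's values:

    ```python
    # Sketch of a two-sample power calculation: required n per arm for a
    # range of standardized effects, showing how a more sensitive outcome
    # measure shrinks the trial. Effect sizes are illustrative assumptions.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for d in (0.2, 0.5, 0.8):   # standardized effect of the outcome measure
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print("d = %.1f -> n per group = %d" % (d, round(n)))
    ```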

  14. Increasing efficiency of preclinical research by group sequential designs

    PubMed Central

    Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich

    2017-01-01

    Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
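
    A minimal sketch of the mechanism: analyse half the animals at an interim look, stop early when the efficacy boundary is crossed, and tally the units consumed. The Pocock-style two-look boundary (nominal alpha of 0.0294 per look) is one standard choice, not necessarily the paper's configuration:

    ```python
    # Sketch of a two-stage group sequential design with early stopping
    # for efficacy, tracking the average number of experimental units used.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    d, n_per_group, alpha_look = 1.0, 18, 0.0294   # Pocock two-look boundary
    used = []

    for _ in range(5000):
        a = rng.normal(d, 1, n_per_group)       # treated group, effect size d
        b = rng.normal(0, 1, n_per_group)       # control group
        half = n_per_group // 2
        if ttest_ind(a[:half], b[:half]).pvalue < alpha_look:
            used.append(2 * half)               # stopped at the interim look
        else:
            used.append(2 * n_per_group)        # continued to the final look

    print("mean units used: %.1f of %d" % (np.mean(used), 2 * n_per_group))
    ```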

  15. Sample size requirements for indirect association studies of gene-environment interactions (G x E).

    PubMed

    Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny

    2008-04-01

    Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
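
    A simulation-based version of such a power algorithm is straightforward to assemble: generate genotype, exposure and disease status under an assumed logistic model, then count how often the interaction term is detected. All parameter values below are illustrative assumptions:

    ```python
    # Sketch of a simulation-based power estimate for a gene-environment
    # interaction term in a logistic disease model. Allele frequency,
    # exposure prevalence and effect sizes are arbitrary assumptions.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    def gxe_power(n, maf=0.2, p_exp=0.3, b_g=0.1, b_e=0.3, b_gxe=0.4,
                  n_sims=500, alpha=0.05):
        hits = 0
        for _ in range(n_sims):
            g = rng.binomial(2, maf, n)              # genotype (allele count)
            e = rng.binomial(1, p_exp, n)            # environmental exposure
            logit = -1.0 + b_g * g + b_e * e + b_gxe * g * e
            y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
            X = sm.add_constant(np.column_stack([g, e, g * e]))
            fit = sm.Logit(y, X).fit(disp=0)
            hits += fit.pvalues[-1] < alpha          # test the G x E term
        return hits / n_sims

    print(gxe_power(n=2000))
    ```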

  16. Investigating the effect of sputtering conditions on the physical properties of aluminum thin film and the resulting alumina template

    NASA Astrophysics Data System (ADS)

    Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein

    2018-06-01

    To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with the substrate temperature at 25, 75 or 125 °C. All samples were anodized for 120 s in 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation of the size distribution for samples deposited at room temperature at different rates is roughly 2 nm in both thin film and porous template form, but rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.

  17. Improvements to sample processing and measurement to enable more widespread environmental application of tritium.

    PubMed

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.

  18. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE PAGES

    Moran, James; Alexander, Thomas; Aalseth, Craig; ...

    2017-01-26

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.

  19. Drawing a representative sample from the NCSS soil database: Building blocks for the national wind erosion network

    USDA-ARS?s Scientific Manuscript database

    Developing national wind erosion models for the continental United States requires a comprehensive spatial representation of continuous soil particle size distributions (PSD) for model input. While the current coverage of soil survey is nearly complete, the most detailed particle size classes have c...

  20. Particle size distributions of metal and non-metal elements in an urban near-highway environment

    EPA Science Inventory

    Determination of the size-resolved elemental composition of near-highway particulate matter (PM) is important due to the health and environmental risks it poses. In the current study, twelve 24 h PM samples were collected (in July-August 2006) using a low-pressure impactor positi...

  1. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering small sample sizes are common, especially when a device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there are no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating the assay process using ACEK.

  2. CIDR

    Science.gov Websites

    studies. Investigators must supply positive and negative controls. Current pricing for CIDR Program studies is for a minimum study size of 90 samples, increasing in multiples of 90; please inquire. The assay is included for CIDR Program studies. FFPE samples are supported for MethylationEPIC.

  3. The Contribution of Expanding Portion Sizes to the US Obesity Epidemic

    PubMed Central

    Young, Lisa R.; Nestle, Marion

    2002-01-01

    Objectives. Because larger food portions could be contributing to the increasing prevalence of overweight and obesity, this study was designed to weigh samples of marketplace foods, identify historical changes in the sizes of those foods, and compare current portions with federal standards. Methods. We obtained information about current portions from manufacturers or from direct weighing; we obtained information about past portions from manufacturers or contemporary publications. Results. Marketplace food portions have increased in size and now exceed federal standards. Portion sizes began to grow in the 1970s, rose sharply in the 1980s, and have continued in parallel with increasing body weights. Conclusions. Because energy content increases with portion size, educational and other public health efforts to address obesity should focus on the need for people to consume smaller portions. PMID:11818300

  4. Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.

    PubMed

    Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke

    2015-12-01

    We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model-based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. © 2015, Human Factors and Ergonomics Society.
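
    The cluster-analysis step can be sketched with standard tools: standardize the hand dimensions and partition them with k-means, reading each cluster as a candidate size. The data below are synthetic stand-ins for the 14 measured dimensions, with k = 7 mirroring the seven-size system:

    ```python
    # Sketch of clustering hand dimensions into candidate glove sizes.
    # Synthetic data stand in for the 14 measured dimensions of the
    # 951 participants (863 men, 88 women).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    hands = rng.normal(loc=180, scale=12, size=(951, 14))

    X = StandardScaler().fit_transform(hands)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)

    for k in range(7):
        members = hands[km.labels_ == k]
        # Column 0 stands in for hand length in this synthetic example.
        print("size %d: n = %3d, mean hand length = %.1f mm"
              % (k, len(members), members[:, 0].mean()))
    ```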

  5. Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples

    NASA Astrophysics Data System (ADS)

    Anand Kumar, A.; Jaison, J.; Prabakaran, K.; Nagarajan, R.; Chan, Y. S.

    2016-03-01

    In this study, the particle size distribution and zeta potential of suspended particles in municipal water and in surface water of Curtin Sarawak's lake were compared; the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles affects water quality as well as suppressing aquatic photosynthetic systems. A new approach was carried out in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranges from 180 nm to 1345 nm and the zeta potential values range from -8.58 mV to -26.1 mV. Higher zeta potential magnitudes were observed in the surface water samples of Curtin Sarawak's lake than in the municipal water. The zeta potential values indicate that the suspended particles are stable and that the chance of agglomeration is lower in the lake water samples. Moreover, the effects of physico-chemical parameters on the zeta potential of the water samples are also discussed.

  6. Accounting for missing data in the estimation of contemporary genetic effective population size (N(e) ).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N(e) ) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N(e) ). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N(e) and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N(e) and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.

  7. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected with ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
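
    The statistical backbone is the classical result that, for a stationary series with variance sigma^2 and integral time scale T_I, the variance of a T-second average is approximately 2 * T_I * sigma^2 / T once T greatly exceeds T_I. A minimal sketch with illustrative values, not the paper's field data:

    ```python
    # Sketch of the variance of a time-averaged velocity as a function of
    # exposure time, given the integral time scale of the sampled flow
    # field. Values are arbitrary illustrations.
    import numpy as np

    sigma2 = 0.01      # velocity variance, (m/s)^2
    T_I = 2.0          # integral time scale of the sampled flow field, s

    for T in (30, 60, 300, 600):                  # candidate exposure times, s
        var_mean = 2 * T_I * sigma2 / T
        print("T = %4d s -> std of mean = %.4f m/s" % (T, np.sqrt(var_mean)))
    ```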

  8. A 1 kA-class cryogen-free critical current characterization system for superconducting coated conductors.

    PubMed

    Strickland, N M; Hoffmann, C; Wimbush, S C

    2014-11-01

    A cryogenic electrical transport measurement system is described that is particularly designed to meet the requirements for routine and effective characterization of commercial second generation high-temperature superconducting (HTS) wires in the form of coated conductors based on YBa2Cu3O7. Specific design parameters include a base temperature of 20 K, an applied magnetic field capability of 8 T (provided by a HTS split-coil magnet), and a measurement current capacity approaching 1 kA. The system accommodates samples up to 12 mm in width (the widest conductor size presently commercially available) and 40 mm long, although this is not a limiting size. The sample is able to be rotated freely with respect to the magnetic field direction about an axis parallel to the current flow, producing field angle variations in the standard maximum Lorentz force configuration. The system is completely free of liquid cryogens for both sample cooling and magnet cool-down and operation. Software enables the system to conduct a full characterization of the temperature, magnetic field, and field angle dependence of the critical current of a sample without any user interaction. The system has successfully been used to measure a wide range of experimental and commercially-available superconducting wire samples sourced from different manufacturers across the full range of operating conditions. The system encapsulates significant advances in HTS magnet design and efficient cryogen-free cooling technologies together with the capability for routine and automated high-current electrical transport measurements at cryogenic temperatures. It will be of interest to both research scientists investigating superconductor behavior and commercial wire manufacturers seeking to accurately characterize the performance of their product under all desired operating conditions.

  9. An Integrated Tool for System Analysis of Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.

    2012-01-01

    The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.

  10. Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective

    PubMed Central

    Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke

    2015-01-01

    Objective We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Background Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. Method An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. Results The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. Conclusion This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. Application The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model–based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. PMID:26169309

  11. Concurrent measurements of size-segregated particulate sulfate, nitrate and ammonium using quartz fiber filters, glass fiber filters and cellulose membranes

    NASA Astrophysics Data System (ADS)

    Tian, Shili; Pan, Yuepeng; Wang, Jian; Wang, Yuesi

    2016-11-01

    Current science and policy requirements have focused attention on the need to expand and improve particulate matter (PM) sampling methods. To explore how sampling filter type affects artifacts in PM composition measurements, size-resolved particulate SO42-, NO3- and NH4+ (SNA) were measured on quartz fiber filters (QFF), glass fiber filters (GFF) and cellulose membranes (CM) concurrently in an urban area of Beijing on both clean and hazy days. The results showed that SNA concentrations in most of the size fractions exhibited the following patterns on the different filters: CM > QFF > GFF for NH4+; GFF > QFF > CM for SO42-; and GFF > CM > QFF for NO3-. The different patterns in coarse particles were mainly caused by filter acidity, and those in fine particles mainly by the hygroscopicity of the filters (especially in the size fraction of 0.65-2.1 μm). Filter acidity and hygroscopicity also shifted the peaks of the annual mean size distributions of SNA on QFF from 0.43-0.65 μm on clean days to 0.65-1.1 μm on hazy days. However, this size shift was not as distinct for samples measured with CM and GFF. In addition, relative humidity (RH) and pollution levels are important factors that can enhance particulate size mode shifts of SNA on clean and hazy days. Consequently, the annual mean size distributions of SNA had maxima at 0.65-1.1 μm for QFF samples and 0.43-0.65 μm for GFF and CM samples. Compared with NH4+ and SO42-, NO3- is more sensitive to RH and pollution levels; accordingly, the annual mean size distribution of NO3- exhibited a peak at 0.65-1.1 μm for CM samples instead of 0.43-0.65 μm. These methodological uncertainties should be considered when quantifying the concentrations and size distributions of SNA under different RH and haze conditions.

  12. Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.S.

    2000-08-23

    A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to {approximately}5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicates the analytical results are relatively self-consistent for the major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible due to the unavailability of data for diluted samples of tank 241-AN-103 whole tank composites. However, the analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determining how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.

  13. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
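
    The method-of-moments (Matheron) estimator at the centre of this comparison is short enough to write out: half the mean squared difference between pairs of values, binned by separation distance. A minimal sketch on synthetic, skewed throughfall-like data:

    ```python
    # Sketch of the method-of-moments (Matheron) empirical variogram:
    # gamma(h) = mean of squared pair differences / 2, binned by distance.
    # Sampling locations and throughfall values are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 50, size=(150, 2))          # 150 sampling locations
    z = rng.lognormal(mean=0, sigma=0.5, size=150)  # skewed throughfall values

    # All pairwise distances and squared differences.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)               # each pair counted once

    bins = np.arange(0, 30, 5)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d[iu] >= lo) & (d[iu] < hi)
        gamma = 0.5 * sq[iu][mask].mean()
        print("lag %2d-%2d m: gamma = %.3f (n pairs = %d)"
              % (lo, hi, gamma, mask.sum()))
    ```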

  14. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.

  15. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
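
    The simulation logic is compact: repeatedly fit maximum-likelihood logistic regressions at a low EPV and record coefficient bias, keeping a separate tally of separated data sets since their handling drives the results. A minimal sketch (Firth's correction itself is not available in statsmodels and is omitted):

    ```python
    # Sketch of a low-EPV logistic regression simulation: record bias in a
    # coefficient and count perfectly separated data sets separately.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tools.sm_exceptions import PerfectSeparationError

    rng = np.random.default_rng(6)
    true_beta, n, p = 0.5, 100, 5      # ~27 expected events -> EPV ~ 5

    estimates, separated = [], 0
    for _ in range(500):
        X = rng.normal(size=(n, p))
        prob = 1 / (1 + np.exp(-(-1.0 + true_beta * X[:, 0])))
        y = rng.binomial(1, prob)
        try:
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
            estimates.append(fit.params[1])
        except PerfectSeparationError:
            separated += 1

    print("mean estimate: %.2f (true %.2f), separated runs: %d"
          % (np.mean(estimates), true_beta, separated))
    ```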

  16. Particle size distributions of currently used pesticides in ambient air of an agricultural Mediterranean area

    NASA Astrophysics Data System (ADS)

    Coscollà, Clara; Muñoz, Amalia; Borrás, Esther; Vera, Teresa; Ródenas, Milagros; Yusà, Vicent

    2014-10-01

    This work presents the first data on the atmospheric particle size distribution of 16 pesticides currently used in Mediterranean agriculture. Particulate matter air samples were collected using a cascade impactor, distributed into four size fractions, at a rural site of the Valencia Region from July to September 2012 and from May to July 2013. A total of 16 pesticides were detected, including six fungicides, seven insecticides and three herbicides. The total concentrations in the particulate phase (TSP: Total Suspended Particulate) ranged from 3.5 to 383.1 pg m-3. Most of the pesticides (such as carbendazim, tebuconazole, chlorpyrifos-ethyl and chlorpyrifos-methyl) accumulated in the ultrafine-fine (<1 μm) and coarse (2.5-10 μm) particle size fractions. Others, like omethoate, dimethoate and malathion, were present only in the ultrafine-fine size fraction (<1 μm). Finally, diuron, diphenylamine and terbuthylazine-desethyl-2-OH also showed a bimodal distribution, but mainly in the coarse size fractions.

  17. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    USGS Publications Warehouse

    Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size can not be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) solves these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.
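
    For contrast with the Fourier-based approach, the simplest XRD size estimate is the single-number Scherrer equation; unlike MUDMASTER's Bertaut-Warren-Averbach analysis it recovers no size distribution or strain, but it shows the basic peak-width-to-size relation. A sketch with assumed peak parameters:

    ```python
    # Sketch of the Scherrer equation, the simplest XRD crystallite size
    # estimate: size = K * lambda / (beta * cos(theta)). This is a single
    # mean size, not the full size distribution MUDMASTER extracts.
    import numpy as np

    K = 0.89                 # shape factor (dimensionless), assumed
    wavelength = 1.5406      # Cu K-alpha, angstroms
    fwhm_deg = 0.45          # peak width after instrument correction, degrees
    two_theta_deg = 26.6     # peak position, degrees (assumed values)

    beta = np.radians(fwhm_deg)                    # width in radians
    theta = np.radians(two_theta_deg / 2.0)
    size_angstrom = K * wavelength / (beta * np.cos(theta))
    print("mean crystallite size: %.0f angstroms" % size_angstrom)
    ```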

  18. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variant discovery. This requires large sample sizes for statistical power and has raised questions about whether current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline due to computational cost and to assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples, and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000 and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low-frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very types of variants of interest. In 660 Alzheimer's disease cases with early onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate for increasingly large projects. The number and percentage of quality variants that pass quality filters but are missed by the one-pipeline approach increases rapidly with sample size.
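
    At its core, the multi-pipeline idea reduces to set operations over variant keys. A toy sketch (the calls are made up, not ADSP data):

    ```python
    # Toy illustration of combining call sets: variants keyed by
    # (chrom, pos, ref, alt); the union recovers calls any single
    # pipeline would miss.
    pipeline_a = {("1", 1001, "A", "G"), ("1", 1500, "C", "T")}
    pipeline_b = {("1", 1001, "A", "G"), ("2", 2200, "G", "A")}

    union = pipeline_a | pipeline_b
    rescued_by_b = union - pipeline_a      # missed by pipeline A alone
    print(f"total={len(union)}, rescued={len(rescued_by_b)} "
          f"({len(rescued_by_b) / len(union):.0%} of pass-QC calls)")
    ```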

  19. Imaging of zymogen granules in fully wet cells: evidence for restricted mechanism of granule growth.

    PubMed

    Hammel, Ilan; Anaby, Debbie

    2007-09-01

    The introduction of wet SEM imaging technology permits electron microscopy of wet samples. Samples are placed in sealed specimen capsules and are insulated from the vacuum in the SEM chamber by an impermeable, electron-transparent membrane. The complete insulation of the sample from the vacuum allows direct imaging of fully hydrated, whole-mount tissue. In the current work, we demonstrate direct inspection of thick pancreatic tissue slices (above 400 μm). When scanning the pancreatic surface, the boundaries of intracellular features are seen directly, so no unfolding is required to ascertain the actual particle size distribution from the sizes of the sections. This method enabled us to investigate the true granule size distribution and confirm early studies reporting improved conformity to a Poisson-like distribution, suggesting that homotypic granule growth results from a mechanism that favors the addition of a single unit granule to mature granules.

  20. Bayesian Power Prior Analysis and Its Application to Operational Risk and Rasch Model

    ERIC Educational Resources Information Center

    Zhang, Honglian

    2010-01-01

    When sample size is small, informative priors can be valuable in increasing the precision of estimates. Pooling historical data and current data with equal weights under the assumption that both of them are from the same population may be misleading when heterogeneity exists between historical data and current data. This is particularly true when…
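
    For a binomial proportion with a conjugate beta prior, the power prior has a closed form: historical counts enter the posterior down-weighted by a factor a0 in [0, 1]. A minimal sketch with invented counts:

    ```python
    # Power-prior posterior for a binomial proportion with a Beta(a, b)
    # initial prior: historical data D0 = (x0, n0) are down-weighted by
    # a0; a0 = 1 pools fully, a0 = 0 discards the historical data.
    # Counts below are invented for illustration.
    def power_prior_posterior(x, n, x0, n0, a0, a=1.0, b=1.0):
        """Return parameters of the Beta posterior under the power prior."""
        alpha = a + x + a0 * x0
        beta = b + (n - x) + a0 * (n0 - x0)
        return alpha, beta

    current = (12, 30)        # 12 events in 30 current observations
    historical = (50, 100)    # 50 events in 100 historical observations
    for a0 in (0.0, 0.5, 1.0):
        alpha, beta = power_prior_posterior(*current, *historical, a0)
        print(f"a0={a0:.1f}: posterior mean = {alpha / (alpha + beta):.3f}")
    ```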

  1. 76 FR 6598 - Notice of Intent To Request Revision and Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-07

    ... Clearance for Survey Research Studies. Revision to burden hours may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...

  2. 77 FR 75120 - Notice of Intent To Request Revision and Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-19

    ... Clearance for Survey Research Studies. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...

  3. Wind-driven upwelling effects on cephalopod paralarvae: Octopus vulgaris and Loliginidae off the Galician coast (NE Atlantic)

    NASA Astrophysics Data System (ADS)

    Otero, Jaime; Álvarez-Salgado, X. Antón; González, Ángel F.; Souto, Carlos; Gilcoto, Miguel; Guerra, Ángel

    2016-02-01

    Circulation patterns of coastal upwelling areas may have major consequences for the abundance and cross-shelf transport of the larval stages of many species. Previous studies have provided evidence that larval distribution results from a combination of subtidal circulation, species-specific behaviour and larval sources. However, most of these works were conducted on organisms characterised by small-sized and abundant early life phases. Here, we studied the influence of the hydrography and circulation of the Ría de Vigo and adjacent shelf (NW Iberian upwelling system) on the paralarval abundance of two contrasting cephalopods, the benthic common octopus (Octopus vulgaris) and the pelagic squids (Loliginidae). We repeatedly sampled a cross-shore transect during the years 2003-2005 and used zero-inflated models to accommodate the scarcity and patchy distribution of cephalopod paralarvae. The probability of catching early stages of both cephalopods was higher at night. Octopus paralarvae were more abundant in the surface layer at night, whereas loliginids preferred the bottom layer regardless of the sampling time. Abundance of both cephalopods increased when shelf currents flowed polewards, water temperature was high and water column stability was low. The probability of observing an excess of zero catches decreased during the year for octopus and at high current speed for loliginids. In addition, the circulation pattern conditioned the body size distribution of both paralarvae; while the average size of the captured octopuses increased (decreased) with poleward currents at daylight (nighttime), squids were smaller with poleward currents regardless of the sampling time. These results contribute to the understanding of the effects that the hydrography and subtidal circulation of a coastal upwelling system have on the fate of cephalopod early life stages.
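
    Zero-inflated count models of this kind are available in standard libraries; the sketch below (simulated data, and statsmodels rather than whatever software the authors used) fits a zero-inflated Poisson with a covariate in both the abundance and structural-zero parts:

    ```python
    # Sketch of a zero-inflated count model of the kind used for patchy
    # paralarval catches; data here are simulated, not the paper's.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(0)
    n = 300
    temp = rng.normal(size=n)                        # standardized covariate
    lam = np.exp(0.3 + 0.6 * temp)                   # abundance process
    p_zero = 1 / (1 + np.exp(0.5 + 0.8 * temp))      # structural-zero process
    counts = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

    X = sm.add_constant(temp)
    fit = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation="logit").fit(disp=0)
    print(fit.summary())
    ```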

  4. Nanopore Sequencing as a Rapidly Deployable Ebola Outbreak Tool.

    PubMed

    Hoenen, Thomas; Groseth, Allison; Rosenke, Kyle; Fischer, Robert J; Hoenen, Andreas; Judson, Seth D; Martellaro, Cynthia; Falzarano, Darryl; Marzi, Andrea; Squires, R Burke; Wollenberg, Kurt R; de Wit, Emmie; Prescott, Joseph; Safronetz, David; van Doremalen, Neeltje; Bushmaker, Trenton; Feldmann, Friederike; McNally, Kristin; Bolay, Fatorma K; Fields, Barry; Sealy, Tara; Rayfield, Mark; Nichol, Stuart T; Zoon, Kathryn C; Massaquoi, Moses; Munster, Vincent J; Feldmann, Heinz

    2016-02-01

    Rapid sequencing of RNA/DNA from pathogen samples obtained during disease outbreaks provides critical scientific and public health information. However, challenges exist for exporting samples to laboratories or establishing conventional sequencers in remote outbreak regions. We successfully used a novel, pocket-sized nanopore sequencer at a field diagnostic laboratory in Liberia during the current Ebola virus outbreak.

  5. The Relation of Economic Status to Subjective Well-Being in Developing Countries: A Meta-Analysis

    ERIC Educational Resources Information Center

    Howell, Ryan T.; Howell, Colleen J.

    2008-01-01

    The current research synthesis integrates the findings of 111 independent samples from 54 economically developing countries that examined the relation between economic status and subjective well-being (SWB). The average economic status-SWB effect size was strongest among low-income developing economies (r = 0.28) and for samples that were least…

  6. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  7. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply with decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  8. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
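
    For contrast with the log-linear approach, the conventional line-transect estimator pairs an effective strip width from a fitted detection function with the encounter count. A sketch assuming a half-normal detection function and simulated distances (not the paper's model):

    ```python
    # Conventional line-transect density estimate with a half-normal
    # detection function g(x) = exp(-x^2 / (2 sigma^2)). Distances simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    d = np.abs(rng.normal(0, 25.0, 120))    # perpendicular distances, meters

    sigma_hat = np.sqrt(np.mean(d**2))      # MLE of sigma for half-normal
    esw = sigma_hat * np.sqrt(np.pi / 2)    # effective strip half-width, m

    L = 10_000.0                            # total transect length, m
    D = len(d) / (2 * L * esw)              # animals per square meter
    print(f"ESW = {esw:.1f} m, density = {D * 1e6:.1f} per km^2")
    ```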

  9. Lot quality assurance sampling techniques in health surveys in developing countries: advantages and current constraints.

    PubMed

    Lanata, C F; Black, R E

    1991-01-01

    Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
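
    An LQAS decision rule is set by a sample size n and a decision value d, tuned so that both misclassification risks are acceptable. A sketch with the commonly cited 80%/50% coverage thresholds (illustrative, not taken from the article):

    ```python
    # LQAS decision rule sketch under the binomial approximation: accept the
    # lot (e.g. declare adequate coverage) if more than d of n sampled
    # subjects are "successes". Thresholds are illustrative.
    from scipy.stats import binom

    def lqas_errors(n, d, p_good=0.80, p_bad=0.50):
        alpha = binom.cdf(d, n, p_good)       # reject a good lot (80% coverage)
        beta = 1 - binom.cdf(d, n, p_bad)     # accept a bad lot (50% coverage)
        return alpha, beta

    for n, d in [(19, 12), (28, 18)]:
        a, b = lqas_errors(n, d)
        print(f"n={n}, d={d}: P(reject good)={a:.3f}, P(accept bad)={b:.3f}")
    ```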

  10. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit on the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions of submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of their utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
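
    The size estimate underlying both methods is Stokes' law, with the centrifugal acceleration ω²r standing in for gravity. A sketch with invented operating values:

    ```python
    # Stokes-law particle diameter from sedimentation velocity; in the
    # ultracentrifuge the gravitational g is replaced by omega^2 * r.
    # All values below are illustrative, not from the paper.
    import math

    def stokes_diameter(v, rho_p, rho_f, eta, accel):
        """Diameter (m) of a sphere settling at velocity v (m/s)."""
        return math.sqrt(18 * eta * v / ((rho_p - rho_f) * accel))

    rpm = 40_000
    omega = 2 * math.pi * rpm / 60
    r = 0.065                      # radial position, m
    v = 1.0e-6                     # observed sedimentation velocity, m/s
    d = stokes_diameter(v, rho_p=1100.0, rho_f=998.0, eta=1.0e-3,
                        accel=omega**2 * r)
    print(f"{d * 1e9:.0f} nm")
    ```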

  11. Bulk critical state and fundamental length scales of superconducting nanocrystalline Nb3Al in Nb-Al matrix

    NASA Astrophysics Data System (ADS)

    Mondal, Puspen; Manekar, Meghmalhar; Srivastava, A. K.; Roy, S. B.

    2009-07-01

    We present the results of magnetization measurements on an as-cast nanocrystalline Nb3Al superconductor embedded in a Nb-Al matrix. The typical grain size of Nb3Al ranges from about 2 to 8 nm, with the maximum number of grains at around 3.5 nm, as visualized using transmission electron microscopy. The isothermal magnetization hysteresis loops in the superconducting state can be reasonably fitted within the well-known Kim-Anderson critical-state model. Using the same fitting parameters, we calculate the variation in field with distance inside the sample and show the existence of a critical state over length scales much larger than the typical size of the superconducting grains. Our results indicate that a bulk critical current is possible in a system comprising nanoparticles. The nonsuperconducting Nb-Al matrix thus appears to play a major role in the bulk current flow through the sample. The superconducting coherence length ξ is estimated to be around 3 nm, which is comparable to the typical grain size. The penetration depth λ is estimated to be about 94 nm, which is much larger than the largest of the superconducting grains. Our results could be useful for tuning the current-carrying capability of conductors made of composite materials that involve superconducting nanoparticles.

  12. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks for importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark them against different schemes in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, under the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature and discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
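
    The effective sample size referred to in the conclusions is the standard diagnostic of importance-sampling weights. A minimal sketch with made-up weights:

    ```python
    # Effective sample size (ESS) of importance-sampling weights --
    # the efficiency measure that, per the abstract, should be weighed
    # against running time. Weights are made up.
    import numpy as np

    def effective_sample_size(w):
        w = np.asarray(w, dtype=float)
        return w.sum() ** 2 / (w ** 2).sum()

    weights = np.random.default_rng(3).lognormal(sigma=1.5, size=1000)
    print(f"ESS = {effective_sample_size(weights):.1f} of {len(weights)} draws")
    ```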

  13. Synthesis and characterization of porous silicon as hydroxyapatite host matrix of biomedical applications.

    PubMed

    Dussan, A; Bertel, S D; Melo, S F; Mesa, F

    2017-01-01

    In this work, porous-silicon samples were prepared by electrochemical etching of p-type (B-doped) silicon (Si) wafers. Hydrofluoric acid (HF)-ethanol (C2H5OH) [HF:Et] and hydrofluoric acid (HF)-dimethylformamide (DMF-C3H7NO) [HF:DMF] solution concentrations were varied between [1:2]-[1:3] and [1:7]-[1:9], respectively. The effects of synthesis parameters, such as current density, solution concentration and reaction time, on morphological properties were studied by scanning electron microscopy (SEM) and atomic force microscopy (AFM) measurements. Pore sizes varying from 20 nm to micrometers were obtained for long reaction times at [HF:Et] [1:2] concentrations, while pore sizes of the same order were observed for [HF:DMF] [1:7], but at shorter reaction times. Greater surface uniformity and pore distribution were obtained for a current density of around 8 mA/cm2 using solutions with DMF. A correlation between reflectance measurements and pore size is presented. The porous-silicon samples were used as substrates for hydroxyapatite growth by the sol-gel method. X-ray diffraction (XRD) and SEM were used to characterize the grown layers. The layer topography obtained on the PS samples was characterized by evidence of hydroxyapatite in the inter-pore regions and over the surface.

  14. Synthesis and characterization of porous silicon as hydroxyapatite host matrix of biomedical applications

    PubMed Central

    Dussan, A.; Bertel, S. D.; Melo, S. F.

    2017-01-01

    In this work, porous-silicon samples were prepared by electrochemical etching of p-type (B-doped) silicon (Si) wafers. Hydrofluoric acid (HF)-ethanol (C2H5OH) [HF:Et] and hydrofluoric acid (HF)-dimethylformamide (DMF-C3H7NO) [HF:DMF] solution concentrations were varied between [1:2]-[1:3] and [1:7]-[1:9], respectively. The effects of synthesis parameters, such as current density, solution concentration and reaction time, on morphological properties were studied by scanning electron microscopy (SEM) and atomic force microscopy (AFM) measurements. Pore sizes varying from 20 nm to micrometers were obtained for long reaction times at [HF:Et] [1:2] concentrations, while pore sizes of the same order were observed for [HF:DMF] [1:7], but at shorter reaction times. Greater surface uniformity and pore distribution were obtained for a current density of around 8 mA/cm2 using solutions with DMF. A correlation between reflectance measurements and pore size is presented. The porous-silicon samples were used as substrates for hydroxyapatite growth by the sol-gel method. X-ray diffraction (XRD) and SEM were used to characterize the grown layers. The layer topography obtained on the PS samples was characterized by evidence of hydroxyapatite in the inter-pore regions and over the surface. PMID:28291792

  15. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor, and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
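
    One plausible form of such a UCL95% is a one-sided Student-t upper limit on the mean; the report's exact formula may differ, and the concentrations below are invented:

    ```python
    # One-sided 95% upper confidence limit on a mean analyte concentration
    # from n sample results. Values are illustrative, not the report's.
    import numpy as np
    from scipy import stats

    conc = np.array([4.1, 3.8, 4.6, 4.0, 4.3, 3.9])   # six scrape samples
    n = len(conc)
    ucl95 = conc.mean() + stats.t.ppf(0.95, n - 1) * conc.std(ddof=1) / np.sqrt(n)
    print(f"mean = {conc.mean():.2f}, UCL95 = {ucl95:.2f}")
    ```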

  16. STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor, and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  17. Pesticides in the atmosphere: a comparison of gas-particle partitioning and particle size distribution of legacy and current-use pesticides

    NASA Astrophysics Data System (ADS)

    Degrendele, C.; Okonski, K.; Melymuk, L.; Landlová, L.; Kukučka, P.; Audy, O.; Kohoutek, J.; Čupr, P.; Klánová, J.

    2015-09-01

    This study presents a comparison of the seasonal variation, gas-particle partitioning and particle-phase size distribution of organochlorine pesticides (OCPs) and current-use pesticides (CUPs) in air. Two years (2012/2013) of weekly air samples were collected at a background site in the Czech Republic using a high-volume air sampler. To study the particle-phase size distribution, air samples were also collected at an urban and a rural site in the area of Brno, Czech Republic, using a cascade impactor separating atmospheric particulates into six size fractions. The timing and frequency of detection of CUPs were related to their legal status, usage amounts and environmental persistence, while OCPs were consistently detected throughout the year. Two different seasonal trends were noted: certain compounds had higher concentrations only during the growing season (April-September), while other compounds showed two peaks, the first in the growing season and the second in the plowing season (October-November). In general, gas-particle partitioning of pesticides was governed by physicochemical properties, with higher vapor pressure leading to higher gas-phase fractions, and associated seasonality in gas-particle partitioning was observed for nine pesticides. However, anomalous partitioning was observed for fenpropimorph and chlorpyrifos, suggesting the influence of current pesticide application on gas-particle distributions. Nine pesticides had their highest particle-phase concentrations on fine particles (< 0.95 μm) and four pesticides on coarser (> 1.5 μm) particles.

  18. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  19. Effect of nanosized Co0.5Ni0.5Fe2O4 on the transport critical current density of Bi1.6Pb0.4Sr2Ca2Cu3O10 superconductor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafiz, M.; Abd-Shukor, R.

    2014-09-03

    The effects of nano-sized Co0.5Ni0.5Fe2O4 addition on the superconducting and transport properties of Bi1.6Pb0.4Sr2Ca2Cu3O10 (Bi-2223) in bulk form have been investigated. The Bi-2223 superconductor was fabricated using a co-precipitation method, and 0.01-0.05 wt% of Co0.5Ni0.5Fe2O4 nanoparticles with an average size of 20 nm were added to the samples. The critical temperature (Tc) and critical current density (Jc) of the samples were measured using the four-point probe method, while the phase formation and microstructure of the samples were examined using X-ray diffraction and SEM, respectively. It was found that Jc of all samples with added Co0.5Ni0.5Fe2O4 was higher than that of the non-added sample, with the x = 0.01 wt% sample showing the highest Jc. This study showed that a small addition of nano-Co0.5Ni0.5Fe2O4 can effectively enhance the transport critical current density in the Bi-2223 superconductor.

  20. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A: Research Plan

    DTIC Science & Technology

    1983-05-01

    occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing...taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ...Balanced Half Sample Replications (BHSA). His analyses of simple cases have shown that this method is substantially more efficient than the

  1. Velocity profile, water-surface slope, and bed-material size for selected streams in Colorado

    USGS Publications Warehouse

    Marchand, J.P.; Jarrett, R.D.; Jones, L.L.

    1984-01-01

    Existing methods for determining the mean velocity in a vertical sampling section do not address the conditions present in high-gradient, shallow-depth streams common to mountainous regions such as Colorado. The report presents velocity-profile data that were collected for 11 streamflow-gaging stations in Colorado using both a standard Price type AA current meter and a prototype Price Model PAA current meter. Computational results are compiled that will enable mean velocities calculated from measurements by the two current meters to be compared with each other and with existing methods for determining mean velocity. Water-surface slope, bed-material size, and flow-characteristic data for the 11 sites studied also are presented. (USGS)

  2. Thermal conductivity of nanocrystalline silicon: importance of grain size and frequency-dependent mean free paths.

    PubMed

    Wang, Zhaojie; Alaniz, Joseph E; Jang, Wanyoung; Garay, Javier E; Dames, Chris

    2011-06-08

    The thermal conductivity reduction due to grain boundary scattering is widely interpreted using a scattering length assumed equal to the grain size and independent of the phonon frequency (the 'gray' approximation). To assess these assumptions and decouple the contributions of porosity and grain size, five samples of undoped nanocrystalline silicon have been measured, with average grain sizes ranging from 550 to 64 nm and porosities from 17% to less than 1%, at temperatures from 310 to 16 K. The samples were prepared using current-activated, pressure-assisted densification (CAPAD). At low temperature the thermal conductivities of all samples show a T² dependence, which cannot be explained by any traditional gray model. The measurements are explained over the entire temperature range by a new frequency-dependent model in which the mean free path for grain boundary scattering is inversely proportional to the phonon frequency, which is shown to be consistent with asymptotic analysis of atomistic simulations from the literature. In all cases the recommended boundary scattering length is smaller than the average grain size. These results should prove useful for the integration of nanocrystalline materials in devices such as advanced thermoelectrics.

  3. Current versus ideal skin tones and tanning behaviors in Caucasian college women.

    PubMed

    Hemrich, Ashley; Pawlow, Laura; Pomerantz, Andrew; Segrist, Dan

    2014-01-01

    To explore tanning behaviors and whether a discrepancy between current and ideal skin tones exists. The sample included 78 Caucasian women from a mid-sized midwestern university. Data were collected in spring 2012 via a paper questionnaire. Sixty-two percent of the sample regularly engaged in salon tanning at least once per week, with an average frequency of 2.5 visits per week. Thirteen percent endorsed regularly tanning 4 or more times per week, and 26% reported visiting a tanning bed more than once in a 24-hour period. Ninety-four percent wished their current skin tone was darker, and ideal tone was significantly darker than current tone. The data suggest that the young Caucasian women in this sample tend to be dissatisfied with their current skin tone to an extent that leads the majority of them to engage in risky, potentially cancer-causing behavior by either salon tanning or considering tanning in the future as time and finances become available.

  4. Use of Non-invasive Uterine Electromyography in the Diagnosis of Preterm Labour

    PubMed Central

    Lucovnik, M.; Novak-Antolic, Z.; Garfield, R.E.

    2012-01-01

    Predictive values of the methods currently used in clinics to diagnose preterm labour are low. This leads to missed opportunities to improve neonatal outcomes and, on the other hand, to unnecessary hospitalizations and treatments. In addition, research into new and potentially more effective preterm labour treatments is hindered by the inability to include only patients in true preterm labour in studies. Uterine electromyography (EMG) detects changes in cell excitability and coupling required for labour and has higher predictive values for preterm delivery than currently available methods. This methodology could also provide a better means to evaluate various therapeutic interventions for preterm labour. Our manuscript presents a review of uterine EMG studies examining the potential clinical value that this technology possesses over what is currently available to physicians. We also evaluated the impact that uterine EMG could have on the investigation of preterm labour treatments by calculating sample sizes for studies using EMG vs. current methods to enrol women. Besides helping clinicians to make safer and more cost-effective decisions when managing patients with preterm contractions, implementation of uterine EMG for the diagnosis of preterm labour would also greatly reduce the sample sizes required for studies of treatments. PMID:24753891
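
    The sample-size effect can be illustrated with the standard two-proportion formula: enrolling by a low-specificity test dilutes the treated arm with women not in true preterm labour, shrinking the detectable difference. A sketch with hypothetical event rates:

    ```python
    # Standard sample size per arm for comparing two proportions -- the kind
    # of calculation behind an EMG-vs-current-methods comparison. Rates are
    # hypothetical, not taken from the review.
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.8):
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
               + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return num / (p1 - p2) ** 2

    print(f"diluted effect:   n = {n_per_arm(0.30, 0.22):.0f} per arm")
    print(f"undiluted effect: n = {n_per_arm(0.30, 0.15):.0f} per arm")
    ```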

  5. Nanopore Sequencing as a Rapidly Deployable Ebola Outbreak Tool

    PubMed Central

    Groseth, Allison; Rosenke, Kyle; Fischer, Robert J.; Hoenen, Andreas; Judson, Seth D.; Martellaro, Cynthia; Falzarano, Darryl; Marzi, Andrea; Squires, R. Burke; Wollenberg, Kurt R.; de Wit, Emmie; Prescott, Joseph; Safronetz, David; van Doremalen, Neeltje; Bushmaker, Trenton; Feldmann, Friederike; McNally, Kristin; Bolay, Fatorma K.; Fields, Barry; Sealy, Tara; Rayfield, Mark; Nichol, Stuart T.; Zoon, Kathryn C.; Massaquoi, Moses; Munster, Vincent J.; Feldmann, Heinz

    2016-01-01

    Rapid sequencing of RNA/DNA from pathogen samples obtained during disease outbreaks provides critical scientific and public health information. However, challenges exist for exporting samples to laboratories or establishing conventional sequencers in remote outbreak regions. We successfully used a novel, pocket-sized nanopore sequencer at a field diagnostic laboratory in Liberia during the current Ebola virus outbreak. PMID:26812583

  6. Theory of Mind Development in Chinese Children: A Meta-Analysis of False-Belief Understanding across Cultures and Languages

    ERIC Educational Resources Information Center

    Liu, David; Wellman, Henry M.; Tardif, Twila; Sabbagh, Mark A.

    2008-01-01

    Theory of mind is claimed to develop universally among humans across cultures with vastly different folk psychologies. However, in the attempt to test and confirm a claim of universality, individual studies have been limited by small sample sizes, sample specificities, and an overwhelming focus on Anglo-European children. The current meta-analysis…

  7. A methodology to investigate the intrinsic effect of the pulsed electric current during the spark plasma sintering of electrically conductive powders

    PubMed Central

    Locci, Antonio Mario; Cincotti, Alberto; Todde, Sara; Orrù, Roberto; Cao, Giacomo

    2010-01-01

    A novel methodology is proposed for investigating the effect of the pulsed electric current during the spark plasma sintering (SPS) of electrically conductive powders without potential misinterpretation of experimental results. First, ensemble configurations (geometry, size and material of the powder sample, die, plunger and spacers) are identified where the electric current is forced to flow only through either the sample or the die, so that the sample is heated either through the Joule effect or by thermal conduction, respectively. These ensemble configurations are selected using a recently proposed mathematical model of an SPS apparatus, which, once suitably modified, makes it possible to carry out detailed electrical and thermal analysis. Next, SPS experiments are conducted using the ensemble configurations theoretically identified. Using aluminum powders as a case study, we find that the temporal profiles of sample shrinkage, which indicate densification behavior, as well as the final density of the sample are clearly different when the electric current flows only through the sample or through the die containing it, whereas the temperature cycle and mechanical load are the same in both cases. PMID:27877354

  8. Grain size statistics and depositional pattern of the Ecca Group sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

    NASA Astrophysics Data System (ADS)

    Baiyegunhi, Christopher; Liu, Kuiwu; Gwavava, Oswald

    2017-11-01

    Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions, and to discriminate between depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low-energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are lacustrine or deltaic deposits. The C-M plots indicate that the sediments were deposited mainly by suspension and saltation, and by graded suspension. Visher diagrams show that saltation is the major process of transportation, followed by suspension.
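
    The descriptive parameters quoted (sorting, skewness, kurtosis) are conventionally the Folk and Ward graphic measures computed from phi percentiles. A sketch with illustrative percentile values:

    ```python
    # Folk & Ward (1957) graphic grain-size statistics from phi percentiles --
    # the standard basis for labels such as "moderately well sorted",
    # "near-symmetrical" and "mesokurtic". Percentile values are invented.
    def folk_ward(p):
        """p maps percentile -> phi value (5, 16, 25, 50, 75, 84, 95 needed)."""
        mean = (p[16] + p[50] + p[84]) / 3
        sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
        skew = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
        kurt = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
        return mean, sorting, skew, kurt

    phi = {5: 1.9, 16: 2.3, 25: 2.5, 50: 2.9, 75: 3.3, 84: 3.5, 95: 3.9}
    m, s, sk, k = folk_ward(phi)
    print(f"mean={m:.2f} phi, sorting={s:.2f}, skew={sk:.2f}, kurtosis={k:.2f}")
    ```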

  9. ALCHEMY: a reliable method for automated SNP genotype calling for small batch sizes and highly homozygous populations

    PubMed Central

    Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.

    2010-01-01

    Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is that current methods for automated calling of genotypes are based on clustering approaches, which require a large number of samples to be analyzed simultaneously or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per-sample basis, allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes, with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420

  10. Single-arm phase II trial design under parametric cure models.

    PubMed

    Wu, Jianrong

    2015-01-01

    Current practice in designing single-arm phase II survival trials is largely limited to the exponential model. Trial design under the exponential model may not be appropriate when a proportion of patients are cured. There is no literature available on designing single-arm phase II trials under a parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

  11. A new evaluation method of electron optical performance of high beam current probe forming systems.

    PubMed

    Fujita, Shin; Shimoyama, Hiroshi

    2005-10-01

    A new numerical simulation method is presented for the electron optical property analysis of probe forming systems with point cathode guns such as cold field emitters and Schottky emitters. It has long been recognized that the gun aberrations are important parameters to be considered, since the intrinsically high brightness of the point cathode gun is reduced by its spherical aberration. The simulation method can evaluate the 'threshold beam current I(th)' above which the apparent brightness starts to decrease from the intrinsic value. It is found that the threshold depends on the 'electron gun focal length' as well as on the spherical aberration of the gun. Formulas are presented to estimate the brightness reduction as a function of the beam current. The gun brightness reduction must be included when the probe property (the relation between the beam current I(b) and the probe size on the sample, d) of the entire electron optical column is evaluated. Formulas that explicitly take the gun aberrations into account are presented. It is shown that the probe property curve consists of three segments in the order of increasing beam current: (i) the constant probe size region, (ii) the brightness-limited region, where the probe size increases as d ∝ I(b)^(3/8), and (iii) the angular current intensity limited region, in which the beam size increases rapidly as d ∝ I(b)^(3/2). Some strategies are suggested to increase the threshold beam current and to extend the effective beam current range of the point cathode gun into the microampere regime.

  12. Particle size analysis on density, surface morphology and specific capacitance of carbon electrode from rubber wood sawdust

    NASA Astrophysics Data System (ADS)

    Taer, E.; Kurniasih, B.; Sari, F. P.; Zulkifli, Taslim, R.; Sugianto, Purnama, A.; Apriwandi, Susanti, Y.

    2018-02-01

    Particle size analysis of supercapacitor carbon electrodes made from rubber wood sawdust (SGKK) was carried out successfully. Electrode particle size was examined in relation to properties such as density, degree of crystallinity, surface morphology and specific capacitance. The variations in particle size were produced by different treatments in the grinding and sieving process. The sample particle sizes were 53-100 µm ground for 20 h (SA), 38-53 µm for 20 h (SB) and < 38 µm with grinding times of 40 h (SC) and 80 h (SD), respectively. All of the samples were activated by 0.4 M KOH solution. Carbon electrodes were carbonized at a temperature of 600 °C in an N2 gas environment, followed by CO2 gas activation at a temperature of 900 °C for 2 h. The densities for each particle size variation were 1.034 g cm-3, 0.849 g cm-3, 0.892 g cm-3 and 0.982 g cm-3, respectively. The morphological study showed the closest particle packing for the 38-53 µm (SB) size fraction. The electrochemical properties of the supercapacitor cells were investigated using electrochemical methods such as impedance spectroscopy and constant-current charge-discharge with a Solatron 1280 instrument. The electrochemical test results showed that SB samples, with a particle size of 38-53 µm, produced supercapacitor cells with the best capacitive performance.
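
    Gravimetric capacitance from a constant-current discharge is typically computed as below; note that the multiplying factor depends on the cell configuration (conventions vary), and all numbers are illustrative:

    ```python
    # Specific capacitance from a galvanostatic discharge curve. The factor
    # of 4 converts two-electrode cell capacitance to single-electrode
    # specific capacitance under one common convention.
    def specific_capacitance(i_amps, dt_s, dv_volts, m_total_g,
                             two_electrode=True):
        c_cell = i_amps * dt_s / dv_volts          # cell capacitance, farads
        return (4 if two_electrode else 1) * c_cell / m_total_g

    # Example: 10 mA discharge over 120 s across 0.8 V, 20 mg total carbon
    print(f"{specific_capacitance(0.01, 120, 0.8, 0.02):.0f} F/g")
    ```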

  13. Friction Stir Processing of Stainless Steel for Ascertaining Its Superlative Performance in Bioimplant Applications.

    PubMed

    Perumal, G; Ayyagari, A; Chakrabarti, A; Kannan, D; Pati, S; Grewal, H S; Mukherjee, S; Singh, S; Arora, H S

    2017-10-25

    Substrate-cell interactions for a bioimplant are driven by the substrate's surface characteristics. In addition, the performance of an implant and its resistance to degradation are primarily governed by its surface properties. A bioimplant typically degrades by wear and corrosion in the physiological environment, resulting in metallosis. Surface engineering strategies that limit degradation of implants and enhance their performance may reduce or eliminate the need for implant removal surgeries and the associated cost. In the current study, we tailored the surface properties of stainless steel using submerged friction stir processing (FSP), a severe plastic deformation technique. FSP resulted in significant microstructural refinement, from a 22 μm grain size for the as-received alloy to a 0.8 μm grain size for the processed sample, with an increase in hardness by nearly 1.5 times. The wear and corrosion behavior of the processed alloy was evaluated in simulated body fluid. The processed sample demonstrated remarkable improvement in both wear and corrosion resistance, which is explained by surface strengthening and the formation of a highly stable passive layer. The methylthiazol tetrazolium assay demonstrated that the processed sample is better at supporting cell attachment and proliferation, with minimal toxicity and hemolysis. The athrombogenic character of the as-received and processed samples was evaluated by fibrinogen adsorption and platelet adhesion via the enzyme-linked immunosorbent assay and the lactate dehydrogenase assay, respectively. The processed sample showed less platelet and fibrinogen adhesion than the as-received alloy, signifying its high thromboresistance. The current study suggests friction stir processing is a versatile toolbox for enhancing the performance and reliability of currently used bioimplant materials.

  14. Genetic structure and natal origins of immature hawksbill turtles (Eretmochelys imbricata) in Brazilian waters.

    PubMed

    Proietti, Maira C; Reisser, Julia; Marins, Luis Fernando; Rodriguez-Zarate, Clara; Marcovaldi, Maria A; Monteiro, Danielle S; Pattiaratchi, Charitha; Secchi, Eduardo R

    2014-01-01

    Understanding the connections between sea turtle populations is fundamental for their effective conservation. Brazil hosts important hawksbill feeding areas, but few studies have focused on how they connect with nesting populations in the Atlantic. Here, we (1) characterized mitochondrial DNA control region haplotypes of immature hawksbills feeding along the coast of Brazil (five areas ranging from equatorial to temperate latitudes, 157 skin samples), (2) analyzed genetic structure among Atlantic hawksbill feeding populations, and (3) inferred natal origins of hawksbills in Brazilian waters using genetic, oceanographic, and population size information. We report ten haplotypes for the sampled Brazilian sites, most of which were previously observed at other Atlantic feeding grounds and rookeries. Genetic profiles of Brazilian feeding areas were significantly different from those in other regions (Caribbean and Africa), and a significant structure was observed between Brazilian feeding grounds grouped into areas influenced by the South Equatorial/North Brazil Current and those influenced by the Brazil Current. Our genetic analysis estimates that the studied Brazilian feeding aggregations are mostly composed of animals originating from the domestic rookeries Bahia and Pipa, but some contributions from African and Caribbean rookeries were also observed. Oceanographic data corroborated the local origins, but showed higher connection with West Africa and none with the Caribbean. High correlation was observed between origins estimated through genetics/rookery size and oceanographic/rookery size data, demonstrating that ocean currents and population sizes influence haplotype distribution of Brazil's hawksbill populations. The information presented here highlights the importance of national conservation strategies and international cooperation for the recovery of endangered hawksbill turtle populations.

  15. Genetic Structure and Natal Origins of Immature Hawksbill Turtles (Eretmochelys imbricata) in Brazilian Waters

    PubMed Central

    Proietti, Maira C.; Reisser, Julia; Marins, Luis Fernando; Rodriguez-Zarate, Clara; Marcovaldi, Maria A.; Monteiro, Danielle S.; Pattiaratchi, Charitha; Secchi, Eduardo R.

    2014-01-01

    Understanding the connections between sea turtle populations is fundamental for their effective conservation. Brazil hosts important hawksbill feeding areas, but few studies have focused on how they connect with nesting populations in the Atlantic. Here, we (1) characterized mitochondrial DNA control region haplotypes of immature hawksbills feeding along the coast of Brazil (five areas ranging from equatorial to temperate latitudes, 157 skin samples), (2) analyzed genetic structure among Atlantic hawksbill feeding populations, and (3) inferred natal origins of hawksbills in Brazilian waters using genetic, oceanographic, and population size information. We report ten haplotypes for the sampled Brazilian sites, most of which were previously observed at other Atlantic feeding grounds and rookeries. Genetic profiles of Brazilian feeding areas were significantly different from those in other regions (Caribbean and Africa), and a significant structure was observed between Brazilian feeding grounds grouped into areas influenced by the South Equatorial/North Brazil Current and those influenced by the Brazil Current. Our genetic analysis estimates that the studied Brazilian feeding aggregations are mostly composed of animals originating from the domestic rookeries Bahia and Pipa, but some contributions from African and Caribbean rookeries were also observed. Oceanographic data corroborated the local origins, but showed higher connection with West Africa and none with the Caribbean. High correlation was observed between origins estimated through genetics/rookery size and oceanographic/rookery size data, demonstrating that ocean currents and population sizes influence haplotype distribution of Brazil's hawksbill populations. The information presented here highlights the importance of national conservation strategies and international cooperation for the recovery of endangered hawksbill turtle populations. PMID:24558419

  16. Computed Tomography to Estimate the Representative Elementary Area for Soil Porosity Measurements

    PubMed Central

    Borges, Jaqueline Aparecida Ribaski; Pires, Luiz Fernando; Belmont Pereira, André

    2012-01-01

    Computed tomography (CT) is a technique that provides images of different solid and porous materials. CT could be an ideal tool to study representative sizes of soil samples because of its noninvasive character. The scrutiny of such representative elementary sizes (RESs) has been the target of attention of many researchers in the soil physics field owing to the strong relationship between physical properties and the size of the soil sample. In the current work, data from gamma-ray CT were used to assess the RES for measurements of soil porosity (ϕ). For the statistical analysis, a study of the full width at half maximum (FWHM) of the fitted distribution of ϕ over areas of different size (1.2 to 1162.8 mm2) selected inside the tomographic images is proposed herein. The results point out that samples with a section area of at least 882.1 mm2 provided representative values of ϕ for the studied Brazilian tropical soil. PMID:22666133
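
    For a Gaussian fit, the FWHM follows directly from the standard deviation (FWHM = 2√(2 ln 2) σ ≈ 2.355 σ). A sketch with simulated porosity values standing in for image-derived data:

    ```python
    # FWHM of a Gaussian fit to porosity values; the data are simulated
    # stand-ins for values extracted from tomographic images.
    import numpy as np

    phi = np.random.default_rng(4).normal(loc=0.45, scale=0.03, size=500)
    sigma = phi.std(ddof=1)
    fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma      # ~2.355 * sigma
    print(f"sigma = {sigma:.4f}, FWHM = {fwhm:.4f}")
    ```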

  17. Chemical properties and particle-size distribution of 39 surface-mine spoils in southern West Virginia

    Treesearch

    William T. Plass; Willis G. Vogel

    1973-01-01

    A survey of 39 surface-mine sites in southern West Virginia showed that most of the spoils from current mining operations had a pH of 5.0 or higher. Soil-size material averaged 37 percent of the weight of the spoils sampled. A major problem for the establishment of vegetation was a deficiency of nitrogen and phosphorus. This can be corrected with additions of...

  18. Estimating the breeding population of long-billed curlew in the United States

    USGS Publications Warehouse

    Stanley, T.R.; Skagen, S.K.

    2007-01-01

    Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.

  19. Highly sensitive molecular diagnosis of prostate cancer using surplus material washed off from biopsy needles

    PubMed Central

    Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L

    2011-01-01

    Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027

  20. Epistemological Issues in Astronomy Education Research: How Big of a Sample is "Big Enough"?

    NASA Astrophysics Data System (ADS)

    Slater, Stephanie; Slater, T. F.; Souri, Z.

    2012-01-01

    As astronomy education research (AER) continues to evolve into a sophisticated enterprise, we must begin to grapple with defining our epistemological parameters. Moreover, as we attempt to make pragmatic use of our findings, we must make a concerted effort to communicate those parameters in a sensible way to the larger astronomical community. One area of much current discussion concerns the methodologies, and the resulting sample sizes, that should be considered appropriate for generating knowledge in the field. To address this question, we completed a meta-analysis of nearly 1,000 peer-reviewed studies published in top-tier professional journals. Data related to methodologies and sample sizes were collected from "hard science" and "human science" journals to compare the epistemological systems of these two bodies of knowledge. Working back in time from August 2011, the 100 most recent studies reported in each journal were used as a data source: Icarus, ApJ and AJ, NARST, IJSE and SciEd. In addition, data were collected from the 10 most recent AER dissertations, a set of articles determined by the science education community to be the most influential in the field, and the nearly 400 articles used as reference materials for the NRC's Taking Science to School. Analysis indicates these bodies of knowledge have a great deal in common: each relies on a wide variety of methodologies, and each builds its knowledge through studies that proceed from surprisingly small sample sizes. While both fields publish a small percentage of studies with large sample sizes, the vast majority of top-tier publications consist of rich studies of a small number of objects. We conclude that rigor in each field is determined not by a circumscription of methodologies and sample sizes, but by peer judgments that the methods and sample sizes are appropriate to the research question.

  1. Estimation of the Human Extrathoracic Deposition Fraction of Inhaled Particles Using a Polyurethane Foam Collection Substrate in an IOM Sampler.

    PubMed

    Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R

    2016-03-07

    Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.

  2. Estimation of the Human Extrathoracic Deposition Fraction of Inhaled Particles Using a Polyurethane Foam Collection Substrate in an IOM Sampler

    PubMed Central

    Sleeth, Darrah K.; Balthaser, Susan A.; Collingwood, Scott; Larson, Rodney R.

    2016-01-01

    Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET1) and the posterior nasal and oral passages (ET2). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm–44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device. PMID:26959046

  3. Experimental measurement of the plasma conductivity of Z93 and Z93P thermal control paint

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1993-01-01

    Two samples each of Z93 and Z93P thermal control paint were exposed to a simulated space environment in a plasma chamber. The samples were biased through a series of voltages ranging from -200 volts to +300 volts, and the electron and ion currents were measured. By comparing the currents to those of pure metal samples of the same size and shape, the conductivity of the samples was calculated. Measured conductivity was dependent on the bias potential in all cases. For Z93P, conductivity was approximately constant over much of the bias range and we find a value of 0.5 micro-mhos per square meter for both electron and ion current. For Z93, the dependence on bias was much more pronounced, but the conductivity was approximately one order of magnitude larger. In addition to presenting these results, this report documents all of the experimental data as well as the statistical analyses performed.

  4. A management system for evaluating the Virginia periodic motor vehicle inspection program.

    DOT National Transportation Integrated Search

    1977-01-01

    A system for management evaluation of Virginia's periodic motor vehicle inspection (PMVI) program was developed which is similar to that currently in use by the Virginia Department of State Police, except for changes in the sample size of inspection ...

  5. Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry

    2014-01-01

    This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, thermal protection system (TPS) sizing, and a web portal for user access. The report includes details of the M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant reductions in risk and development cost. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.

  6. Demagnetizing fields of crystallites and a method for measuring the thermodynamic fields of quasi-single-crystal and polycrystalline thin YBa₂Cu₃O₇₋ₓ disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rostami, Kh. R.

    The role of the demagnetizing fields of crystallites in HTSC samples is studied. An increase in the crystallite size is shown to suppress the intra- and intercrystalline critical currents of the sample in lower fields. The demagnetizing fields of crystallites are shown to be one of the main reasons the Bean model is invalid for HTSC samples. A method is proposed to measure the thermodynamic field of a superconductor; this method allows the first thermodynamic critical magnetic fields of the sample and its crystallites and 'subcrystallites' to be measured with high accuracy. The first thermodynamic critical magnetic fields are used to estimate the critical current density J_c of the sample, crystallites, and subcrystallites.

  7. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. Copyright © 2008 John Wiley & Sons, Ltd.
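
    The kind of enumeration the authors describe can be sketched directly. The Python sketch below (illustrative only; the grid search over the nuisance success probability is our simplification, not the paper's exact algorithm) computes the attained size of the two-sided FET for equal group sizes n:

        import numpy as np
        from scipy.stats import binom, fisher_exact

        def fet_size(n, alpha=0.05):
            """Attained size of the two-sided Fisher exact test with n per
            group, maximized over a grid of common success probabilities."""
            # Rejection region: all outcome pairs (x1, x2) with p-value <= alpha.
            reject = np.zeros((n + 1, n + 1))
            for x1 in range(n + 1):
                for x2 in range(n + 1):
                    _, p = fisher_exact([[x1, n - x1], [x2, n - x2]])
                    reject[x1, x2] = p <= alpha
            # Size = sup over p0 of P(reject | p0); scan a grid of p0 values.
            size = 0.0
            for p0 in np.linspace(0.01, 0.99, 99):
                pmf = binom.pmf(np.arange(n + 1), n, p0)
                size = max(size, float(pmf @ reject @ pmf))
            return size

    For moderate n the returned size is typically well below the nominal 0.05, which is precisely the conservativeness the paper quantifies and then corrects via α*.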

  8. Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.

    PubMed

    Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Fernandez-Pujals, Ana Maria; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Generation Scotland; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M

    2018-01-10

    Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex- or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlation with other traits, variance explained by MDD polygenic score, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts, Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype and Bipolar Disorder). Whilst stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results in the other MDD classifications suggest that phenotypic stratification by recurrence or sex is only weakly justified at currently available sample sizes. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.

  9. An Evaluation of Sharp Cut Cyclones for Sampling Diesel Particulate Matter Aerosol in the Presence of Respirable Dust

    PubMed Central

    Cauda, Emanuele; Sheehan, Maura; Gussman, Robert; Kenny, Lee; Volkwein, Jon

    2015-01-01

    Two prototype cyclones were the subjects of a comparative research campaign with a diesel particulate matter sampler (DPMS) that consists of a respirable cyclone combined with a downstream impactor. The DPMS is currently used in mining environments to separate dust from the diesel particulate matter and to avoid interferences in the analysis of integrated samples and direct-reading monitoring in occupational environments. The sampling characteristics of all three devices were compared using ammonium fluorescein, diesel, and coal dust aerosols. With solid spherical test aerosols at low particle loadings, the aerodynamic size-selection characteristics of all three devices were found to be similar, with 50% penetration efficiencies (d50) close to the design value of 0.8 µm, as required by the US Mine Safety and Health Administration for monitoring occupational exposure to diesel particulate matter in US mining operations. The prototype cyclones were shown to have ‘sharp cut’ size-selection characteristics that equaled or exceeded the sharpness of the DPMS. The penetration of diesel aerosols was optimal for all three samplers, while the results of the tests with coal dust led to the exclusion of one of the prototypes from subsequent testing. The sampling characteristics of the remaining prototype sharp cut cyclone (SCC) and the DPMS were tested with different loadings of coal dust. While the characteristics of the SCC remained constant, the deposited respirable coal dust particles altered the size-selection performance of the currently used sampler. This study demonstrates that the SCC performed better overall than the DPMS. PMID:25060240

  10. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
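
    A hedged companion to such formula-based planning is a direct simulation check of the attained power under ties. The sketch below (Python; the category probabilities and sample sizes are hypothetical) does this for ordered-categorical data, the setting in which ties arise by construction:

        import numpy as np
        from scipy.stats import mannwhitneyu

        def mww_power(n1, n2, p_ctrl, p_trt, n_sim=10000, alpha=0.05, seed=1):
            """Simulated power of the two-sided MWW test for ordered-categorical
            data with ties; p_ctrl/p_trt are category probabilities per arm."""
            rng = np.random.default_rng(seed)
            cats = np.arange(len(p_ctrl))
            hits = 0
            for _ in range(n_sim):
                x = rng.choice(cats, size=n1, p=p_ctrl)
                y = rng.choice(cats, size=n2, p=p_trt)
                _, p = mannwhitneyu(x, y, alternative="two-sided")
                hits += p <= alpha
            return hits / n_sim

        # e.g. check a candidate sample size from a planning formula:
        # mww_power(50, 50, [0.5, 0.3, 0.2], [0.3, 0.3, 0.4])

    Running such a check with the sample size returned by either formula is one way to reproduce the paper's comparison of attained level and power.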

  11. NASA GRC and MSFC Space-Plasma Arc Testing Procedures

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd

    2005-01-01

    Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has achieved our own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing-testing standardization. To be discussed are: 1. Neutral pressures, neutral gases, and vacuum chamber sizes. 2. Electron and ion densities, plasma uniformity, sample sizes, and Debye lengths. 3. Biasing samples versus self-generated voltages. Floating samples versus grounded. 4. Power supplies and current limits. Isolation of samples from power supplies during arcs. 5. Arc circuits. Capacitance during biased arc-threshold tests. Capacitance during sustained arcing and damage tests. Arc detection. Preventing sustained discharges during testing. 6. Real array or structure samples versus idealized samples. 7. Validity of LEO tests for GEO samples. 8. Extracting arc threshold information from arc rate versus voltage tests. 9. Snapover and current collection at positive sample bias. Glows at positive bias. Kapton pyrolysis. 10. Trigger arc thresholds. Sustained arc thresholds. Paschen discharge during sustained arcing. 11. Testing for Paschen discharge threshold. Testing for dielectric breakdown thresholds. Testing for tether arcing. 12. Testing in very dense plasmas (i.e., thruster plumes). 13. Arc mitigation strategies. Charging mitigation strategies. Models. 14. Analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.

  12. NASA GRC and MSFC Space-Plasma Arc Testing Procedures

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd

    2005-01-01

    Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has achieved our own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing testing standardization. To be discussed are: 1. Neutral pressures, neutral gases, and vacuum chamber sizes. 2. Electron and ion densities, plasma uniformity, sample sizes, and Debye lengths. 3. Biasing samples versus self-generated voltages. Floating samples versus grounded. 4. Power supplies and current limits. Isolation of samples from power supplies during arcs. Arc circuits. Capacitance during biased arc-threshold tests. Capacitance during sustained arcing and damage tests. Arc detection. Preventing sustained discharges during testing. 5. Real array or structure samples versus idealized samples. 6. Validity of LEO tests for GEO samples. 7. Extracting arc threshold information from arc rate versus voltage tests. 8. Snapover and current collection at positive sample bias. Glows at positive bias. Kapton pyrolysis. 9. Trigger arc thresholds. Sustained arc thresholds. Paschen discharge during sustained arcing. 10. Testing for Paschen discharge thresholds. Testing for dielectric breakdown thresholds. Testing for tether arcing. 11. Testing in very dense plasmas (i.e., thruster plumes). 12. Arc mitigation strategies. Charging mitigation strategies. Models. 13. Analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.

  13. Salmonella enteritidis surveillance by egg immunology: impact of the sampling scheme on the release of contaminated table eggs.

    PubMed

    Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie

    2011-08-01

    Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates of the transmission rate in flocks and the characteristics of an egg immunological test, we simulated outbreaks under various sampling schemes and under the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected and by the number of contaminated table eggs brought onto the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
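
    The trade-off between sample size and sampling interval can be mimicked with a toy simulation. The sketch below (Python) is illustrative only: the logistic prevalence curve, flock output, and the perfect-test assumption are ours, not the paper's fitted transmission model or test characteristics:

        import numpy as np

        def released_before_detection(sample_size, interval, threshold,
                                      weeks=52, eggs_per_week=5000, seed=0):
            """Toy within-flock outbreak: the fraction of contaminated eggs
            rises logistically after a random introduction week; sample_size
            eggs are tested every `interval` weeks, and the flock is detected
            once the cumulative number of positive eggs reaches `threshold`
            (cumulative sampling mirrors the flock-level specificity device).
            Returns the number of contaminated eggs released before detection."""
            rng = np.random.default_rng(seed)
            t0 = int(rng.integers(0, weeks))
            positives, released = 0, 0
            for week in range(weeks):
                prev = 0.0 if week < t0 else \
                    1.0 / (1.0 + np.exp(-(week - t0 - 8) / 2.0))
                if week % interval == 0:
                    positives += rng.binomial(sample_size, prev)
                    if positives >= threshold:
                        break            # detected; eggs withheld from market
                released += int(eggs_per_week * prev)
            return released

    Averaging this over many seeds for, say, (30 eggs, 15 weeks) versus (10 eggs, 5 weeks) reproduces the qualitative finding that shorter intervals mainly reduce released eggs while larger samples mainly raise detection.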

  14. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in the concentrations of aluminum and iron (1-74 and 1-199 ug/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized even when low-flow sample-collection techniques are used in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  15. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how or how many implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking; the design is a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that, when benchmarking implant performance, net failure estimated using 1-Kaplan-Meier is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
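
    Of the three methods compared, the simple z-test variant is the easiest to sketch. The Python below is illustrative only: the failure counts, benchmark, and margin are invented, and the paper's preferred estimator for censored data is 1-Kaplan-Meier rather than the crude proportion used here:

        import numpy as np
        from scipy.stats import norm

        def ni_ztest(failures, n, benchmark, margin, alpha=0.05):
            """One-sample non-inferiority z-test on a crude failure proportion.
            H0: true failure rate >= benchmark + margin (inferior)
            H1: true failure rate <  benchmark + margin (non-inferior)."""
            p_hat = failures / n
            p0 = benchmark + margin
            se = np.sqrt(p0 * (1 - p0) / n)
            z = (p_hat - p0) / se
            return z, norm.cdf(z)   # small p-value: evidence of non-inferiority

        # Example: 18 failures in 3200 procedures against a 0.5% benchmark
        # with a 0.5% margin (illustrative numbers only).
        z, p = ni_ztest(18, 3200, 0.005, 0.005)   # z ~ -2.5, p ~ 0.006

    The example uses roughly the 3200-procedure scale the paper identifies, which is what makes the test informative; at a few hundred procedures the same failure rate would not come close to rejecting inferiority.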

  16. NASA GRC and MSFC Space-Plasma Arc Testing Procedures

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd

    2007-01-01

    Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has achieved our own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing-testing standardization. Discussed herein are neutral gas conditions, plasma densities and uniformity, vacuum chamber sizes, sample sizes and Debye lengths, biasing samples versus self-generated voltages, floating samples versus grounded samples, test electrical conditions, arc detection, preventing sustained discharges during testing, real samples versus idealized samples, validity of LEO tests for GEO samples, extracting arc threshold information from arc rate versus voltage tests, snapover, current collection, and glows at positive sample bias, Kapton pyrolysis, thresholds for trigger arcs, sustained arcs, dielectric breakdown and Paschen discharge, tether arcing and testing in very dense plasmas (i.e. thruster plumes), arc mitigation strategies, charging mitigation strategies, models, and analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.

  17. The Kepler Mission: Search for Habitable Planets

    NASA Technical Reports Server (NTRS)

    Borucki, William; Likins, B.; DeVincenzi, Donald L. (Technical Monitor)

    1998-01-01

    Detecting extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The difficulties encountered with direct imaging of Earth-size planets from space are expected to be resolved in the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30-600 times less massive than Jupiter). This method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. Its capabilities and strengths are presented.

  18. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and is best fitted by the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positive skew), similar to the log-normal distribution observed across the Angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and not spatially autocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
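
    Fitting and comparing candidate distributions of genome size is straightforward with scipy. The sketch below (Python; surrogate data stand in for the 172 measured plants) contrasts a right-skewed Gumbel fit with a normal fit by log-likelihood and a Kolmogorov-Smirnov check:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        genome_sizes = rng.gumbel(loc=10.0, scale=0.25, size=172)  # surrogate

        # Fit candidate distributions and compare fits.
        for dist in (stats.gumbel_r, stats.norm):
            params = dist.fit(genome_sizes)
            ll = dist.logpdf(genome_sizes, *params).sum()
            ks = stats.kstest(genome_sizes, dist.cdf, args=params)
            print(dist.name, "logL = %.1f" % ll, "KS p = %.3f" % ks.pvalue)

    With genuinely positive-skewed data, gumbel_r should yield the higher log-likelihood, mirroring the paper's conclusion that the extreme-value distribution fits best.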

  19. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part II. Changes in Sampling Efficiency

    PubMed Central

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M.; Harper, Martin

    2015-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone by the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the DO cyclone. However, for three models of pumps producing 30%, 56%, and 70% pulsations, substantial changes were confirmed. The GK2.69 cyclone showed a similar pattern to that of the DO cyclone, i.e. no change in sampling efficiency for the Legacy producing 15% pulsation and a substantial change for the Elite12 producing 41% pulsation. Pulse shape did not cause any change in sampling efficiency when compared to the single sine wave. The findings suggest that 25% pulsation at the inlet of the cyclone as measured by this test can be acceptable for the respirable particle collection. If this test is used in place of that currently in European standards (EN 1232–1997 and EN 12919-1999) or is used in any International Organization for Standardization standard, then a 25% pulsation criterion could be adopted. This work suggests that a 10% criterion as currently specified in the European standards for testing may be overly restrictive and not able to be met by many pumps on the market. Further work is recommended to determine which criterion would be applicable to this test if it is to be retained in its current form. PMID:24064963

  20. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    PubMed

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone by the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the DO cyclone. However, for three models of pumps producing 30%, 56%, and 70% pulsations, substantial changes were confirmed. The GK2.69 cyclone showed a similar pattern to that of the DO cyclone, i.e. no change in sampling efficiency for the Legacy producing 15% pulsation and a substantial change for the Elite12 producing 41% pulsation. Pulse shape did not cause any change in sampling efficiency when compared to the single sine wave. The findings suggest that 25% pulsation at the inlet of the cyclone as measured by this test can be acceptable for the respirable particle collection. If this test is used in place of that currently in European standards (EN 1232-1997 and EN 12919-1999) or is used in any International Organization for Standardization standard, then a 25% pulsation criterion could be adopted. This work suggests that a 10% criterion as currently specified in the European standards for testing may be overly restrictive and not able to be met by many pumps on the market. Further work is recommended to determine which criterion would be applicable to this test if it is to be retained in its current form.
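
    The sigmoid fitting step described in these two records can be sketched as follows (Python; the efficiency data and starting values are invented for illustration, and the logistic form is one plausible three-parameter choice, not necessarily the authors' exact parameterization):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(d, d50, slope, top):
            """Three-parameter sigmoid: penetration vs aerodynamic diameter d (um)."""
            return top / (1.0 + np.exp(slope * (d - d50)))

        # Illustrative penetration data (diameter in um, fraction penetrating);
        # not the study's measurements.
        d = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
        eff = np.array([0.97, 0.92, 0.80, 0.60, 0.38, 0.22, 0.11, 0.05, 0.02])

        popt, _ = curve_fit(sigmoid, d, eff, p0=[4.0, 1.0, 1.0])
        d50, slope, top = popt
        residual = sigmoid(d, *popt) - eff   # cf. the bias maps in the paper

    Comparing fitted curves for a pulsating pump against the reference sampler, point by point across diameters, is what the bias maps summarize.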

  1. Minimal-assumption inference from population-genomic data

    NASA Astrophysics Data System (ADS)

    Weissman, Daniel; Hallatschek, Oskar

    Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.

  2. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    PubMed

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
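
    The arithmetic behind these power comparisons can be sketched with the usual normal approximation for the log hazard ratio (Python; the event count and the assumption of equal, multiplicative drug effects are illustrative, not taken from the trial protocol):

        import numpy as np
        from scipy.stats import norm

        def logrank_power(events, hr, alpha=0.05):
            """Approximate two-sided log-rank power with 1:1 allocation,
            using Var(log HR estimate) ~ 4 / events (Schoenfeld)."""
            z_a = norm.ppf(1 - alpha / 2)
            return norm.cdf(abs(np.log(hr)) * np.sqrt(events / 4.0) - z_a)

        # Combination trial: HR 0.70 for combination vs standard.
        print(logrank_power(events=400, hr=0.70))           # ~0.95
        # Factorial margin: if the two drugs contribute equally and
        # multiplicatively, each margin sees HR ~ sqrt(0.70) ~ 0.84.
        print(logrank_power(events=400, hr=np.sqrt(0.70)))  # ~0.43

    At the same number of events, each factorial margin is badly underpowered, which is the intuition behind the roughly eight-fold sample size inflation reported above.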

  3. Effect of immunomagnetic bead size on recovery of foodborne pathogenic bacteria

    USDA-ARS?s Scientific Manuscript database

    Long culture enrichment is currently a speed-limiting step in both traditional and rapid detection techniques for foodborne pathogens. Immunomagnetic separation (IMS) as a culture-free enrichment sample preparation technique has gained increasing popularity in the development of rapid detection met...

  4. Brain Stimulation in Alzheimer's Disease.

    PubMed

    Chang, Chun-Hung; Lane, Hsien-Yuan; Lin, Chieh-Hsin

    2018-01-01

    Brain stimulation techniques can modulate cognitive functions in many neuropsychiatric diseases. Pilot studies have shown promising effects of brain stimulation on Alzheimer's disease (AD). Brain stimulation can be categorized into non-invasive brain stimulation (NIBS) and invasive brain stimulation (IBS). IBS includes deep brain stimulation (DBS) and invasive vagus nerve stimulation (VNS), whereas NIBS includes transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), electroconvulsive treatment (ECT), magnetic seizure therapy (MST), cranial electrostimulation (CES), and non-invasive VNS. We reviewed the cutting-edge research on these brain stimulation techniques and discussed their therapeutic effects on AD. Both IBS and NIBS may have potential to be developed as novel treatments for AD; however, mixed findings may result from different study designs, patient selection, populations, or sample sizes. Therefore, the efficacy of NIBS and IBS in AD remains uncertain, and needs to be further investigated. Moreover, more standardized study designs with larger sample sizes and longitudinal follow-up are warranted for establishing a structural guide for future studies and clinical application.

  5. Physical properties of the WAIS Divide ice core

    USGS Publications Warehouse

    Fitzpatrick, Joan J.; Voigt, Donald E.; Fegyveresi, John M.; Stevens, Nathan T.; Spencer, Matthew K.; Cole-Dai, Jihong; Alley, Richard B.; Jardine, Gabriella E.; Cravens, Eric; Wilen, Lawrence A.; Fudge, T. J.; McConnell, Joseph R.

    2014-01-01

    The WAIS (West Antarctic Ice Sheet) Divide deep ice core was recently completed to a total depth of 3405 m, ending ∼50 m above the bed. Investigation of the visual stratigraphy and grain characteristics indicates that the ice column at the drilling location is undisturbed by any large-scale overturning or discontinuity. The climate record developed from this core is therefore likely to be continuous and robust. Measured grain-growth rates, recrystallization characteristics, and grain-size response at climate transitions fit within current understanding. Significant impurity control on grain size is indicated from correlation analysis between impurity loading and grain size. Bubble-number densities and bubble sizes and shapes are presented through the full extent of the bubbly ice. Where bubble elongation is observed, the direction of elongation is preferentially parallel to the trace of the basal (0001) plane. Preferred crystallographic orientation of grains is present in the shallowest samples measured, and increases with depth, progressing to a vertical-girdle pattern that tightens to a vertical single-maximum fabric. This single-maximum fabric switches into multiple maxima as the grain size increases rapidly in the deepest, warmest ice. A strong dependence of the fabric on the impurity-mediated grain size is apparent in the deepest samples.

  6. Marine sources of ice nucleating particles: results from phytoplankton cultures and samples collected at sea

    NASA Astrophysics Data System (ADS)

    Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.

    2016-12-01

    The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. Results will be presented and atmospheric implications discussed.

  7. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
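
    The 50% bound is easy to reproduce for normal outcomes. The sketch below (Python) computes conditional power under the current trend from the interim test statistic and the information fraction; it is a textbook B-value calculation, not code from the papers cited:

        from scipy.stats import norm

        def conditional_power(z_interim, t, alpha=0.025):
            """Conditional power under the current trend for a one-sided
            level-alpha test of a normal mean; t = n1/n is the interim
            information fraction."""
            z_a = norm.ppf(1 - alpha)
            b = z_interim * t ** 0.5        # B-value at the interim
            drift = z_interim / t ** 0.5    # estimated drift at full information
            return 1 - norm.cdf((z_a - b - drift * (1 - t)) / (1 - t) ** 0.5)

        # E.g. z = 1.0 at half the planned information:
        cp = conditional_power(1.0, 0.5)    # ~0.22, below the 50% bound

    Results like the one in the example fall in exactly the region below 50% where the criterion discussed above determines whether a sample size raise still preserves the type I error rate.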

  8. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species and, if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available that allows this and the other tests of differentiation and rank studied in this paper to be analyzed rapidly. PMID:29780266
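    The significance-gated, Euclidean-summation scoring lends itself to a compact implementation. The sketch below is an illustrative approximation only: it uses Welch's t-test as the significance gateway and the quadratic mean of the two group standard deviations as the unpooled scale, and it does not reproduce the paper's exact t-distribution correction of effect sizes.

        # Illustrative scoring: per-variable effect sizes with unpooled SDs,
        # zeroed when non-significant, combined by Euclidean summation.
        import numpy as np
        from scipy.stats import ttest_ind

        def differentiation_score(pop_a, pop_b, alpha=0.05):
            """pop_a, pop_b: arrays of shape (n_individuals, n_variables)."""
            scores = []
            for j in range(pop_a.shape[1]):
                a, b = pop_a[:, j], pop_b[:, j]
                _, p = ttest_ind(a, b, equal_var=False)  # Welch's test
                if p >= alpha:
                    scores.append(0.0)  # non-significant -> scored as zero
                    continue
                sd = np.sqrt((a.std(ddof=1)**2 + b.std(ddof=1)**2) / 2)
                scores.append(abs(a.mean() - b.mean()) / sd)
            return float(np.sqrt(np.sum(np.square(scores))))

    The rank decision then compares an allopatric pair's score against the score of a related sympatric pair computed the same way.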

  9. Single-image diffusion coefficient measurements of proteins in free solution.

    PubMed

    Zareh, Shannon Kian; DeSantis, Michael C; Kessler, Jonathan M; Li, Je-Luen; Wang, Y M

    2012-04-04

    Diffusion coefficient measurements are important for many biological and material investigations, such as studies of particle dynamics and kinetics, and size determinations. Among current measurement methods, single particle tracking (SPT) offers the unique ability to simultaneously obtain location and diffusion information about a molecule while using only femtomoles of sample. However, the temporal resolution of SPT is limited to seconds for single-color-labeled samples. By directly imaging three-dimensional diffusing fluorescent proteins and studying the widths of their intensity profiles, we were able to determine the proteins' diffusion coefficients using single protein images of submillisecond exposure times. This simple method improves the temporal resolution of diffusion coefficient measurements to submilliseconds, and can be readily applied to a range of particle sizes in SPT investigations and applications in which diffusion coefficient measurements are needed, such as reaction kinetics and particle size determinations. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
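    The link from a measured diffusion coefficient to a particle size is usually made through the Stokes-Einstein relation; the sketch below shows that standard conversion. It is a generic illustration, not the authors' analysis pipeline, and the default solvent values are assumptions for water at 25 °C.

        # Hydrodynamic radius from a diffusion coefficient via Stokes-Einstein:
        # D = kT / (6 * pi * eta * r), inverted for r.
        import math

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def hydrodynamic_radius(D, temperature_k=298.15, viscosity_pa_s=8.9e-4):
            """Radius in metres from D in m^2/s (defaults: water at 25 °C)."""
            return K_B * temperature_k / (6 * math.pi * viscosity_pa_s * D)

        # Example: D = 1.0e-10 m^2/s, typical for a small protein -> ~2.4 nm
        print(hydrodynamic_radius(1.0e-10) * 1e9, "nm")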

  10. The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release

    NASA Astrophysics Data System (ADS)

    Carter, John; OMEGA and CRISM Teams

    2016-10-01

    Aqueous minerals have been identified from orbit at a number of localities, and their analysis has helped refine the water story of early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks such as a limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and the lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address such limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data is provided, including deposit sizes, morphology and detailed composition when available. Sampling biases are also addressed. The catalog will be openly distributed in GIS-ready format and will be participatory: for example, it will be possible for researchers to submit requests for specific mapping of regions of interest, or to add or refine mineral detections. An initial release is scheduled for Fall 2016 and will feature a two orders of magnitude increase in sample size compared to previous studies.

  11. Pediatric anthropometrics are inconsistent with current guidelines for assessing rider fit on all-terrain vehicles.

    PubMed

    Bernard, Andrew C; Mullineaux, David R; Auxier, James T; Forman, Jennifer L; Shapiro, Robert; Pienkowski, David

    2010-07-01

    This study sought to establish objective anthropometric measures of fit or misfit for young riders on adult- and youth-sized all-terrain vehicles and to use these metrics to test the unproved historical reasoning that age alone is a sufficient measure of rider-ATV fit. Male children (6-11 years, n=8; and 12-15 years, n=11) were selected by convenience sampling. Rider-ATV fit was quantified by five measures adapted from published recommendations: (1) standing-seat clearance, (2) hand size, (3) foot vs. foot-brake position, (4) elbow angle, and (5) handlebar-to-knee distance. Youths aged 12-15 years fit the adult-sized ATV better than the ATV Safety Institute recommended age-appropriate youth model (63% of subjects fit all 5 measures on the adult-sized ATV vs. 20% on the youth-sized ATV). Youths aged 6-11 years fit poorly on ATVs of both sizes (0% fit all 5 parameters on the adult-sized ATV vs. 12% on the youth-sized ATV). The ATV Safety Institute recommends rider-ATV fit according to age and engine displacement, but no objective data linking age or anthropometrics with ATV engine or frame size have previously been published. Age alone is a poor predictor of rider-ATV fit; the five metrics used offer an improvement compared to current recommendations. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Pituitary gland volumes in bipolar disorder.

    PubMed

    Clark, Ian A; Mackay, Clare E; Goodwin, Guy M

    2014-12-01

    Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV) and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³, or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: 0.23; CI: -0.14 to 0.59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes even greater than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
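    A minimal sketch of the screening-search idea for the simpler two-group case (the article treats linear contrasts in a 2 × 2 design): enumerate allocations, keep those whose Welch-Satterthwaite power meets the target, and take the cheapest. Function names and the search bound are illustrative; power uses the noncentral t with Satterthwaite degrees of freedom and the upper rejection region, which dominates for a positive effect.

        # Least-cost (n1, n2) meeting a power target for the Welch test.
        import numpy as np
        from scipy.stats import t as t_dist, nct

        def welch_power(n1, n2, delta, sd1, sd2, alpha=0.05):
            se2 = sd1**2 / n1 + sd2**2 / n2
            df = se2**2 / ((sd1**2 / n1)**2 / (n1 - 1) +
                           (sd2**2 / n2)**2 / (n2 - 1))
            ncp = delta / np.sqrt(se2)
            return nct.sf(t_dist.ppf(1 - alpha / 2, df), df, ncp)

        def min_cost_allocation(delta, sd1, sd2, c1, c2, power=0.80, n_max=200):
            best = None
            for n1 in range(2, n_max):
                for n2 in range(2, n_max):  # smallest adequate n2 for this n1
                    if welch_power(n1, n2, delta, sd1, sd2) >= power:
                        cost = c1 * n1 + c2 * n2
                        if best is None or cost < best[0]:
                            best = (cost, n1, n2)
                        break
            return best  # (total cost, n1, n2)

        # Example: delta=1, sd1=1, sd2=2, unit costs 1 and 4
        # -> roughly equal group sizes (~40 each) for this configuration.
        print(min_cost_allocation(1.0, 1.0, 2.0, 1.0, 4.0))

    For comparison, the heuristic described in the abstract, n1/n2 = (sd1/sd2) divided by the square root of c1/c2, also gives a ratio of one for these inputs; the screening search shows directly whether that allocation remains cheapest once the Satterthwaite degrees of freedom are accounted for.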

  14. Icing Characteristics of Low Altitude, Supercooled Layer Clouds. Revision

    DTIC Science & Technology

    1980-05-01

    Only fragments of this report survive in the extracted record: table-of-contents entries on droplet size distributions, icing rate meters, and the accuracy and sources of error in the measurements from the period 1944-1950; a passage asking whether currently available LWC meters and icing rate detectors will give reliable results when flown on helicopters; and remnants of a table of 1944-1947 measurement campaigns (instruments, sites, and sample counts) that cannot be reconstructed from the extracted text.

  15. Improving tritium exposure reconstructions using accelerator mass spectrometry

    PubMed Central

    Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.

    2010-01-01

    Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274

  16. Correlates of self worth and body size dissatisfaction among obese Latino youth

    PubMed Central

    Mirza, Nazrat M; Mackey, Eleanor Race; Armstrong, Bridget; Jaramillo, Ana; Palmer, Matilde M

    2011-01-01

    The current study examined self-worth and body size dissatisfaction, and their association with maternal acculturation among obese Latino youth enrolled in a community-based obesity intervention program. Upon entry to the program, a sample of 113 participants reported global self-worth comparable to general population norms, but lower athletic competence and perception of physical appearance. Interestingly, body size dissatisfaction was more prevalent among younger respondents. Youth body size dissatisfaction was associated with less acculturated mothers and higher maternal dissatisfaction with their child's body size. By contrast, although global self-worth was significantly related to body dissatisfaction, it was not influenced by mothers’ acculturation or dissatisfaction with their own or their child’s body size. Obesity intervention programs targeted to Latino youth need to address self-worth concerns among the youth as well as addressing maternal dissatisfaction with their children’s body size. PMID:21354881

  17. A randomized trial testing the efficacy of modifications to the nutrition facts table on comprehension and use of nutrition information by adolescents and young adults in Canada.

    PubMed

    Hobin, E; Sacco, J; Vanderlee, L; White, C M; Zuo, F; Sheeshka, J; McVey, G; Fodor O'Brien, M; Hammond, D

    2015-12-01

    Given the proposed changes to nutrition labelling in Canada and the dearth of research examining comprehension and use of nutrition facts tables (NFts) by adolescents and young adults, our objective was to experimentally test the efficacy of modifications to NFts on young Canadians' ability to interpret, compare and mathematically manipulate nutrition information in NFts on prepackaged food. An online survey was conducted among 2010 Canadians aged 16 to 24 years drawn from a consumer sample. Participants were randomized to view two NFts according to one of six experimental conditions, using a between-groups 2 × 3 factorial design: serving size (current NFt vs. standardized serving-sizes across similar products) × percent daily value (% DV) (current NFt vs. "low/med/high" descriptors vs. colour coding). The survey included seven performance tasks requiring participants to interpret, compare and mathematically manipulate nutrition information on NFts. Separate modified Poisson regression models were conducted for each of the three outcomes. The ability to compare two similar products was significantly enhanced in NFt conditions that included standardized serving-sizes (p ≤ .001 for all). Adding descriptors or colour coding of % DV next to calories and nutrients on NFts significantly improved participants' ability to correctly interpret % DV information (p ≤ .001 for all). Providing both standardized serving-sizes and descriptors of % DV had a modest effect on participants' ability to mathematically manipulate nutrition information to calculate the nutrient content of multiple servings of a product (relative ratio = 1.19; 95% confidence limit: 1.04-1.37). Standardizing serving-sizes and adding interpretive % DV information on NFts improved young Canadians' comprehension and use of nutrition information. Some caution should be exercised in generalizing these findings to all Canadian youth due to the sampling issues associated with the study population. Further research is needed to replicate this study in a more heterogeneous sample in Canada and across a range of food products and categories.

  18. Testing of SIR (a transformable robotic submarine) in Lake Tahoe for future deployment at West Antarctic Ice Sheet grounding lines of Siple Coast

    NASA Astrophysics Data System (ADS)

    Powell, R. D.; Scherer, R. P.; Griffiths, I.; Taylor, L.; Winans, J.; Mankoff, K. D.

    2011-12-01

    A remotely operated vehicle (ROV) has been custom-designed and built by DOER Marine to meet scientific requirements for exploring subglacial water cavities. This sub-ice rover (SIR) will explore and quantitatively document the grounding zone areas of the Ross Ice Shelf cavity using a 3km-long umbilical tether, deployed through an 800m-long ice borehole in a torpedo configuration, which is also its default mode if an operational failure occurs. Once in the ocean cavity it transforms via a diamond-shaped geometry into a rectangular form when all of its instruments come alive in its flight mode. Instrumentation includes 4 cameras (one forward-looking HD), a vertical scanning sonar (long-range imaging for spatial orientation and navigation), Doppler current meter (determine water current velocities), multi-beam sonar (image and swath map bottom topography), sub-bottom profiler (profile sub-sea-floor sediment for geological history), CTD (determine salinity, temperature and depth), DO meter (determine dissolved oxygen content in water), transmissometer (determine suspended particulate concentrations in water), laser particle-size analyzer (determine sizes of particles in water), triple laser-beams (determine size and volume of objects), thermistor probe (measure in situ temperatures of ice and sediment), shear vane probe (determine in situ strength of sediment), manipulator arm (deploy instrumentation packages, collect samples), shallow ice corer (collect ice samples and glacial debris), water sampler (determine sea water/freshwater composition, calibrate real-time sensors, sample microbes), shallow sediment corer (sample sea floor, in-ice and subglacial sediment for stratigraphy, facies, particle size, composition, structure, fabric, microbes). A sophisticated array of data handling, storage and display will allow real-time observations and environmental assessments to be made. This robotic submarine and other instruments will be tested in Lake Tahoe in September 2011, and results will be presented on its trials and on geological and biological findings down to the deepest depths of the lake. Other instruments include a 5m-long percussion corer for sampling deeper sediments, an ice-tethered profiler with CTD and ADCP, and an in situ oceanographic mooring designed to fit down a narrow (30cm-diameter) ice borehole that includes interchangeable packages of ADCPs, CTDs, transmissometers, laser particle-size analyzer, DO meter, automated multi-port water sampler, water column nutrient analyzer, sediment porewater chemistry analyzer, down-looking color camera, and altimeter.

  19. Laser extensometer

    NASA Technical Reports Server (NTRS)

    Stocker, P. J.; Marcus, H. L. (Inventor)

    1977-01-01

    A drift compensated and intensity averaged extensometer for measuring the diameter or other properties of a substantially cylindrical sample based upon the shadow of the sample is described. A beam of laser light is shaped to provide a beam with a uniform intensity along an axis normal to the sample. After passing the sample, the portion of the beam not striking said sample is divided by a beam splitter into a reference signal and a measurement signal. Both of these beams are then chopped by a light chopper to fall upon two photodiode detectors. The resulting ac currents are rectified and then divided into one another, with the final output being proportional to the size of the sample shadow.

  20. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  1. A thermal desorption mass spectrometer for freshly nucleated secondary aerosol particles

    NASA Astrophysics Data System (ADS)

    Held, A.; Gonser, S. G.

    2012-04-01

    Secondary aerosol formation in the atmosphere is observed in a large variety of locations worldwide, introducing new particles to the atmosphere which can grow to sizes relevant for health and climate effects of aerosols. The chemical reactions leading to atmospheric secondary aerosol formation are not yet fully understood. At the same time, analyzing the chemical composition of freshly nucleated particles is still a challenging task. We are currently finishing the development of a field-portable aerosol mass spectrometer for nucleation particles with diameters smaller than 30 nm. This instrument consists of a custom-built aerosol sizing and collection unit coupled to a time-of-flight mass spectrometer (TOF-MS). The aerosol sizing and collection unit is composed of three major parts: (1) a unipolar corona aerosol charger, (2) a radial differential mobility analyzer (rDMA) for aerosol size separation, and (3) an electrostatic precipitator for aerosol collection. After collection, the aerosol sample is thermally desorbed, and the resulting gas sample is transferred to the TOF-MS for chemical analysis. The unipolar charger is based on corona discharge from carbon fibres (e.g. Han et al., 2008). This design allows efficient charging at voltages below 2 kV, thus eliminating the potential for ozone production, which would interfere with the collected aerosol. With the current configuration the extrinsic charging efficiency for 20 nm particles is 32%. The compact radial DMA, similar to the design of Zhang et al. (1995), is optimized for a diameter range from 1 nm to 100 nm. Preliminary tests show that monodisperse aerosol samples (geometric standard deviation of 1.09) at 10 nm, 20 nm, and 30 nm can easily be separated from the ambient polydisperse aerosol population. Finally, the size-segregated aerosol sample is collected on a high-voltage biased metal filament. The collected sample is protected from contamination using a He sheath counterflow. Resistive heating of the filament allows temperature-controlled desorption of compounds of different volatility. We will present preliminary characterization experiments of the aerosol sizing and collection unit coupled to the mass spectrometer. Funding by the German Research Foundation (DFG) under grant DFG HE5214/3-1 is gratefully acknowledged. Han, B., Kim, H.J., Kim, Y.J., and Sioutas, C. (2008) Unipolar charging of ultrafine particles using carbon fiber ionizers. Aerosol Sci. Technol., 42, 793-800. Zhang, S.-H., Akutsu, Y., Russell, L.M., Flagan, R.C., and Seinfeld, J.H. (1995) Radial Differential Mobility Analyzer. Aerosol Sci. Technol., 23, 357-372.
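    A DMA classifies particles by electrical mobility rather than by diameter directly; the sketch below evaluates the standard mobility expression with the Cunningham slip correction to show how strongly mobility varies across the instrument's 1-100 nm range. The constants are assumed values for air near standard conditions, and the slip-correction coefficients are one common parameterization.

        # Electrical mobility of a singly charged particle with the Cunningham
        # slip correction -- the quantity a DMA actually classifies on.
        import math

        E_CHARGE = 1.602176634e-19   # elementary charge, C
        MU_AIR = 1.81e-5             # dynamic viscosity of air, Pa.s (~20 °C, assumed)
        MFP_AIR = 68e-9              # mean free path of air, m (~1 atm, assumed)

        def slip_correction(d_p):
            kn = 2 * MFP_AIR / d_p                       # Knudsen number
            return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

        def electrical_mobility(d_p, n_charges=1):
            """Z_p in m^2 V^-1 s^-1 for particle diameter d_p in metres."""
            return (n_charges * E_CHARGE * slip_correction(d_p)
                    / (3 * math.pi * MU_AIR * d_p))

        # Mobility drops steeply with size, which is what makes 10, 20 and
        # 30 nm particles separable at distinct classifier voltages:
        for d in (10e-9, 20e-9, 30e-9):
            print(f"{d*1e9:.0f} nm -> {electrical_mobility(d):.2e} m^2/(V s)")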

  2. Field application of a multi-frequency acoustic instrument to monitor sediment for silt erosion study in Pelton turbine in Himalayan region, India

    NASA Astrophysics Data System (ADS)

    Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.

    2016-11-01

    High sediment load passing through hydropower components erodes the hydraulic components, resulting in loss of efficiency, interruptions in power production and downtime for repair/maintenance, especially in Himalayan regions. The size and concentration of sediment play a major role in silt erosion. The traditional process of collecting samples manually for analysis in the laboratory cannot meet the need to monitor temporal variations in sediment properties. In this study, a multi-frequency acoustic instrument was applied at a desilting chamber to monitor sediment size and concentration entering the turbine. The sediment size and concentration entering the turbine were also measured with manual samples collected twice daily. The samples collected manually were analysed in the laboratory with a laser diffraction instrument for size and concentration, apart from analysis by drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which was further used with results from the drying method to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to drying and filtering methods. However, in this first field application, no good match was found between the mean grain size from the acoustic method, at its current stage of development, and that from the laser diffraction method. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the obtained results. As the instrument is able to capture the concentration, and in the future most likely a more accurate mean grain size of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.

  3. A Kepler Mission, A Search for Habitable Planets: Concept, Capabilities and Strengths

    NASA Technical Reports Server (NTRS)

    Koch, David; Borucki, William; Lissauer, Jack; Dunham, Edward; Jenkins, Jon; DeVincenzi, D. (Technical Monitor)

    1998-01-01

    The detection of extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The technological challenges of direct imaging of Earth-size planets from space are expected to be resolved over the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30-600 times less massive than Jupiter). The method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. The concept, its capabilities and strengths are presented.

  4. Accurate potential drop sheet resistance measurements of laser-doped areas in semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinrich, Martin, E-mail: mh.seris@gmail.com; NUS Graduate School for Integrative Science and Engineering, National University of Singapore, Singapore 117456; Kluska, Sven

    2014-10-07

    It is investigated how potential drop sheet resistance measurements of areas formed by laser-assisted doping in crystalline Si wafers are affected by typically occurring experimental factors like sample size, inhomogeneities, surface roughness, or coatings. Measurements are obtained with a collinear four point probe setup and a modified transfer length measurement setup to measure sheet resistances of laser-doped lines. Inhomogeneities in doping depth are observed from scanning electron microscope images and electron beam induced current measurements. It is observed that influences from sample size, inhomogeneities, surface roughness, and coatings can be neglected if certain preconditions are met. Guidelines are given on how to obtain accurate potential drop sheet resistance measurements on laser-doped regions.
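    For reference, the ideal collinear four-point-probe relation for a thin sheet that is laterally large compared with the probe spacing is R_s = (π/ln 2)·V/I; deviations from this ideal (finite sample size, inhomogeneity, rough or coated surfaces) are exactly what the study quantifies. The sketch below evaluates only the ideal relation.

        # Ideal infinite-sheet relation for a collinear four-point probe:
        # R_s = (pi / ln 2) * V / I  (ohms per square).
        import math

        def sheet_resistance_4pp(voltage_v, current_a):
            """Sheet resistance assuming a thin, laterally infinite sample."""
            return (math.pi / math.log(2)) * voltage_v / current_a  # ~4.532 V/I

        # Example: 4.8 mV measured at 1 mA -> ~21.8 ohm/sq before any
        # finite-size or inhomogeneity correction factors are applied.
        print(sheet_resistance_4pp(4.8e-3, 1.0e-3))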

  5. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
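    Whether the sampling units are pixels or polygons, the accuracy-analysis stage ultimately reduces to a confusion matrix. A minimal sketch of the standard summary statistics (overall, user's and producer's accuracy) follows; the matrix values are invented for illustration.

        # Standard accuracy-analysis summaries from a confusion matrix
        # (rows = mapped class, columns = reference class; values invented).
        import numpy as np

        def accuracy_summary(cm):
            overall = np.trace(cm) / cm.sum()
            users = np.diag(cm) / cm.sum(axis=1)      # 1 - commission error
            producers = np.diag(cm) / cm.sum(axis=0)  # 1 - omission error
            return overall, users, producers

        cm = np.array([[48,  2,  0],
                       [ 5, 40,  5],
                       [ 1,  4, 45]])
        overall, users, producers = accuracy_summary(cm)
        print(f"overall = {overall:.1%}")   # 88.7%
        print("user's    =", np.round(users, 3))
        print("producer's =", np.round(producers, 3))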

  6. Sediment concentrations, flow conditions, and downstream evolution of two turbidity currents, Monterey Canyon, USA

    USGS Publications Warehouse

    Xu, Jingping; Octavio E. Sequeiros,; Noble, Marlene A.

    2014-01-01

    The capacity of turbidity currents to carry sand and coarser sediment from shallow to deep regions in the submarine environment has attracted the attention of researchers from different disciplines. Yet not only are field measurements of oceanic turbidity currents a rare achievement, but the data that have been collected also consist mostly of velocity records with very limited or no suspended sediment concentration or grain size distribution data. This work focuses on two turbidity currents measured in Monterey Canyon in 2002, with emphasis on suspended sediment from unique samples collected within the body of these currents. It is shown that the concentration and grain size of the suspended material, primarily controlled by the source of the gravity flows and their interaction with bed material, play a significant role in shaping the characteristics of the turbidity currents as they travel down the canyon. Before the flows reach their normal or quasi-steady state, which is defined by bed slope, bed roughness, and suspended grain size, they might pass through a preliminary adjustment stage where they are subject to capacity-driven deposition, releasing excess heavy material. Flows composed of fine (silt/clay) sediments tend to be thicker than those with sands. The measured velocity and concentration data confirm that flow patterns differ between the front and body of turbidity currents and that, even after reaching normal state, the flow regime can be radically disrupted by abrupt changes in canyon morphology.

  7. Compendium of Operations Research and Economic Analysis Studies

    DTIC Science & Technology

    1992-10-01

    were to: (1) review and document current policies and procedures, (2) identify relevant economic and non-economic decision variables, (3) design a ... minimize the total sample size while ensuring that the proportion of samples closely resembled the actual population proportions. Both linear and non ... would cost about $290.00. DLA-92-P1010, Index No. 92-26. Impact of Increasing the Non-Competitive Threshold from $2,500 to $5,000 (October 1991) In

  8. Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho

    USGS Publications Warehouse

    Watkins, Carson J.; Quist, Michael C.; Shepard, Bradley B.; Ireland, Susan C.

    2016-01-01

    This study was conducted on the Kootenai River, Idaho to provide insight on sampling requirements to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Sidechannel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring effort in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.

  9. Heat and Bleach: A Cost-Efficient Method for Extracting Microplastics from Return Activated Sludge.

    PubMed

    Sujathan, Surya; Kniggendorf, Ann-Kathrin; Kumar, Arun; Roth, Bernhard; Rosenwinkel, Karl-Heinz; Nogueira, Regina

    2017-11-01

    The extraction of plastic microparticles, so-called microplastics, from sludge is a challenging task due to the complex, highly organic material often interspersed with other benign microparticles. The current procedures for microplastic extraction from sludge are time consuming and require expensive reagents for density separation as well as large volumes of oxidizing agents for organic removal, often resulting in tiny sample sizes and thus a disproportional risk of sample bias. In this work, we present an improved extraction method tested on return activated sludge (RAS). The treatment of 100 ml of RAS requires only 6% hydrogen peroxide (H2O2) for bleaching at 70 °C, followed by density separation with sodium nitrate/sodium thiosulfate (SNT) solution, and is completed within 24 h. Extracted particles of all sizes were chemically analyzed with confocal Raman microscopy. An extraction efficiency of 78 ± 8% for plastic particle sizes 20 µm and up was confirmed in a recovery experiment. However, glass shards with a diameter of less than 20 µm remained in the sample despite the density of glass exceeding the density of the separating SNT solution by 1.1 g/cm³. This indicates that density separation may be unreliable for particle sizes in the lower micrometer range.

  10. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  11. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  12. Numerical study of the process parameters in spark plasma sintering (sps)

    NASA Astrophysics Data System (ADS)

    Chowdhury, Redwan Jahid

    Spark plasma sintering (SPS) is one of the most widely used sintering techniques; it utilizes pulsed direct current together with uniaxial pressure to consolidate a wide variety of materials. The unique mechanisms of SPS enable it to sinter powder compacts at a lower temperature and in a shorter time than conventional hot pressing, hot isostatic pressing and vacuum sintering processes. One of the limitations of SPS is the presence of temperature gradients inside the sample, which can result in non-uniform physical and microstructural properties. A detailed study of the temperature and current distributions inside the sintered sample is necessary to minimize the temperature gradients and achieve the desired properties. In the present study, a coupled thermal-electric model was developed using finite element codes in ABAQUS software to investigate the temperature and current distributions inside conductive and non-conductive samples. An integrated experimental-numerical methodology was implemented to determine the system contact resistances accurately. The developed sintering model was validated by a series of experiments, which showed good agreement with simulation results. The temperature distribution inside the sample depends on process parameters such as sample and tool geometry, punch and die position, applied current and thermal insulation around the die. The role of these parameters in the sample temperature distribution was systematically analyzed. The findings of this research could prove very useful for the reliable production of large sintered samples with controlled and tailored properties.

  13. Characterization of particulate emissions from Australian open-cut coal mines: Toward improved emission estimates.

    PubMed

    Richardson, Claire; Rutherford, Shannon; Agranovski, Igor

    2018-06-01

    Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.

  14. [Distributions of the numbers of monitoring stations in the surveillance of infectious diseases in Japan].

    PubMed

    Murakami, Y; Hashimoto, S; Taniguchi, K; Nagai, M

    1999-12-01

    To describe the characteristics of monitoring stations for the infectious disease surveillance system in Japan, we compared the distributions of the number of monitoring stations in terms of population, region, size of medical institution, and medical specialty. The distributions of the annual number of reported cases in terms of the type of disease, the size of medical institution, and medical specialty were also compared. We conducted a nationwide survey of the pediatrics stations (16 diseases), ophthalmology stations (3 diseases) and sexually transmitted disease (STD) stations (5 diseases) in Japan. In the survey, we collected the data of monitoring stations and the annual reported cases of diseases. We also collected data on the population served by the health center where the monitoring stations existed, from the census. First, we compared the difference between the present number of monitoring stations and the current standard established by the Ministry of Health and Welfare (MHW). Second, we compared the distribution of all medical institutions in Japan and the monitoring stations in terms of the size of the medical institution. Third, we compared the average number of annual reported cases of diseases in terms of the size of medical institution and the medical specialty. In most health centers, the number of monitoring stations achieved the current standard of the MHW, while a few health centers had no monitoring station, although they served a large population. Most prefectures also achieved the current standard of the MHW, but some prefectures were well below the standard. Among pediatric stations, the sampling proportion of large hospitals was higher than that of other categories. Among the ophthalmology stations, the sampling proportion of hospitals was higher than that of other categories. Among the STD stations, the sampling proportion of clinics of obstetrics and gynecology was lower than that of other categories. Except for some diseases, the type of medical institution made little difference in the average number of annually reported cases. Among STD, there was a great difference in the average number of annually reported cases in terms of medical specialty.

  15. Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.

    PubMed

    Anthony, T Renée; Sleeth, Darrah; Volckens, John

    2016-01-01

    In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulent model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min⁻¹ in slow moving air (0.2 m s⁻¹) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.
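    The target curve that candidate inlets are judged against is the ACGIH/CEN/ISO inhalable convention; a minimal sketch of that reference curve follows (the functional form is the standard published convention, evaluated for illustration).

        # Inhalable particulate mass (IPM) sampling convention:
        # IPM(d) = 0.5 * (1 + exp(-0.06 d)) for aerodynamic diameter d <= 100 um.
        import math

        def ipm_criterion(d_um):
            """Target sampling efficiency (fraction) at aerodynamic diameter d_um."""
            if not 0 < d_um <= 100:
                raise ValueError("convention defined for 0 < d <= 100 um")
            return 0.5 * (1 + math.exp(-0.06 * d_um))

        for d in (1, 10, 50, 100):
            print(f"{d:>3} um -> {ipm_criterion(d):.3f}")  # 0.971, 0.774, 0.525, 0.501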

  16. Effect of charcoal doping on the superconducting properties of MgB2 bulk

    NASA Astrophysics Data System (ADS)

    Kim, N. K.; Tan, K. S.; Jun, B.-H.; Park, H. W.; Joo, J.; Kim, C.-J.

    2008-09-01

    The effect of charcoal doping on the superconducting properties of in situ processed MgB2 bulk samples was investigated. To understand the size effect of the dopant, the charcoal powder was attrition milled for 1 h, 3 h and 6 h using ZrO2 balls. The milled charcoal powders were mixed with magnesium and boron powders to a nominal composition of Mg(B0.975C0.025)2. The Mg(B0.975C0.025)2 compacts were heat-treated at 900 °C for 0.5 h in flowing Ar atmosphere. Magnetic susceptibility measurements showed that the superconducting transition temperature (Tc) decreased as the size of the charcoal powder decreased. The critical current density (Jc) of Mg(B0.975C0.025)2 prepared using large-size charcoal powder was lower than that of undoped MgB2. However, a crossover of the Jc value was observed at high magnetic fields of about 4 T in Mg(B0.975C0.025)2 prepared using small-size charcoal powder. Carbon diffusion into the boron site was easier and produced the Jc enhancement when the small-size charcoal was used as a dopant.

  17. Status report: Implementation of gas measurements at the MAMS 14C AMS facility in Mannheim, Germany

    NASA Astrophysics Data System (ADS)

    Hoffmann, Helene; Friedrich, Ronny; Kromer, Bernd; Fahrni, Simon

    2017-11-01

    By implementing a Gas Interface System (GIS), CO2 gas measurements for radiocarbon dating of small environmental samples (<100 μgC) have been established at the MICADAS (Mini Carbon Dating System) AMS instrument in Mannheim, Germany. The system performance has been optimized and tested with respect to stability and ion yield by repeated blank and standard measurements for sample sizes down to 3 μgC. The highest 12C⁻ low-energy (LE) ion currents, typically reaching 8-15 μA, were achieved for a mixing ratio of 4% CO2 in helium, resulting in relative counting errors of 1-2% for samples larger than 10 μgC and 3-7% for sample sizes below 10 μgC. The average count rate was ca. 500 counts per microgram C for OxII standard material. The blank is on the order of 35,000-40,000 radiocarbon years, which is comparable to similar systems. The complete setup thus enables reliable dating for most environmental samples (>3 μgC).
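    The quoted precisions are consistent with simple Poisson counting statistics, as the sketch below shows; the yield constant is the value quoted for OxII standards, and real uncertainties (blank subtraction, normalization) sit on top of this floor.

        # Relative counting error ~ 1 / sqrt(total counts).
        import math

        COUNTS_PER_UG_C = 500  # approximate yield quoted for OxII standards

        def rel_counting_error(mass_ug_c):
            return 1 / math.sqrt(COUNTS_PER_UG_C * mass_ug_c)

        for m in (3, 10, 100):
            print(f"{m:>3} ugC -> {rel_counting_error(m):.1%}")  # 2.6%, 1.4%, 0.4%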

  18. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance of only one drug even if both are equally effective, and of missing important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
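    As a back-of-envelope check on the 750-patient figure, Schoenfeld's approximation gives the number of deaths a two-arm log-rank test needs to detect a given hazard ratio; the assumed death probability in the example is illustrative, not taken from the trial protocol.

        # Schoenfeld's approximation: events = 4 * (z_{1-a/2} + z_{power})^2 / ln(HR)^2.
        import math
        from scipy.stats import norm

        def required_events(hr, power=0.80, alpha=0.05):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 4 * z**2 / math.log(hr)**2

        events = required_events(0.70)      # HR 0.7 = 30% hazard reduction
        print(math.ceil(events))            # ~247 deaths
        print(math.ceil(events / 0.33))     # ~750 patients if ~33% of patients die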

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jinlong, Lv, E-mail: ljlbuaa@126.com; State Key Lab of New Ceramic and Fine Processing, Tsinghua University, Beijing 100084; Tongxiang, Liang, E-mail: ljltsinghua@126.com

    Nanocrystalline pure nickel samples with different grain orientations were fabricated by a direct current electrodeposition process. The grain size decreased slightly with increasing electrodeposition solution temperature; grain orientation, however, was affected significantly. Compared with the samples obtained at 50 °C and 80 °C, the sample obtained at 20 °C had the strongest (111) orientation, which increased its electrochemical corrosion resistance, while the weakest (111) orientation deteriorated the electrochemical corrosion resistance of the sample obtained at 50 °C.

  20. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    PubMed

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best-practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
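    The effect of the sampling scheme can be illustrated with a toy model (not the authors' DEM): draw ten samples from a batch whose potency drifts linearly through the discharge, using uniform, sigmoidal (ends-weighted), and truncated location schemes, and compare the RSD each scheme reports. All numbers below are invented for illustration.

        # Toy model: potency drifts linearly through discharge; compare the
        # RSD reported by three sampling-location schemes.
        import numpy as np

        rng = np.random.default_rng(7)
        n_units = 10_000
        potency = np.linspace(97.0, 103.0, n_units)    # segregation drift, % of claim
        potency += rng.normal(0.0, 0.5, n_units)       # local noise

        def sample_rsd(locations):
            vals = potency[(locations * (n_units - 1)).astype(int)]
            return vals.std(ddof=1) / vals.mean() * 100

        uniform   = np.linspace(0.0, 1.0, 10)                  # spread evenly
        sigmoidal = (np.tanh(np.linspace(-2, 2, 10)) + 1) / 2  # ends-weighted
        truncated = np.linspace(0.1, 0.6, 10)                  # early batch only

        for name, locs in [("uniform", uniform), ("sigmoidal", sigmoidal),
                           ("truncated", truncated)]:
            print(f"{name:>9}: RSD = {sample_rsd(locs):.2f}%")

    In this toy drift the ends-weighted scheme reports the largest RSD and the truncated scheme the smallest, mirroring the overestimation and underestimation of segregation that the study reports for sigmoidal and truncated protocols.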

  1. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus for enabling accurate reacquisition of a precise position, such as a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in-situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 μm.

  2. Frequency Rates and Correlates of Contrapower Harassment in Higher Education

    ERIC Educational Resources Information Center

    DeSouza, Eros R.

    2011-01-01

    The current study investigated incivility, sexual harassment, and racial-ethnic harassment simultaneously when the targets were faculty members and the perpetrators were students (i.e., academic contrapower harassment; ACH). The sample comprised 257 faculty members (90% were White and 53% were women) from a medium-sized state university in the…

  3. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    EPA Science Inventory

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...

  4. Correlates of Sexual Abuse and Smoking among French Adults

    ERIC Educational Resources Information Center

    King, Gary; Guilbert, Philippe; Ward, D. Gant; Arwidson, Pierre; Noubary, Farzad

    2006-01-01

    Objective: The goal of this study was to examine the association between sexual abuse (SA) and initiation, cessation, and current cigarette smoking among a large representative adult population in France. Method: A random sample of 12,256 adults (18-75 years of age) was interviewed by telephone concerning demographic variables, health…

  5. Structural Features of Sibling Dyads and Attitudes toward Sibling Relationships in Young Adulthood

    ERIC Educational Resources Information Center

    Riggio, Heidi R.

    2006-01-01

    This study examined sibling-dyad structural variables (sex composition, age difference, current coresidence, position adjacency, family size, respondent and/or sibling ordinal position) and attitudes toward adult sibling relationships. A sample of 1,053 young adults (M age = 22.1 years) described one sibling using the Lifespan Sibling Relationship…

  6. Electronic Resource Expenditure and the Decline in Reference Transaction Statistics in Academic Libraries

    ERIC Educational Resources Information Center

    Dubnjakovic, Ana

    2012-01-01

    The current study investigates factors influencing the increase in reference transactions in a typical week in academic libraries across the United States of America. Employing multiple regression analysis and general linear modeling, variables of interest from the "Academic Library Survey (ALS) 2006" survey (sample size 3960 academic libraries) were…

  7. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection; these plans can significantly reduce the cost of testing and have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance, not included in the technical report, for selecting between the original and alternative sampling plans.
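
    The notion of "equivalent protection" between sampling plans can be made concrete with binomial operating-characteristic (OC) curves: two plans protect equally well when their acceptance probabilities are similar over the positive-rate region of interest. The plan parameters below are placeholders for illustration, not the values in ISO 11137 or the technical report.

```python
# Hedged sketch: comparing two acceptance-sampling plans via their binomial
# operating-characteristic curves (plan parameters are hypothetical).
import numpy as np
from scipy.stats import binom

def accept_prob(n, c, p):
    """P(accept) = P(at most c positives among n tests | true rate p)."""
    return binom.cdf(c, n, p)

p = np.linspace(0.0, 0.30, 7)
plan_a = accept_prob(100, 2, p)   # hypothetical original plan
plan_b = accept_prob(40, 0, p)    # hypothetical cheaper alternative
for pi, a, b in zip(p, plan_a, plan_b):
    print(f"p = {pi:.2f}   P_accept: A = {a:.3f}   B = {b:.3f}")
```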

  8. Mapping the Solar System with LSST

    NASA Astrophysics Data System (ADS)

    Ivezic, Z.; Juric, M.; Lupton, R.; Connolly, A.; Kubica, J.; Moore, A.; Harris, A.; Bowell, T.; Bernstein, G.; Stubbs, C.; LSST Collaboration

    2004-12-01

    The currently considered LSST cadence, based on two 10 sec exposures, may result in orbital parameters, light curves and accurate colors for over a million main-belt asteroids (MBA), and about 20,000 trans-Neptunian objects (TNO). Compared to the current state-of-the-art, this sample would represent a factor of 5 increase in the number of MBAs with known orbits, a factor of 20 increase in the number of MBAs with known orbits and accurate color measurements, and a factor of 100 increase in the number of MBAs with measured variability properties. The corresponding sample increase for TNOs is 10, 100, and 1000, respectively. The LSST MBA and TNO samples will enable detailed studies of the dynamical and chemical history of the solar system. For example, they will constrain the MBA size distribution for objects larger than 100 m, and TNO size distribution for objects larger than 100 km, their physical state through variability measurements (solid body vs. a rubble pile), as well as their surface chemistry through color measurements. A proposed deep TNO survey, based on 1 hour exposures, may result in a sample of about 100,000 TNOs, while spending only 10% of the LSST observing time. Such a deep TNO survey would be capable of discovering Sedna-like objects at distances beyond 150 AU, thereby increasing the observable Solar System volume by about a factor of 7. The increase in data volume associated with LSST asteroid science will present many computational challenges to how we might extract tracks and orbits of asteroids from the underlying clutter. Tree-based algorithms for multihypothesis testing of asteroid tracks can help solve these challenges by providing the necessary 1000-fold speed-ups over current approaches while recovering 95% of the underlying asteroid populations.

  9. Selected laboratory evaluations of the whole-water sample-splitting capabilities of a prototype fourteen-liter Teflon churn splitter

    USGS Publications Warehouse

    Horowitz, A.J.; Smith, J.J.; Elrick, K.A.

    2001-01-01

    A prototype 14-L Teflon® churn splitter was evaluated for whole-water sample-splitting capabilities over a range of sediment concentrations and grain sizes, as well as for potential chemical contamination from both organic and inorganic constituents. These evaluations represent a 'best-case' scenario because they were performed in the controlled environment of a laboratory, and used monomineralic silica sand slurries of known concentration made up in deionized water. Further, all splitting was performed by a single operator, and all the requisite concentration analyses were performed by a single laboratory. The prototype Teflon® churn splitter did not appear to supply significant concentrations of either organic or inorganic contaminants at current U.S. Geological Survey (USGS) National Water Quality Laboratory detection and reporting limits when test samples were prepared using current USGS protocols. As with the polyethylene equivalent of the prototype Teflon® churn, the maximum usable whole-water suspended sediment concentration for the prototype churn appears to lie between 1,000 and 10,000 milligrams per liter (mg/L). Further, the maximum grain-size limit appears to lie between 125 and 250 microns (μm). Tests to determine the efficacy of the valve baffle indicate that it must be retained to facilitate representative whole-water subsampling.

  10. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary objective was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates from the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R² = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variation of the calculated diesel particulate densities with particle diameter supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.
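
    The adjustment-factor step described above amounts to an ordinary least-squares fit of the cassette-based time-weighted averages against the concurrent Grimm readings. The sketch below uses synthetic numbers; the study's actual fit (R² = 0.86, 0.88) came from the mine measurements.

```python
# Hedged sketch of the adjustment-factor idea with synthetic data.
import numpy as np

rng = np.random.default_rng(1)
grimm = rng.uniform(50, 400, 30)                      # instrument readout, ug/m^3
cassette = 0.7 * grimm + 10 + rng.normal(0, 15, 30)   # reference method, ug/m^3

slope, intercept = np.polyfit(grimm, cassette, 1)
pred = slope * grimm + intercept
r2 = 1 - ((cassette - pred) ** 2).sum() / ((cassette - cassette.mean()) ** 2).sum()
print(f"adjusted DPM = {slope:.2f} * Grimm + {intercept:.1f}   (R^2 = {r2:.2f})")
```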

  11. Quenching of the Quantum Hall Effect in Graphene with Scrolled Edges

    NASA Astrophysics Data System (ADS)

    Cresti, Alessandro; Fogler, Michael M.; Guinea, Francisco; Castro Neto, A. H.; Roche, Stephan

    2012-04-01

    Edge nanoscrolls are shown to strongly influence transport properties of suspended graphene in the quantum Hall regime. The relatively long arclength of the scrolls in combination with their compact transverse size results in formation of many nonchiral transport channels in the scrolls. They short circuit the bulk current paths and inhibit the observation of the quantized two-terminal resistance. Unlike competing theoretical proposals, this mechanism of disrupting the Hall quantization in suspended graphene is not caused by ill-chosen placement of the contacts, singular elastic strains, or a small sample size.

  12. PM2.5 Monitors in New England | Air Quality Planning Unit ...

    EPA Pesticide Factsheets

    2017-04-10

    The New England states currently operate a network of 58 ambient PM2.5 air quality monitors that meet EPA's Federal Reference Method (FRM) for PM2.5, which is necessary for the resultant data to be used for attainment/non-attainment purposes. These monitors collect particles smaller than 2.5 microns in size from the ambient air on a filter, which is weighed before and after sampling to produce a 24-hour sample concentration.

  13. Dynamics of sediments along with their core properties in the Monastir-Bekalta coastline (Tunisia, Central Mediterranean)

    NASA Astrophysics Data System (ADS)

    Khiari, Nouha; Atoui, Abdelfattah; Khalil, Nadia; Charef, Abdelkrim; Aleya, Lotfi

    2017-10-01

    The authors report on two campaigns of high-resolution sampling along the shores of Monastir Bay in Tunisia: the first a study of sediment dynamics, grain size and mineral composition in surface sediment, and the second, eight months later, using four sediment cores to study grain-size distribution in bottom sediments. Particle size analysis of the superficial sediment shows that the sand at shallow depths is characterized by S-shaped curves, indicating a certain degree of agitation and possible transport by rip currents near the bottom, and by hyperbolic curves illustrating the heterogeneity of the sand stock. The sediments settle in a relatively calm environment. Along the bay shore (from 0 to 2 m depth), the bottom is covered by medium sand. Sediment transport is noted along the coast, from north to south and from south to north, caused by longshore drift and a rip current in the middle of the bay. These two currents are generated by wind and swell, especially by north to northeast waves, which transport the finest sediment. Particle size analysis of bottom sediment indicates a mean grain size ranging from coarse to very fine sands, while the vertical distribution of grain size tends to decrease from surface to depth. The increase in particle size in the sediment cores may be due to the coexistence of terrigenous inputs with the sedimentary transit parallel to the coast under the effect of longshore drift. Mineralogical analysis shows that Monastir's coastal sands and bottom sediment are composed of quartz, calcite, magnesium calcite, aragonite and hematite. The existence of a low-energy zone with potential to accumulate pollutants indicates that managerial action is necessary to help preserve Monastir Bay.

  14. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations

    PubMed Central

    Calfee, M. Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-01-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration. PMID:27362274

  16. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains of how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine how analytical methodology, including the aliquot selection/collection method and the particle size distribution (PSD), affects the relationship between these two measures of suspended solids concentration. The results showed that SSC best represented the known sample concentration and that its results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also particle size information for the solids in stormwater runoff.

  17. Optical absorption and photoluminescence studies of gold nanoparticles deposited on porous silicon

    PubMed Central

    2013-01-01

    We present an investigation of a coupled system consisting of gold nanoparticles and silicon nanocrystals. Gold nanoparticles (AuNPs) embedded into porous silicon (PSi) were prepared using the electrochemical deposition method. Scanning electron microscope images and energy-dispersive X-ray results indicated that the growth of AuNPs on PSi varies with current density. X-ray diffraction analysis showed the presence of cubic gold phases with crystallite sizes of around 40 to 58 nm. The size dependence of the plasmon absorption was studied for nanoparticles of various sizes. Comparison with the reference sample, PSi without AuNP deposition, showed a significant blueshift with decreasing AuNP size, which was explained in terms of optical coupling between PSi and AuNPs within the pores featuring localized plasmon resonances. PMID:23331761

  18. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy for handling the sample size problem is to adjust the sample size in the analysis of fit; an alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
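
    The two strategies can be compared directly in code. The common form of the adjustment rescales an observed chi-square by (n_target - 1)/(n - 1); whether the study used exactly this function is an assumption on our part, and the data below are simulated rather than the study's.

```python
# Hedged sketch: chi-square sample-size adjustment versus an actual random
# subsample, on simulated two-by-two data.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 21_000
x = rng.integers(0, 2, n)
y = (rng.random(n) < 0.45 + 0.10 * x).astype(int)   # weak dependence on x

def chisq(a, b):
    table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)]
                      for i in (0, 1)])
    return chi2_contingency(table, correction=False)[0]

full = chisq(x, y)
for n_t in (5_000, 1_000):
    adjusted = full * (n_t - 1) / (n - 1)            # assumed adjustment form
    idx = rng.choice(n, n_t, replace=False)
    print(f"n = {n_t}: adjusted = {adjusted:.1f}, "
          f"random subsample = {chisq(x[idx], y[idx]):.1f}")
```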

  19. Fabrication of low thermal expansion SiC/ZrW2O8 porous ceramics

    NASA Astrophysics Data System (ADS)

    Poowancum, A.; Matsumaru, K.; Juárez-Ramírez, I.; Torres-Martínez, L. M.; Fu, Z. Y.; Lee, S. W.; Ishizaki, K.

    2011-03-01

    Low or zero thermal expansion porous ceramics are required for several applications. In this work, near-zero thermal expansion porous ceramics were fabricated using SiC and ZrW2O8 as positive and negative thermal expansion materials, respectively, bonded by soda lime glass. The mixture of SiC, ZrW2O8 and soda lime glass was sintered by Pulsed Electric Current Sintering (PECS, sometimes called Spark Plasma Sintering, SPS) at 700 °C. Sintered samples with ZrW2O8 particle sizes smaller than 25 μm have a high thermal expansion coefficient, because ZrW2O8 reacts with soda lime glass to form Na2ZrW3O12 during the sintering process. The reaction between soda lime glass and ZrW2O8 is reduced by increasing the particle size of ZrW2O8. The sintered sample with a ZrW2O8 particle size of 45-90 μm shows near-zero thermal expansion.

  20. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples consisting of quaternary bottom sediments were collected from the shallow marine zone off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain size distribution and statistical relationships. Basic statistical parameters such as mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environment of the sediments and to determine whether the sediments derived uniformly from a beach or a river environment. The sediments of all areas varied in sorting, ranging from very well sorted to poorly sorted, strongly negatively skewed to strongly positively skewed, and extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which control the sediment distribution pattern in various ways.
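
    One standard way to obtain the four statistics named above is the Folk and Ward (1957) graphic method, computed from percentiles of the grain-size distribution in phi units. Whether the authors used graphic or moment measures is not stated, and the sizes below are synthetic.

```python
# Hedged sketch: Folk & Ward graphic measures from a synthetic phi-unit sample.
import numpy as np

rng = np.random.default_rng(3)
phi = rng.normal(2.0, 0.8, 500)   # grain size in phi units, phi = -log2(d_mm)

p = {q: np.percentile(phi, q) for q in (5, 16, 25, 50, 75, 84, 95)}
mean = (p[16] + p[50] + p[84]) / 3
sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
            + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
print(f"mean = {mean:.2f} phi, sorting = {sorting:.2f}, "
      f"skewness = {skewness:.2f}, kurtosis = {kurtosis:.2f}")
```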

  1. Effect of freezing temperature in thermally induced phase separation method in hydroxyapatite/chitosan-based bone scaffold biomaterial

    NASA Astrophysics Data System (ADS)

    Albab, Muh Fadhil; Yuwono, Akhmad Herman; Sofyan, Nofrijon; Ramahdita, Ghiska

    2017-02-01

    In the current study, hydroxyapatite (HA)/chitosan-based bone scaffolds were fabricated using the Thermally Induced Phase Separation (TIPS) method under freezing temperature variation of -20, -30, -40 and -80 °C. Samples with a weight percent ratio of 70% HA to 30% chitosan were homogeneously mixed and subsequently dissolved in 2% acetic acid. The synthesized samples were further characterized using Fourier transform infrared (FTIR) spectroscopy, compressive testing and scanning electron microscopy (SEM). The results showed that a low freezing temperature reduced the pore size and increased the compressive strength of the scaffold. At a freezing temperature of -20 °C, the pore size was 133.93 µm with a compressive strength of 5.9 kPa, while at -80 °C the pore size declined to 60.55 µm with a compressive strength of 29.8 kPa. Considering the obtained characteristics, the HA/chitosan scaffold obtained in this work has the potential to be applied as a bone scaffold.

  2. A critical review on characterization strategies of organic matter for wastewater and water treatment processes.

    PubMed

    Tran, Ngoc Han; Ngo, Huu Hao; Urase, Taro; Gin, Karina Yew-Hoong

    2015-10-01

    The presence of organic matter (OM) in raw wastewater, treated wastewater effluents, and natural water samples has been known to cause many problems in wastewater treatment and water reclamation processes, such as treatability, membrane fouling, and the formation of potentially toxic by-products during wastewater treatment. This paper summarizes the current knowledge on methods for the characterization and quantification of OM in water samples in relation to wastewater and water treatment processes, including: (i) characterization based on biodegradability; (ii) characterization based on particle size distribution; (iii) fractionation based on hydrophilic/hydrophobic properties; (iv) characterization based on molecular weight (MW) size distribution; and (v) characterization based on fluorescence excitation emission matrix. In addition, the advantages, disadvantages and applications of these methods are discussed in detail. The establishment of correlations among biodegradability, hydrophobic/hydrophilic fractions, MW size distribution of OM, membrane fouling and the potential for formation of toxic by-products is highly recommended for further studies.

  3. The enigmatic molar from Gondolin, South Africa: implications for Paranthropus paleobiology.

    PubMed

    Grine, Frederick E; Jacobs, Rachel L; Reed, Kaye E; Plavcan, J Michael

    2012-10-01

    The specific attribution of the large hominin M(2) (GDA-2) from Gondolin has significant implications for the paleobiology of Paranthropus. If it is a specimen of Paranthropus robustus it impacts that species' size range, and if it belongs to Paranthropus boisei it has important biogeographic implications. We evaluate crown size, cusp proportions and the likelihood of encountering a large-bodied mammal species in both East and South Africa in the Early Pleistocene. The tooth falls well outside the P. robustus sample range, and comfortably within that for penecontemporaneous P. boisei. Analyses of sample range, distribution and variability suggest that it is possible, albeit unlikely, to find a M(2) of this size in the current P. robustus sample. However, taphonomic agents - carnivore (particularly leopard) feeding behaviors - have likely skewed the size distribution of the Swartkrans and Drimolen P. robustus assemblages. In particular, assemblages of large-bodied mammals accumulated by leopards typically display high proportions of juveniles and smaller adults. The skew in the P. robustus sample is consistent with this type of assemblage. Morphological evidence in the form of cusp proportions is congruent with GDA-2 representing P. robustus rather than P. boisei. The comparatively small number of large-bodied mammal species common to both South and East Africa in the Early Pleistocene suggests a low probability of encountering an herbivorous australopith in both. Our results are most consistent with the interpretation of the Gondolin molar as a very large specimen of P. robustus. This, in turn, suggests that large, presumptively male specimens are rare, and that the levels of size variation (sexual dimorphism) previously ascribed to this species are likely to be gross underestimates.

  4. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    PubMed

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of the VC and intracluster correlation coefficient (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small-sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
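
    The bootstrapping step can be sketched as follows: resample clusters with replacement and watch the spread of the VC estimate grow as the per-cluster sample shrinks. The cluster data here are simulated, not the Mali survey data, and the noise parameters are assumptions.

```python
# Hedged sketch: bootstrap standard error of a cluster-survey VC estimate
# as the per-cluster sample size decreases (all data simulated).
import numpy as np

rng = np.random.default_rng(4)

def vc_bootstrap_se(cluster_size, n_clusters=10, true_vc=0.80,
                    between_sd=0.05, reps=2_000):
    cluster_p = np.clip(rng.normal(true_vc, between_sd, n_clusters), 0, 1)
    data = [rng.binomial(1, p, cluster_size) for p in cluster_p]
    estimates = []
    for _ in range(reps):
        pick = rng.integers(0, n_clusters, n_clusters)  # resample clusters
        estimates.append(np.concatenate([data[i] for i in pick]).mean())
    return float(np.std(estimates))

for m in (15, 10, 5, 3):
    print(f"10 x {m:>2} design: bootstrap SE of VC = {vc_bootstrap_se(m):.3f}")
```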

  5. Highly repeatable room temperature negative differential resistance in AlN/GaN resonant tunneling diodes grown by molecular beam epitaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Growden, Tyler A.; Fakhimi, Parastou; Berger, Paul R., E-mail: pberger@ieee.org

    AlN/GaN resonant tunneling diodes grown on low dislocation density semi-insulating bulk GaN substrates via plasma-assisted molecular-beam epitaxy are reported. The devices were fabricated using a six-mask-level, fully isolated process. Stable room temperature negative differential resistance (NDR) was observed across the entire sample. The NDR exhibited no hysteresis, background light sensitivity, or degradation of any kind after more than 1000 continuous up-and-down voltage sweeps. The sample exhibited a ∼90% yield of operational devices which routinely displayed an average peak current density of 2.7 kA/cm² and a peak-to-valley current ratio of ≈1.15 across different sizes.

  6. Suspended sediment transport under estuarine tidal channel conditions

    USGS Publications Warehouse

    Sternberg, R.W.; Kranck, K.; Cacchione, D.A.; Drake, D.E.

    1988-01-01

    A modified version of the GEOPROBE tripod has been used to monitor flow conditions and suspended sediment distribution in the bottom boundary layer of a tidal channel within San Francisco Bay, California. Measurements were made every 15 minutes over three successive tidal cycles. They included mean velocity profiles from four electromagnetic current meters within 1 m of the seabed; mean suspended sediment concentration profiles from seven miniature nephelometers operated within 1 m of the seabed; near-bottom pressure fluctuations; vertical temperature gradient; and bottom photographs. Additionally, suspended sediment was sampled from four levels within 1 m of the seabed three times during each successive flood and ebb cycle. While the instrument was deployed, STD-nephelometer measurements were made throughout the water column, water samples were collected every 1-2 hours, and bottom sediment was sampled at the deployment site. From these measurements, estimates were made of particle settling velocity (ws) from size distributions of the suspended sediment and friction velocity (U*) from the velocity profiles, and the reference concentration (Ca) was measured at z = 20 cm. These parameters were used in the suspended sediment distribution equations to evaluate their ability to predict the observed suspended sediment profiles. Three suspended sediment particle conditions were evaluated: (1) individual particle sizes in the 4-11 φ (62.5-0.5 μm) range with the reference concentration Ca at z = 20 cm (Cφ); (2) individual particle sizes in the 4-6 φ range, with flocs representing the 7-11 φ range, and the reference concentration Ca at z = 20 cm (Cf); and (3) individual particle sizes in the 4-6 φ range, with flocs representing the 7-11 φ range, and the reference concentration predicted as a function of the bed sediment size distribution and the square of the excess shear stress. In addition, computations of particle flux were made in order to show vertical variations in horizontal mass flux for varying flow conditions.
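
    A common form of the "suspended sediment distribution equations" referred to above is the Rouse profile, which predicts concentration at height z from the reference concentration Ca, the settling velocity ws and the friction velocity U*. Treating it as the evaluated model is an assumption on our part, and the parameter values below are invented.

```python
# Hedged sketch: the Rouse suspended-sediment concentration profile.
import numpy as np

def rouse_profile(z, h, za, ca, ws, ustar, kappa=0.4):
    """C(z) = Ca * [ ((h - z)/z) * (za/(h - za)) ] ** (ws / (kappa * U*))."""
    p = ws / (kappa * ustar)
    return ca * (((h - z) / z) * (za / (h - za))) ** p

z = np.linspace(0.21, 9.0, 5)   # heights above the bed, m
c = rouse_profile(z, h=10.0, za=0.20, ca=50.0, ws=0.001, ustar=0.02)
for zi, ci in zip(z, c):
    print(f"z = {zi:4.2f} m   C = {ci:5.1f} mg/L")
```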

  7. Finite element model study of the effect of corner rounding on detectability of corner cracks using bolt hole eddy current

    NASA Astrophysics Data System (ADS)

    Underhill, P. R.; Krause, T. W.

    2017-02-01

    Recent work has shown that the detectability of corner cracks in bolt-holes is compromised when rounding of corners arises, as might occur during bolt-hole removal. Probability of Detection (POD) studies normally require a large number of samples of both fatigue cracks and electric discharge machined notches. In the particular instance of rounding of bolt-hole corners the generation of such a large set of samples representing the full spectrum of potential rounding would be prohibitive. In this paper, the application of Finite Element Method (FEM) modeling is used to supplement the study of detection of cracks forming at the rounded corners of bolt-holes. FEM models show that rounding of the corner of the bolt-hole reduces the size of the response to a corner crack to a greater extent than can be accounted for by loss of crack area. This reduced sensitivity can be ascribed to a lower concentration of eddy currents at the rounded corner surface and greater lift-off of pick-up coils relative to that of a straight-edge corner. A rounding with a radius of 0.4 mm (.016 inch) showed a 20% reduction in the strength of the crack signal. Assuming linearity of the crack signal with crack size, this would suggest an increase in the minimum detectable size by 25%.
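
    The closing inference is simple arithmetic worth making explicit: with a signal linear in crack size, retaining only 80% of the signal means a crack must be 1/0.8 = 1.25 times larger to produce the original response.

```python
# Worked arithmetic for the abstract's final step.
signal_retained = 0.80                 # 20% reduction from corner rounding
size_factor = 1 / signal_retained      # crack size needed for original signal
print(f"minimum detectable size increases by {100 * (size_factor - 1):.0f}%")
```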

  8. Integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling for assessing the provenance of beach sand in the San Francisco Bay Coastal System

    USGS Publications Warehouse

    Barnard, Patrick L.; Foxgrover, Amy C.; Elias, Edwin P.L.; Erikson, Li H.; Hein, James; McGann, Mary; Mizell, Kira; Rosenbauer, Robert J.; Swarzenski, Peter W.; Takesue, Renee K.; Wong, Florence L.; Woodrow, Don

    2013-01-01

    Over 150 million m³ of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.

  9. Patency of paediatric endotracheal tubes for airway instrumentation.

    PubMed

    Elfgen, J; Buehler, P K; Thomas, J; Kemper, M; Imach, S; Weiss, M

    2017-01-01

    Airway exchange catheters (AEC) and fiberoptic bronchoscopes (FOB) for tracheal intubation are selected so that there is only a minimal gap between their outer diameter and the inner diameter of the endotracheal tube (ETT), to minimize the risk of impingement during airway instrumentation. This study aimed to test the ease of passage of FOBs and AECs through paediatric ETTs of different sizes and from different manufacturers when using current recommendations for dimensional equipment compatibility taken from textbooks and manufacturers' information. Twelve different brands of cuffed and uncuffed ETTs sized ID 2.5 to 5.0 mm were evaluated in an in vitro set-up. Ease of device passage as well as the locations of impaired passage within the ETT were assessed. Redundant samples were used for same-sized ETTs and all measurements were triple-checked in randomized order. In total, 51 paired samples of uncuffed and cuffed paediatric ETTs were tested. There were substantial differences in the ease of ETT passage, concordantly for FOBs and AECs, among different manufacturers, but also among the product lines from the same manufacturer for a given ID size. Restriction to passage was most frequently found near the endotracheal tube tip or as a gradually increasing resistance along the ETT shaft. Current recommendations for the dimensional compatibility of AECs and FOBs with ETTs do not appear to be completely accurate for all ETT brands available. We recommend that specific equipment combinations always be tested carefully together before attempting to use them in a patient.

  10. Pesticides in the atmosphere: a comparison of gas-particle partitioning and particle size distribution of legacy and current-use pesticides

    NASA Astrophysics Data System (ADS)

    Degrendele, C.; Okonski, K.; Melymuk, L.; Landlová, L.; Kukučka, P.; Audy, O.; Kohoutek, J.; Čupr, P.; Klánová, J.

    2016-02-01

    This study presents a comparison of the seasonal variation, gas-particle partitioning, and particle-phase size distribution of organochlorine pesticides (OCPs) and current-use pesticides (CUPs) in air. Two years (2012/2013) of weekly air samples were collected at a background site in the Czech Republic using a high-volume air sampler. To study the particle-phase size distribution, air samples were also collected at an urban and a rural site in the area of Brno, Czech Republic, using a cascade impactor separating atmospheric particulates into six size fractions. Major differences were found in the atmospheric distribution of OCPs and CUPs. The atmospheric concentrations of CUPs were driven by agricultural activities, while secondary sources such as volatilization from surfaces governed the atmospheric concentrations of OCPs. Moreover, clear differences were observed in gas-particle partitioning; CUP partitioning was influenced by adsorption onto mineral surfaces, while OCPs partitioned to aerosols mainly through absorption. A predictive method for estimating gas-particle partitioning has been derived and is proposed for polar and non-polar pesticides. Finally, while OCPs and the majority of CUPs were largely found on fine particles, four CUPs (carbendazim, isoproturon, prochloraz, and terbuthylazine) had higher concentrations on coarse particles (> 3.0 µm), which may be related to the pesticide application technique. This finding is particularly important and should be further investigated, given that large particles entail lower risks from inhalation (regardless of the toxicity of the pesticide) and lower potential for long-range atmospheric transport.
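
    The abstract's own predictive method is not given, but a generic stand-in conveys the idea of adsorption-driven partitioning: the Junge-Pankow model estimates the particle-bound fraction from the compound's subcooled-liquid vapour pressure and the aerosol surface area. All parameter values below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: Junge-Pankow adsorption model for the particle-bound fraction.
def junge_pankow(p_l, c=17.2, theta=1.1e-6):
    """phi = c*theta / (p_L + c*theta)
    p_l:   subcooled-liquid vapour pressure (Pa)
    c:     Junge constant (Pa cm)
    theta: aerosol surface area per volume of air (cm^2/cm^3)"""
    return c * theta / (p_l + c * theta)

for label, p_l in [("volatile pesticide (assumed p_L)", 1e-3),
                   ("semivolatile pesticide (assumed p_L)", 1e-5),
                   ("low-volatility pesticide (assumed p_L)", 1e-7)]:
    print(f"{label:36s} particle fraction = {junge_pankow(p_l):.3f}")
```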

  11. Retention of Electronic Conductivity in LaAlO3/SrTiO3 Nanostructures Using a SrCuO2 Capping Layer

    NASA Astrophysics Data System (ADS)

    Aurino, P. P.; Kalabukhov, A.; Borgani, R.; Haviland, D. B.; Bauch, T.; Lombardi, F.; Claeson, T.; Winkler, D.

    2016-08-01

    The interface between two wide band-gap insulators, LaAlO3 and SrTiO3 (LAO/STO), offers a unique playground to study the interplay and competition between different ordering phenomena in a strongly correlated two-dimensional electron gas. Recent studies of the LAO/STO interface reveal the inhomogeneous nature of the 2DEG that strongly influences electrical-transport properties. Nanowires needed in future applications may be adversely affected, and our aim is, thus, to produce a more homogeneous electron gas. In this work, we demonstrate that nanostructures fabricated in the quasi-2DEG at the LaAlO3/SrTiO3 interface, capped with a SrCuO2 layer, retain their electrical resistivity and mobility independent of the structure size, ranging from 100 nm to 30 μm. This is in contrast to noncapped LAO/STO structures, where the room-temperature electrical resistivity significantly increases when the structure size becomes smaller than 1 μm. High-resolution intermodulation electrostatic force microscopy reveals an inhomogeneous surface potential with "puddles" of a characteristic size of 130 nm in the noncapped samples and a more uniform surface potential with a larger characteristic puddle size in the capped samples. In addition, capped structures show superconductivity below 200 mK and nonlinear current-voltage characteristics with a clear critical current observed up to 700 mK. Our findings shed light on the complicated nature of the 2DEG at the LAO/STO interface and may also be used for the design of electronic devices.

  12. Size Matters: Penis Size and Sexual Position in Gay Porn Profiles.

    PubMed

    Brennan, Joseph

    2018-01-01

    This article combines qualitative and quantitative textual approaches to the representation of penis size and sexual position of performers in 10 of the most visited gay pornography Web sites currently in operation. Specifically, in excess of 6,900 performer profiles sourced from 10 commercial Web sites are analyzed. Textual analysis of the profile descriptions is combined with a quantitative representation of disclosed penis size and sexual position, which is presented visually by two figures. The figures confirm that these sites generally market themselves as featuring penises that are extraordinarily large and find a sample-wide correlation between smaller penis sizes (5-6.5 inches) and receptive sexual acts (bottoming), and larger (8.5-13 inches) with penetrative acts (topping). These observations are supported through the qualitative textual readings of how the performers are described on these popular sites, revealing the narratives and marketing strategies that shape the construction of popular porn brands, performers, and profitable fantasies.

  13. Magnetization measurements on multifilamentary Nb3Sn and NbTi conductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, A.K.; Robins, K.E.; Sampson, W.B.

    1985-03-01

    The effective filament size has been determined for a number of high current Nb3Sn multifilamentary composites. In most cases it is much larger than the nominal filament size. For the smallest filaments (∼1 micron) the effective size can be as much as a factor of forty times the nominal size. Samples made by the "internal tin", "bronze route", and "jelly roll" methods have been examined with filaments in the range of one to ten microns. Rate-dependent magnetization and "flux jumping" have been observed in some cases. NbTi composites ranging in filament size from nine to two hundred microns and with copper-to-superconductor ratios between 1.6:1 and 7:1 have been examined in the same apparatus. Low field "flux jumping" was only observed in conductors with very large filaments and relatively little stabilizing copper.

  14. A descriptive study of sexual homicide in Canada: implications for police investigation.

    PubMed

    Beauregard, Eric; Martineau, Melissa

    2013-12-01

    Few empirical studies have been conducted that examine the phenomenon of sexual homicide, and among these studies, many have been limited by small sample size. Although interesting and informative, these studies may not be representative of the greater phenomenon of sexual murder and may be subject to sampling bias that could have significant effects on results. The current study provides a descriptive analysis of the largest sample of sexual homicide cases across Canada in the past 62 years. In doing so, it examines offender and victim characteristics, victim targeting and access, and modus operandi. Findings show that the cases of sexual homicide and the sexual murderers included in the current study differ in many respects from the portrait of the sexual murderer and his or her crime depicted in previous studies. The authors' results may prove useful to police officers responsible for the investigation of these crimes.

  15. Performance analysis of the toroidal field ITER production conductors

    NASA Astrophysics Data System (ADS)

    Breschi, M.; Macioce, D.; Devred, A.

    2017-05-01

    The production of the superconducting cables for the toroidal field (TF) magnets of the ITER machine has recently been completed at the manufacturing companies selected during the previous qualification phase. The quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers include performance tests of several conductor samples from selected unit lengths. The short full-size samples (4 m long) were subjected to DC and AC tests in the SULTAN facility at CRPP in Villigen, Switzerland. In a previous work the results of the tests of the conductor performance qualification samples were reported. This work reports the analyses of the results of the tests of the production conductor samples. The results reported here concern the values of current sharing temperature, critical current, effective strain and n-value from the DC tests and the energy dissipated per cycle from the AC loss tests. A detailed comparison is also presented between the performance of the conductors and that of their constituting strands.

  16. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
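
    The four criteria differ only in how they penalize the number of parameters p relative to the sample size n; written out from their standard definitions (with SABIC's ln((n + 2)/24) term following Sclove, 1987), they are easy to compute side by side. The class counts and log-likelihoods below are invented for illustration.

```python
# Hedged sketch: the four information criteria from their standard formulas.
import math

def criteria(logL, p, n):
    aic = -2 * logL + 2 * p
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)
    bic = -2 * logL + p * math.log(n)
    sabic = -2 * logL + p * math.log((n + 2) / 24)
    return aic, aicc, bic, sabic

for classes, logL, p in [(2, -5210.4, 25), (3, -5198.7, 38)]:
    aic, aicc, bic, sabic = criteria(logL, p, n=1_000)
    print(f"{classes} classes: AIC = {aic:.1f}, AICC = {aicc:.1f}, "
          f"BIC = {bic:.1f}, SABIC = {sabic:.1f}")
```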

  17. The Conceptualisation of Dreams by Adults with Intellectual Disabilities: Relationship with Theory of Mind Abilities and Verbal Ability

    ERIC Educational Resources Information Center

    Dodd, A.; Hare, D. J.; Hendy, S.

    2008-01-01

    Background: Empirical studies suggest that individuals with intellectual disabilities (ID) have difficulties in conceptualising dreams as perceptually private, non-physical, individuated and potentially fictional entities. The aim of the current study was to replicate the results found by Stenfert Kroese et al. using a comparative sample size, and…

  18. One Size (Never) Fits All: Segment Differences Observed Following a School-Based Alcohol Social Marketing Program

    ERIC Educational Resources Information Center

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-01-01

    Background: According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. Methods: A sample of 371 year 10 students…

  19. Developmental Surveillance and Screening Practices by Pediatric Primary Care Providers: Implications for Early Intervention Professionals

    ERIC Educational Resources Information Center

    Porter, Sallie; Qureshi, Rubab; Caldwell, Barbara Ann; Echevarria, Mercedes; Dubbs, William B.; Sullivan, Margaret W.

    2016-01-01

    This study used a survey approach to investigate current developmental surveillance and developmental screening practices by pediatric primary care providers in a diverse New Jersey county. A total of 217 providers were contacted with a final sample size of 57 pediatric primary care respondents from 13 different municipalities. Most providers…

  20. The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups

    ERIC Educational Resources Information Center

    Pero-Cebollero, Maribel; Guardia-Olmos, Joan

    2013-01-01

    In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…

  1. Battery condenser system PM10 emission factors and rates for cotton gins: Method 201A PM10 sizing cyclones

    USDA-ARS?s Scientific Manuscript database

    This manuscript is part of a series of manuscripts that characterize cotton gin emissions from the standpoint of stack sampling. The impetus behind this project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. A key component of this study ...

  2. Communicating to Learn: Infants' Pointing Gestures Result in Optimal Learning

    ERIC Educational Resources Information Center

    Lucca, Kelsey; Wilbourn, Makeba Parramore

    2018-01-01

    Infants' pointing gestures are a critical predictor of early vocabulary size. However, it remains unknown precisely how pointing relates to word learning. The current study addressed this question in a sample of 108 infants, testing one mechanism by which infants' pointing may influence their learning. In Study 1, 18-month-olds, but not…

  3. Microbial utilization of nitrogen in cold core eddies: size does matter

    NASA Astrophysics Data System (ADS)

    McInnes, A.; Messer, L. F.; Laiolo, L.; Laverock, B.; Laczka, O.; Brown, M. V.; Seymour, J.; Doblin, M.

    2016-02-01

    Microbes form the base of the marine food web and the first step in the biological carbon pump, so understanding changes in microbial community composition is essential for predicting changes in the marine nitrogen (N) cycle. Climate change projections suggest that oligotrophic waters will become more stratified, with a concomitant shift in microbial community composition based on changes in N supply. In regions of strong boundary currents, eddies could reduce this limitation through nutrient uplift and other forms of eddy mixing. Understanding the preference of microbes for different forms of N is essential for understanding and predicting shifts in the microbial community. This study aims to understand the utilization of different N species within different microbial size fractions, and to identify the preferred N source for these groups across varying mesoscale and sub-mesoscale features in the East Australian Current (EAC). In June 2015 we sampled microbial communities from three depths (surface, chlorophyll-a maximum and below the mixed layer) in three mesoscale and sub-mesoscale eddy features, as well as two end-point water masses (coastal and oligotrophic EAC water). Particulate matter was analysed for stable C and N isotopes, and seawater incubations with trace amounts of 15NO3, 15NH4, 15N2, 15N-urea and 13C were undertaken. All samples were size fractionated into 0.3-2.0 µm, 2.0-10 µm, and >10 µm size classes, encompassing the majority of microbes in these waters. Microbial community composition was also assessed (pigments, flow cytometry, DNA), as well as physical and chemical parameters, to better understand the drivers of carbon fixation and nitrogen utilization across a diversity of water masses and microbial size classes. We observed that small, young features have a greater abundance of the larger size classes. We therefore predict that these microbes will preferentially draw down the recently pulsed NO3. Ultimately, the size and age of a feature will determine N compound utilization and microbial community composition, and as the feature grows in size and age a community succession will lead to differential, more diverse N compound utilization.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, S.; Aldering, G.; Antilogus, P.

    The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral timeseries of 185 supernovae. This talk presents results from a portion of this sample, including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset.

  5. The 1815 Tambora ash fall: implications for transport and deposition of distal ash on land and in the deep sea

    NASA Astrophysics Data System (ADS)

    Kandlbauer, Jessica; Carey, Steven N.; Sparks, R. Stephen J.

    2013-04-01

    Tambora volcano lies on the Sanggar Peninsula of Sumbawa Island in the Indonesian archipelago. During the great 1815 explosive eruption, the majority of the erupted pyroclastic material was dispersed and subsequently deposited into the Indian Ocean and Java Sea. This study focuses on the grain size distribution of distal 1815 Tambora ash deposited in the deep sea compared to ash fallen on land. Grain size distribution is an important factor in assessing potential risks to aviation and human health, and provides additional information about the ash transport mechanisms within volcanic umbrella clouds. Grain size analysis was performed using high precision laser diffraction for a particle range of 0.2 μm-2 mm diameter. The results indicate that the deep-sea samples provide a smooth transition to the land samples in terms of grain size distributions despite the different depositional environments. Even the very fine ash fraction (<10 μm) is deposited in the deep sea, suggesting vertical density currents as a fast and effective means of transport to the seafloor. The measured grain size distribution is consistent with an improved atmospheric gravity current sedimentation model that takes into account the finite duration of an eruption. In this model, the eruption time and particle fall velocity are the critical parameters for assessing the ash component deposited while the cloud advances versus the ash component deposited once the eruption terminates. With the historical data on eruption duration (maximum 24 h) and volumetric flow rate of the umbrella cloud (∼1.5-2.5 × 10¹¹ m³/s) as input to the improved model, and assuming a combination of a 3 h Plinian phase and a 21 h co-ignimbrite phase, the mean deviation of the predicted versus observed grain size distribution is reduced by more than half (∼9.4% to ∼3.7%).

  6. High speed, intermediate resolution, large area laser beam induced current imaging and laser scribing system for photovoltaic devices and modules

    NASA Astrophysics Data System (ADS)

    Phillips, Adam B.; Song, Zhaoning; DeWitt, Jonathan L.; Stone, Jon M.; Krantz, Patrick W.; Royston, John M.; Zeller, Ryan M.; Mapes, Meghan R.; Roland, Paul J.; Dorogi, Mark D.; Zafar, Syed; Faykosh, Gary T.; Ellingson, Randy J.; Heben, Michael J.

    2016-09-01

    We have developed a laser beam induced current imaging tool for photovoltaic devices and modules that utilizes diode pumped Q-switched lasers. Power densities on the order of one sun (100 mW/cm²) can be produced in a ˜40 μm spot size by operating the lasers at low diode current and high repetition rate. Using galvanometer-controlled mirrors in an overhead configuration and high speed data acquisition, large areas can be scanned in short times. As the beam is rastered, focus is maintained on a flat plane with an electronically controlled lens that is positioned in a coordinated fashion with the movements of the mirrors. The system can also be used in a scribing mode by increasing the diode current and decreasing the repetition rate. In either mode, the instrument can accommodate samples ranging in size from laboratory scale (a few cm²) to full modules (1 m²). Customized LabVIEW programs were developed to control the components and acquire, display, and manipulate the data in imaging mode.

  7. The Effect of Transcranial Direct Current Stimulation (tDCS) Electrode Size and Current Intensity on Motor Cortical Excitability: Evidence From Single and Repeated Sessions.

    PubMed

    Ho, Kerrie-Anne; Taylor, Janet L; Chew, Taariq; Gálvez, Verònica; Alonzo, Angelo; Bai, Siwei; Dokos, Socrates; Loo, Colleen K

    2016-01-01

    Current density is considered an important factor in determining the outcomes of tDCS, and is determined by the current intensity and electrode size. Previous studies examining the effect of these parameters on motor cortical excitability with small sample sizes reported mixed results. This study examined the effect of current intensity (1 mA, 2 mA) and electrode size (16 cm², 35 cm²) on motor cortical excitability over single and repeated tDCS sessions. Data from seven studies in 89 healthy participants were pooled for analysis. Single-session data were analyzed using mixed effects models and repeated-session data were analyzed using mixed design analyses of variance. Computational modeling was used to examine the electric field generated. The magnitude of increases in excitability after anodal tDCS was modest. For single-session tDCS, the 35 cm² electrodes produced greater increases in cortical excitability compared to the 16 cm² electrodes. There were no differences in the magnitude of cortical excitation produced by 1 mA and 2 mA tDCS. The repeated-sessions data also showed that there were greater increases in excitability with the 35 cm² electrodes. Further, repeated sessions of tDCS with the 35 cm² electrodes resulted in a cumulative increase in cortical excitability. Computational modeling predicted higher electric field at the motor hotspot for the 35 cm² electrodes. 2 mA tDCS does not necessarily produce larger effects than 1 mA tDCS in healthy participants. Careful consideration should be given to the exact positioning, size and orientation of tDCS electrodes relative to cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
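
    Because the study frames its conditions in terms of current density, the underlying arithmetic is worth making explicit. A minimal sketch (intensities and electrode areas from the abstract; the helper itself is ours):

        # Nominal current density J = I / A for the four tDCS conditions studied.
        for intensity_mA in (1.0, 2.0):
            for area_cm2 in (16.0, 35.0):
                j = intensity_mA / area_cm2
                print(f"{intensity_mA} mA over {area_cm2} cm^2 -> {j:.3f} mA/cm^2")

    Note that 2 mA through a 35 cm² pad (≈0.057 mA/cm²) gives a slightly lower nominal current density than 1 mA through a 16 cm² pad (≈0.063 mA/cm²), which is consistent with the finding that intensity alone does not determine the excitability response.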

  8. After site selection and before data analysis: sampling, sorting, and laboratory procedures used in stream benthic macroinvertebrate monitoring programs by USA state agencies

    USGS Publications Warehouse

    Carter, James L.; Resh, Vincent H.

    2001-01-01

    A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs but 80.2% of the methods used a mesh size between 500 and 600 μm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 μm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1 (sorting by eye) to 30×, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
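
    Fixed-count subsampling of the kind reported (remove 100 organisms, then estimate composition) is straightforward to make reproducible in code. A sketch with hypothetical taxon counts:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(42)

        # Hypothetical sorted composite sample: one taxon label per organism.
        sample = (["Ephemeroptera"] * 300 + ["Plecoptera"] * 120 +
                  ["Trichoptera"] * 180 + ["Chironomidae"] * 400)

        # Fixed-count subsample of 100 organisms, drawn without replacement.
        subsample = rng.choice(sample, size=100, replace=False)
        print(Counter(subsample.tolist()))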

  9. Fossil shrews from Honduras and their significance for late glacial evolution in body size (Mammalia: Soricidae: Cryptotis)

    USGS Publications Warehouse

    Woodman, N.; Croft, D.A.

    2005-01-01

    Our study of mammalian remains excavated in the 1940s from McGrew Cave, north of Copan, Honduras, yielded an assemblage of 29 taxa that probably accumulated predominantly as the result of predation by owls. Among the taxa present are three species of small-eared shrews, genus Cryptotis. One species, Cryptotis merriami, is relatively rare among the fossil remains. The other two shrews, Cryptotis goodwini and Cryptotis orophila, are abundant and exhibit morphometrical variation distinguishing them from modern populations. Fossils of C. goodwini are distinctly and consistently smaller than modern members of the species. To quantify the size differences, we derived common measures of body size for fossil C. goodwini using regression models based on modern samples of shrews in the Cryptotis mexicana-group. Estimated mean length of head and body for the fossil sample is 72-79 mm, and estimated mean mass is 7.6-9.6 g. These numbers indicate that the fossil sample averaged 6-14% smaller in head and body length and 39-52% less in mass than the modern sample and that increases of 6-17% in head and body length and 65-108% in mass occurred to achieve the mean body size of the modern sample. Conservative estimates of fresh (wet) food intake based on mass indicate that such a size increase would require a 37-58% increase in daily food consumption. In contrast to C. goodwini, fossil C. orophila from the cave is not different in mean body size from modern samples. The fossil sample does, however, show slightly greater variation in size than is currently present throughout the modern geographical distribution of the taxon. Moreover, variation in some other dental and mandibular characters is more constrained, exhibiting a more direct relationship to overall size. Our study of these species indicates that North American shrews have not all been static in size through time, as suggested by some previous work with fossil soricids. Lack of stratigraphic control within the site and our failure to obtain reliable radiometric dates on remains restrict our opportunities to place the site in a firm temporal context. However, the morphometrical differences we document for fossil C. orophila and C. goodwini show them to be distinct from modern populations of these shrews. Some other species of fossil mammals from McGrew Cave exhibit distinct size changes of the magnitudes experienced by many northern North American and some Mexican mammals during the transition from late glacial to Holocene environmental conditions, and it is likely that at least some of the remains from the cave are late Pleistocene in age. One curious factor is that, whereas most mainland mammals that exhibit large-scale size shifts during the late glacial/postglacial transition experienced dwarfing, C. goodwini increased in size. The lack of clinal variation in modern C. goodwini supports the hypothesis that size evolution can result from local selection rather than from cline translocation. Models of size change in mammals indicate that increased size, such as that observed for C. goodwini, is a likely consequence of increased availability of resources and, thereby, a relaxation of selection during critical times of the year.

  10. Flow field-flow fractionation for the analysis of nanoparticles used in drug delivery.

    PubMed

    Zattoni, Andrea; Roda, Barbara; Borghi, Francesco; Marassi, Valentina; Reschiglian, Pierluigi

    2014-01-01

    Structured nanoparticles (NPs) with controlled size distribution and novel physicochemical features present fundamental advantages as drug delivery systems with respect to bulk drugs. NPs can transport and release drugs to target sites with high efficiency and limited side effects. Regulatory institutions such as the US Food and Drug Administration (FDA) and the European Commission have pointed out that major limitations to the real application of current nanotechnology lie in the lack of homogeneous, pure and well-characterized NPs, also because of the lack of well-assessed, robust routine methods for their quality control and characterization. Many properties of NPs are size-dependent, thus the particle size distribution (PSD) plays a fundamental role in determining NP properties. At present, scanning and transmission electron microscopy (SEM, TEM) are among the most widely used techniques for characterizing NP size. Size-exclusion chromatography (SEC) is also applied to the size separation of complex NP samples. SEC selectivity is, however, quite limited for very large molar mass analytes such as NPs, and interactions with the stationary phase can alter NP morphology. Flow field-flow fractionation (F4) is increasingly used as a mature separation method to size-sort and characterize NPs under native conditions. Moreover, hyphenation with light scattering (LS) methods can enhance the accuracy of size analysis of complex samples. In this paper, the applications of F4-LS to the size analysis of NPs used as drug delivery systems, and to the study of their stability and drug-release behavior, are reviewed. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Application of a time-dependent coalescence process for inferring the history of population size changes from DNA sequence data.

    PubMed

    Polanski, A; Kimmel, M; Chakraborty, R

    1998-05-12

    Distribution of pairwise differences of nucleotides from data on a sample of DNA sequences from a given segment of the genome has been used in the past to draw inferences about the past history of population size changes. However, all earlier methods assume a given model of population size changes (such as sudden expansion), parameters of which (e.g., time and amplitude of expansion) are fitted to the observed distributions of nucleotide differences among pairwise comparisons of all DNA sequences in the sample. Our theory indicates that for any time-dependent population size, N(tau) (in which time tau is counted backward from present), a time-dependent coalescence process yields the distribution, p(tau), of the time of coalescence between two DNA sequences randomly drawn from the population. Prediction of p(tau) and N(tau) requires the use of a reverse Laplace transform known to be unstable. Nevertheless, simulated data obtained from three models of monotone population change (stepwise, exponential, and logistic) indicate that the pattern of a past population size change leaves its signature on the pattern of DNA polymorphism. Application of the theory to the published mtDNA sequences indicates that the current mtDNA sequence variation is not inconsistent with a logistic growth of the human population.
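
    The pair-coalescence density that drives this inference has a closed form once N(τ) is specified: for two lineages the coalescence hazard at time τ is 1/N(τ) (with time in scaled coalescent units), so p(τ) = (1/N(τ)) exp(−∫₀^τ dt/N(t)). A numerical sketch under an assumed exponential-growth history, which is illustrative rather than the authors' fitted model:

        import numpy as np

        def pair_coalescence_density(N, tau, n_grid=10_000):
            """p(tau) for two lineages, given backward-time population size N(t)."""
            t = np.linspace(0.0, tau, n_grid)
            hazard = 1.0 / N(t)
            # Trapezoidal integral of the hazard from 0 to tau.
            cumulative = np.sum(0.5 * (hazard[1:] + hazard[:-1]) * np.diff(t))
            return (1.0 / N(tau)) * np.exp(-cumulative)

        # Illustrative history: exponential growth toward the present, so the
        # size shrinks as backward time t increases.
        N0, r = 10_000.0, 1e-3
        N = lambda t: N0 * np.exp(-r * t)
        for tau in (100.0, 1_000.0, 5_000.0):
            print(tau, pair_coalescence_density(N, tau))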

  12. The Apollo 17 samples: The Massifs and landslide

    NASA Technical Reports Server (NTRS)

    Ryder, Graham

    1992-01-01

    More than 50 kg of rock and regolith samples, a little less than half the total Apollo 17 sample mass, was collected from the highland stations at Taurus-Littrow. Twice as much material was collected from the North Massif as from the South Massif and its landslide (the apparent disproportionate collecting at the mare sites is mainly a reflection of the large size of a few individual basalt samples). Descriptions of the collection, documentation, and nature of the samples are given. A comprehensive catalog is currently being produced. Many of the samples have been intensely studied over the last 20 years, and some of the rocks have become very familiar and have been depicted in popular works, particularly the dunite clast (72415), the troctolite sample (76535), and the station 6 boulder samples. Most of the boulder samples have been studied in Consortium mode, and many of the rake samples have received a basic petrological/geochemical characterization.

  13. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
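
    For context, the usual normal-approximation sample size for a two-arm trial detecting a standardized mean difference d is n per arm = 2(z₁₋α/₂ + z₁₋β)²/d². A sketch (scipy assumed available):

        from scipy.stats import norm

        def n_per_arm(d, alpha=0.05, power=0.80):
            """Per-arm sample size to detect standardized mean difference d."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * (z / d) ** 2

        for d in (0.3, 0.5, 0.8):
            print(f"d = {d}: about {n_per_arm(d):.0f} per arm")

    Under this approximation, the review's mean total sample of 153 corresponds to 80% power for a standardized mean difference of roughly 0.45, in line with the finding that only a third of trials could detect effects below 0.5.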

  14. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
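
    A minimal simulation shows how limited training samples bias an estimated observer's proportion correct; the Gaussian toy data below stand in for the CT images used in the study:

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 10
        signal = np.full(dim, 0.30)  # toy signal in a 10-dimensional channel space

        def estimated_pc(n_train, n_test=5_000):
            """2AFC proportion correct of a Hotelling template built from n_train pairs."""
            absent = rng.normal(size=(n_train, dim))
            present = rng.normal(size=(n_train, dim)) + signal
            pooled = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
            w = np.linalg.solve(pooled, present.mean(0) - absent.mean(0))
            scores_absent = rng.normal(size=(n_test, dim)) @ w
            scores_present = (rng.normal(size=(n_test, dim)) + signal) @ w
            return np.mean(scores_present > scores_absent)

        for n in (20, 40, 80):
            pcs = [estimated_pc(n) for _ in range(100)]
            print(n, round(float(np.mean(pcs)), 3), round(float(np.std(pcs)), 3))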

  15. Reconstructing Volatile Evolution at Lastarria Volcano (CVZ, Northern Chile) Using Melt Inclusions Analysis

    NASA Astrophysics Data System (ADS)

    Pizarro, M.; Cannatelli, C.; Morata, D.

    2017-12-01

    Melt inclusion assemblages (MIAs) are considered the best tool available to provide insights into pre-eruptive volatile contents in magma and to define the pattern of degassing at depth. Lastarria volcano is located in northern Chile, in the Central Volcanic Zone (CVZ). Lastarria's fumarolic activity is currently the most important source of gases in the CVZ, and the volcano also exhibits constant deformation. The study of volatile contents in MIAs allows us to determine the magmatic processes beneath Lastarria volcano and, therefore, to understand the current status of the volcanic system (deformation and fumarolic activity). We determined the pre-eruptive volatile content (H2O, CO2, F, S, Cl) in the magma by analyzing MIs hosted in feldspars and pyroxenes from 7 samples of lava and pyroclastic rocks belonging to different eruptive periods of the volcano. All the samples are andesitic in composition. Lava samples contain phenocrysts of plagioclase and pyroxene (up to 45%) and a vitreous groundmass with microlites of plagioclase, pyroxenes, opaque minerals, and limited biotites. Pyroclastic samples contain phenocrysts of plagioclase and pyroxene (up to 30%), and a vitreous matrix with microlites of plagioclase and pyroxene. At least 3 MIAs have been described in feldspars from the lava samples: MIA1, completely homogenized; MIA2, composed of homogeneous glass and one bubble; and MIA3, composed of homogeneous glass and multiple bubbles. All MIAs display sizes between 3 and 200 μm. In the pyroxenes, we have observed a wide range of MIAs, showing different sizes and various degrees of recrystallization, from completely homogenized to totally recrystallized. The petrographic study of the feldspars from the pyroclastic rocks shows two types of MIAs: MIA1, containing homogeneous glass associated with a single bubble, and MIA2, showing homogeneous glass with multiple bubbles. A few MIs appear to be slightly recrystallized. The size of these MIAs varies between 3 and 150 μm. Pyroxene-hosted MIs are almost all recrystallized, with sizes varying between 3 and 60 μm. Preliminary observations show that MIAs hosted in pyroclastic rocks contain a greater amount of bubbles than MIAs hosted in the lava, possibly indicating that a greater degree of volatile saturation can be linked with the explosive phase of Lastarria volcano.

  16. Clutch sizes and nests of tailed frogs from the Olympic Peninsula, Washington

    USGS Publications Warehouse

    Bury, R. Bruce; Loafman, P.; Rofkar, D.; Mike, K.

    2001-01-01

    In the summers 1995-1998, we sampled 168 streams (1,714 m of randomly selected 1-m bands) to determine distribution and abundance of stream amphibians in Olympic National Park, Washington. We found six nests (two in one stream) of the tailed frog, compared to only two nests with clutch sizes reported earlier for coastal regions. This represents only one nest per 286 m searched and one nest per 34 streams sampled. Tailed frogs occurred only in 94 (60%) of the streams and, for these waters, we found one nest per 171 m searched or one nest per 20 streams sampled. The numbers of eggs for four masses (x̄ = 48.3, range 40-55) were low but one single strand in a fifth nest had 96 eggs. One nest with 185 eggs likely represented communal egg deposition. Current evidence indicates a geographic trend with yearly clutches of relatively few eggs in coastal tailed frogs compared to biennial nesting with larger clutches for inland populations in the Rocky Mountains.

  17. Enumerative and binomial sampling plans for citrus mealybug (Homoptera: Pseudococcidae) in citrus groves.

    PubMed

    Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran

    2006-06-01

    The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling. Therefore, we recommend enumerative sampling plans for the use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds, in the three plant locations calyx, fruit, and trunk were 50, 110-330, and 30, respectively. Binomial sampling, especially the empirical model, required a higher sample size to achieve equivalent levels of precision.
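
    Green's fixed-precision plan follows directly from Taylor's power law s² = a·m^b: with precision E defined as the standard error divided by the mean, the required sample size is n = a·m^(b−2)/E². A sketch in which the Taylor coefficients are placeholders, not the values fitted in this study:

        def green_sample_size(mean_density, a, b, precision=0.25):
            """Required n for fixed precision E, given Taylor's power law s^2 = a*m^b."""
            return a * mean_density ** (b - 2) / precision ** 2

        a, b = 2.0, 1.4  # illustrative Taylor's power law coefficients
        for m in (0.1, 0.5, 2.0):
            print(f"mean density {m}/organ: n ≈ {green_sample_size(m, a, b):.0f}")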

  18. Historical forest patterns of Oregon's central Coast Range

    USGS Publications Warehouse

    Ripple, W.J.; Hershey, K.T.; Anthony, R.G.

    2000-01-01

    To describe the composition and pattern of unmanaged forestland in Oregon's central Coast Range, we analyzed forest conditions from a random sample of 18 prelogging (1949 and earlier) landscapes. We also compared the amount and variability of old forest (conifer-dominated stands > 53 cm dbh) in the prelogging landscapes with that in the current landscapes. Sixty-three percent of the prelogging landscape comprised old forest, approximately 21% of which also had a significant (> 20% cover) hardwood component. The proportions of forest types across the 18 prelogging landscapes varied greatly for both early seral stages (cv = 81-194) and hardwoods (cv = 127) and moderately for old forest (cv = 39). With increasing distance from streams, the amount of hardwoods and nonforest decreased, whereas the amount of seedling/sapling/pole and young conifers increased. The amount of old forest was significantly greater (p < 0.002) in prelogging forests than in current landscapes. Old-forest patterns also differed significantly (p < 0.015) between prelogging and current landscapes; patch density, coefficient of variation of patch size, edge density, and fragmentation were greater in current landscapes, and mean patch size, largest patch size, and core habitat were greater in prelogging forests. Generally, old-forest landscape pattern variables showed a greater range in prelogging landscapes than in current landscapes. Management strategies designed to increase the amount of old forest and the range in landscape patterns would result in a landscape more closely resembling that found prior to intensive logging. © 2000 Elsevier Science Ltd.

  19. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    The current practice for seeking genomically favorable patients in randomized controlled clinical trials is to use genomic convenience samples. To discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of the convenience samples, particularly when they are small samples. To articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance, and to highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. Probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. The baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% difference is of concern.
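
    The imbalance probabilities discussed here are easy to reproduce by simulation; the subgroup prevalence and imbalance threshold below are illustrative inputs, not the paper's case-study values:

        import numpy as np

        rng = np.random.default_rng(1)

        def prob_imbalance(n_per_arm, prevalence, threshold, reps=100_000):
            """P(|p1_hat - p2_hat| >= threshold) for a subgroup under 1:1 randomization."""
            p1 = rng.binomial(n_per_arm, prevalence, reps) / n_per_arm
            p2 = rng.binomial(n_per_arm, prevalence, reps) / n_per_arm
            return float(np.mean(np.abs(p1 - p2) >= threshold))

        for n in (20, 50, 100, 150):
            print(n, prob_imbalance(n, prevalence=0.3, threshold=0.20))

    The qualitative pattern matches the rule of thumb above: a 20-percentage-point imbalance is common with a few dozen patients per arm but becomes rare at roughly 100 patients per arm.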

  20. Oxygen reduction reaction on highly-durable Pt/nanographene fuel cell catalyst synthesized employing in-liquid plasma

    NASA Astrophysics Data System (ADS)

    Amano, Tomoki; Kondo, Hiroki; Takeda, Keigo; Ishikawa, Kenji; Kano, Hiroyuki; Hiramatsu, Mineo; Sekine, Makoto; Hori, Masaru

    2016-09-01

    We recently established an ultrahigh-speed synthesis method for nanographene materials employing in-liquid plasma, and reported the high durability of Pt/nanographene composites as a fuel cell catalyst. The crystallinity and domain size of the nanographene materials were essential to their durability; however, the underlying mechanism has not yet been clarified. In this study, we investigated the oxygen reduction reaction using three types of nanographene materials with different crystallinities and domain sizes, which were synthesized using ethanol, 1-propanol and 1-butanol, respectively. According to our previous studies, the nanographene material synthesized using the lower molecular weight alcohol has the higher crystallinity and larger domain size. Pt nanoparticles were supported on the nanographene surfaces by reducing 8 wt% H2PtCl6 diluted with H2O. Oxygen reduction current densities at a potential of 0.2 V vs. RHE were 5.43, 5.19 and 3.69 mA/cm² for the samples synthesized using ethanol, 1-propanol and 1-butanol, respectively. This means that the higher crystallinity nanographene showed the larger oxygen reduction current density. Control of the crystallinity and domain size of nanographene materials is thus essential not only for durability but also for high efficiency as catalyst electrodes.

  1. Tunneling barrier in nanoparticle junctions of La2/3(Ca,Sr)1/3MnO3: Nonlinear current-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Niebieskikwiat, D.; Sánchez, R. D.; Lamas, D. G.; Caneiro, A.; Hueso, L. E.; Rivas, J.

    2003-05-01

    We study the nonlinear current-voltage (I-V) characteristics and analyze the voltage-dependent tunneling conductance in nanoparticles of La2/3A1/3MnO3 (A=Ca, Sr). The powders were prepared by different wet-chemical routes and low calcination temperatures were used to obtain an average particle size D≈30 nm. The data are comprehensively explained in terms of the tunneling picture, which allows one to estimate the height of the grain boundary insulating barrier (φ) for each sample. For constant D, our results show that the sample preparation route is mainly responsible for the value of φ in nanoparticles, while the Coulomb gap in the Coulomb blockade regime is ˜3 times higher for Sr- than for Ca-doping. We also show that a small fraction of the barriers contribute to the nonlinear transport, and the current is mainly carried through low-resistive percolated paths. In addition, despite the different barrier strengths, the low-field magnetoresistance (LFMR) is similar for all samples, implying that φ is not the fundamental parameter determining the LFMR.

  2. Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios

    NASA Technical Reports Server (NTRS)

    Juarez, Alfredo; Harper, Susana A.

    2016-01-01

    The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.

  3. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
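
    At each scale the approach condenses the lower-level simulations into a Weibull description of stiffness and strength; the fitting step itself is routine. A sketch on synthetic strength data (scipy assumed; all values invented):

        import numpy as np
        from scipy.stats import weibull_min

        # Synthetic strengths (MPa) standing in for the per-sample simulation results.
        true_modulus, true_scale = 8.0, 120.0
        strengths = weibull_min.rvs(true_modulus, loc=0, scale=true_scale,
                                    size=50, random_state=np.random.default_rng(2))

        # Fit with the location fixed at zero, the usual convention for strength data.
        modulus, _, scale = weibull_min.fit(strengths, floc=0)
        print(f"Weibull modulus m ≈ {modulus:.1f}, scale ≈ {scale:.0f} MPa")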

  4. Constraining Ω0 with the Angular Size-Redshift Relation of Double-lobed Quasars in the FIRST Survey

    NASA Astrophysics Data System (ADS)

    Buchalter, Ari; Helfand, David J.; Becker, Robert H.; White, Richard L.

    1998-02-01

    In previous attempts to measure cosmological parameters from the angular size-redshift (θ-z) relation of double-lobed radio sources, the observed data have generally been consistent with a static Euclidean universe rather than with standard Friedmann models, and past authors have disagreed significantly as to what effects are responsible for this observation. These results and different interpretations may be due largely to a variety of selection effects and differences in the sample definitions destroying the integrity of the data sets, and inconsistencies in the analysis undermining the results. Using the VLA FIRST survey, we investigate the θ-z relation for a new sample of double-lobed quasars. We define a set of 103 sources, carefully addressing the various potential problems that, we believe, have compromised past work, including a robust definition of size and the completeness and homogeneity of the sample, and further devise a self-consistent method to assure accurate morphological classification and account for finite resolution effects in the analysis. Before focusing on cosmological constraints, we investigate the possible impact of correlations among the intrinsic properties of these sources over the entire assumed range of allowed cosmological parameter values. For all cases, we find apparent size evolution of the form l ∝ (1 + z)^c, with c ≈ -0.8 ± 0.4, which is found to arise mainly from a power-size correlation of the form l ∝ P^β (β ≈ -0.13 ± 0.06) coupled with a power-redshift correlation. Intrinsic size evolution is consistent with zero. We also find that in all cases, a subsample with c ≈ 0 can be defined, whose θ-z relation should therefore arise primarily from cosmological effects. These results are found to be independent of orientation effects, although other evidence indicates that orientation effects are present and consistent with predictions of the unified scheme for radio-loud active galactic nuclei. The above results are all confirmed by nonparametric analysis. Contrary to past work, we find that the observed θ-z relation for our sample is more consistent with standard Friedmann models than with a static Euclidean universe. Though the current data cannot distinguish with high significance between various Friedmann models, significant constraints on the cosmological parameters within a given model are obtained. In particular, we find that a flat, matter-dominated universe (Ω0 = 1), a flat universe with a cosmological constant, and an open universe all provide comparably good fits to the data, with the latter two models both yielding Ω0 ≈ 0.35 with 1σ ranges including values between ~0.25 and 1.0; the c ≈ 0 subsamples yield values of Ω0 near unity in these models, though with even greater error ranges. We also examine the values of H0 implied by the data, using plausible assumptions about the intrinsic source sizes, and find these to be consistent with the currently accepted range of values. We determine the sample size needed to improve significantly the results and outline future strategies for such work.
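
    The cosmological content of the θ-z relation enters through the angular diameter distance, θ(z) = l/D_A(z). A sketch comparing the three families of models considered above, for an assumed fixed rod length of 50 h⁻¹ kpc (illustrative only):

        import numpy as np

        C_OVER_H0 = 3000.0  # Hubble distance in h^-1 Mpc

        def angular_size_arcsec(z, omega_m, omega_l, length_kpc=50.0):
            """theta(z) in arcseconds for a rod of fixed proper length."""
            omega_k = 1.0 - omega_m - omega_l
            zs = np.linspace(0.0, z, 2_000)
            ez = np.sqrt(omega_m * (1 + zs) ** 3 + omega_k * (1 + zs) ** 2 + omega_l)
            integrand = 1.0 / ez
            dc = C_OVER_H0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
            if omega_k > 1e-8:  # open geometry: comoving -> transverse comoving
                rk = C_OVER_H0 / np.sqrt(omega_k)
                dc = rk * np.sinh(dc / rk)
            da = dc / (1.0 + z)  # angular diameter distance, h^-1 Mpc
            return np.degrees((length_kpc / 1000.0) / da) * 3600.0

        for label, om, ol in (("flat matter", 1.0, 0.0),
                              ("open", 0.35, 0.0),
                              ("flat lambda", 0.35, 0.65)):
            print(label, [round(angular_size_arcsec(z, om, ol), 1) for z in (0.5, 1.0, 2.0)])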

  5. Soil Carbon Variability and Change Detection in the Forest Inventory Analysis Database of the United States

    NASA Astrophysics Data System (ADS)

    Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.

    2014-12-01

    The USDA Forest Service's Forest Inventory Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and has remeasurement now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of changes in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
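
    The power question posed here is conventionally summarized as a minimum detectable change: for a paired remeasurement design, MDC ≈ (z₁₋α/₂ + z₁₋β)·σ_d/√n, where σ_d is the standard deviation of plot-level SOC change. A sketch with hypothetical values, not FIA estimates:

        from math import sqrt
        from scipy.stats import norm

        def minimum_detectable_change(sd_change, n_plots, alpha=0.05, power=0.80):
            """Paired-design MDC, in the same units as sd_change (e.g., Mg C/ha)."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return z * sd_change / sqrt(n_plots)

        # Hypothetical plot-to-plot SD of 10-year SOC change: 10 Mg C/ha.
        for n in (100, 500, 2000):
            print(n, round(minimum_detectable_change(10.0, n), 2))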

  6. Variability of Phytoplankton Size Structure in Response to Changes in Coastal Upwelling Intensity in the Southwestern East Sea

    NASA Astrophysics Data System (ADS)

    Shin, Jung-Wook; Park, Jinku; Choi, Jang-Geun; Jo, Young-Heon; Kang, Jae Joong; Joo, HuiTae; Lee, Sang Heon

    2017-12-01

    The aim of this study was to examine the size structure of phytoplankton under varying coastal upwelling intensities and to determine the resulting primary productivity in the southwestern East Sea. Samples of phytoplankton assemblages were collected on five occasions from the Hupo Bank, off the east coast of Korea, during 2012-2013. Because two major surface currents have a large effect on water mass transport in this region, we first performed a Backward Particle Tracking Experiment (BPTE) to determine the coastal sea from which the collected samples originated according to advection time of BPTE particles, following which we used upwelling age (UA) to determine the intensity of coastal upwelling in the region of origin for each sample. Only samples that were affected by coastal upwelling in the region of origin were included in subsequent analyses. We found that as UA increased, there was a decreasing trend in the concentration of picophytoplankton, and increasing trends in the concentration of nanophytoplankton and microphytoplankton. We also examined the relationship between the size structure of phytoplankton and primary productivity in the Ulleung Basin (UB), which has experienced significant variation over the past decade. We found that primary productivity in UB was closely related to the strength of the southerly wind, which is the most important mechanism for coastal upwelling in the southwestern East Sea. Thus, the size structure of phytoplankton is determined by the intensity of coastal upwelling, which is regulated by the southerly wind, and makes an important contribution to primary productivity.

  7. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
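
    The diagnostic logic (selecting on significance induces a negative effect size-sample size correlation even when the true effect is constant) is easy to demonstrate by simulation; all parameters below are illustrative:

        import numpy as np
        from scipy.stats import pearsonr, t as t_dist

        rng = np.random.default_rng(3)
        true_d, n_studies = 0.2, 2_000

        n = rng.integers(10, 200, n_studies)                  # per-group sample sizes
        se = np.sqrt(2.0 / n)                                 # approximate SE of Cohen's d
        d_hat = rng.normal(true_d, se)                        # observed effect sizes
        p = 2 * t_dist.sf(np.abs(d_hat) / se, df=2 * n - 2)   # two-sample p-values

        published = p < 0.05                                  # selection on significance
        print("all studies:    r =", round(pearsonr(d_hat, n)[0], 2))
        print("published only: r =", round(pearsonr(d_hat[published], n[published])[0], 2))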

  8. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.

  9. Semiquantitative analysis of gaps in microbiological performance of fish processing sector implementing current food safety management systems: a case study.

    PubMed

    Onjong, Hillary Adawo; Wangoh, John; Njage, Patrick Murigu Kamau

    2014-08-01

    Fish processing plants still face microbial food safety-related product rejections and the associated economic losses, although they implement legislation, with well-established quality assurance guidelines and standards. We assessed the microbial performance of core control and assurance activities of fish exporting processors to offer suggestions for improvement using a case study. A microbiological assessment scheme was used to systematically analyze microbial counts in six selected critical sampling locations (CSLs). Nine small-, medium- and large-sized companies implementing current food safety management systems (FSMS) were studied. Samples were collected three times on each occasion (n = 324). Microbial indicators representing food safety, plant and personnel hygiene, and overall microbiological performance were analyzed. Microbiological distribution and safety profile levels for the CSLs were calculated. Performance of core control and assurance activities of the FSMS was also diagnosed using an FSMS diagnostic instrument. Final fish products from 67% of the companies were within the legally accepted microbiological limits. Salmonella was absent in all CSLs. Hands or gloves of workers from the majority of companies were highly contaminated with Staphylococcus aureus at levels above the recommended limits. Large-sized companies performed better in Enterobacteriaceae, Escherichia coli, and S. aureus than medium- and small-sized ones in a majority of the CSLs, including receipt of raw fish material, heading and gutting, and the condition of the fish processing tables and facilities before cleaning and sanitation. Fish products of 33% (3 of 9) of the companies and handling surfaces of 22% (2 of 9) of the companies showed high variability in Enterobacteriaceae counts. High variability in total viable counts and Enterobacteriaceae was noted on fish products and handling surfaces. Specific recommendations were made in core control and assurance activities associated with sampling locations showing poor performance.

  10. Stable Encapsulated Air Nanobubbles in Water.

    PubMed

    Wang, Yu; Liu, Guojun; Hu, Heng; Li, Terry Yantian; Johri, Amer M; Li, Xiaoyu; Wang, Jian

    2015-11-23

    The dispersion into water of nanocapsules bearing a highly hydrophobic fluorinated internal lining yielded encapsulated air nanobubbles. These bubbles, like their micrometer-sized counterparts (microbubbles), effectively reflected ultrasound. More importantly, the nanobubbles survived under ultrasonication 100 times longer than a commercial microbubble sample that is currently in clinical use. We justify this unprecedented stability theoretically. These nanobubbles, owing to their small size and potential ability to permeate the capillary networks of tissues, may expand the applications of microbubbles in diagnostic ultrasonography and find new applications in ultrasound-regulated drug delivery. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
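
    Under a simple normal-theory stand-in for Yuen's trimmed-mean setting, minimizing the total cost c₁n₁ + c₂n₂ at fixed power gives the square-root allocation rule n₁/n₂ = (σ₁/σ₂)·√(c₂/c₁). A sketch with invented inputs:

        from math import ceil, sqrt
        from scipy.stats import norm

        def optimal_sizes(sd1, sd2, cost1, cost2, delta, alpha=0.05, power=0.80):
            """Cost-minimizing n1, n2 to detect mean difference delta (normal theory)."""
            ratio = (sd1 / sd2) * sqrt(cost2 / cost1)  # optimal n1 / n2
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            # Power constraint: sd1^2/n1 + sd2^2/n2 = (delta / z)^2, with n1 = ratio*n2.
            n2 = (z / delta) ** 2 * (sd1 ** 2 / ratio + sd2 ** 2)
            return ceil(ratio * n2), ceil(n2)

        # Group 1 is noisier; group 2 is costlier per observation.
        print(optimal_sizes(sd1=12.0, sd2=6.0, cost1=1.0, cost2=4.0, delta=5.0))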

  12. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has the potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings at 25 sites on delta sandbars, where duplicate 3.8-centimeter-diameter, 3.7-meter-long cores were recovered using the direct-push method in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley.At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance.For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted.Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent.For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. 
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class.Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
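
    The crush-resistance criterion applied throughout reduces to a single percentage comparison. A minimal helper (masses invented for illustration):

        def passes_crush_test(precrush_g, postcrush_fines_g, max_fines_pct=10.0):
            """API-style criterion: postcrush fines must not exceed max_fines_pct."""
            fines_pct = 100.0 * postcrush_fines_g / precrush_g
            return fines_pct, fines_pct <= max_fines_pct

        print(passes_crush_test(precrush_g=40.0, postcrush_fines_g=3.2))  # (8.0, True)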

  13. Insight into Primordial Solar System Oxygen Reservoirs from Returned Cometary Samples

    NASA Technical Reports Server (NTRS)

    Brownlee, D. E.; Messenger, S.

    2004-01-01

    The recent successful rendezvous of the Stardust spacecraft with comet Wild-2 will be followed by its return of cometary dust to Earth in January 2006. Results from two separate dust impact detectors suggest that the spacecraft collected approximately the nominal fluence of at least 1,000 particles larger than 15 micrometers in size. While constituting only about one microgram in total, these samples will be sufficient to answer many outstanding questions about the nature of cometary materials. More than two decades of laboratory studies of stratospherically collected interplanetary dust particles (IDPs) of similar size have established the microparticle handling and analytical techniques necessary to study them. It is likely that some IDPs are in fact derived from comets, although the complex orbital histories of individual particles have made these assignments difficult to prove. Analysis of bona fide cometary samples will be essential for answering some fundamental outstanding questions in cosmochemistry, such as (1) the proportion of interstellar and processed materials that comprise comets and (2) whether the Solar System had an O-16-rich reservoir. Abundant silicate stardust grains have recently been discovered in anhydrous IDPs, in far greater abundances (200-5,500 ppm) than in meteorites (25 ppm). Insight into the more subtle O isotopic variations among chondrites and refractory phases will require significantly higher-precision isotopic measurements on micrometer-sized samples than are currently available.

  14. Research and development of a luminol-carbon monoxide flow system

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1977-01-01

    Adaptation of the luminol-carbon monoxide injection system to a flowing-type system is reported. Analysis of actual wastewater samples was carried out and revealed that bacteria can be associated with particles greater than 10 microns in size in samples such as mixed liquor. Research into the luminol-reactive oxidation state indicates that oxidized iron porphyrins, cytochrome-c in particular, produce more luminol chemiluminescence than the reduced forms. A correlation exists between the extent of porphyrin oxidation and the relative chemiluminescence. In addition, the porphyrin nucleus is apparently destroyed under the current chemiluminescent reaction conditions.

  15. Body mass estimates of hominin fossils and the evolution of human body size.

    PubMed

    Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G

    2015-08-01

    Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. A comparison of defect size and film quality obtained from Film digitized image and digital image radiographs

    NASA Astrophysics Data System (ADS)

    Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak

    2014-06-01

    Digital radiographic testing is an emerging nondestructive examination technique whose performance and limitations relative to the conventional film technique are still not widely known. This paper compares the accuracy of defect size measurement and image quality obtained from film and digital radiographic techniques by testing welded specimens and a defect sample of known size. Initially, one specimen was fabricated with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size defect sample was machined to various geometrical sizes so that measured defect sizes could be compared with the real sizes in both film and digital images. Image quality was compared by considering the smallest detectable wire and the three defect images, using a wire-type Image Quality Indicator (IQI), 10/16 FE, to BS EN 462-1:1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film, size 3.5 x 8 inches, while the digital images were produced with a Fuji type ST-VI image plate with 100-micrometer resolution. During the tests, a GE model MF3 X-ray generator was used; the applied energy was varied from 120 to 220 kV and the current from 1.2 to 3.0 mA. The intensity of the Iridium-192 gamma-ray source was in the range of 24-25 Curie. Under these conditions, the results showed that the deviation of the measured defect size from the real size was lower for the digital image radiographs than for the digitized film, whereas the digitized film radiographs showed higher image quality.

  17. Synthesis and characterization of γ-Fe2O3 NPs on silicon substrate for power device application

    NASA Astrophysics Data System (ADS)

    Hussein Nurul Athirah, Abu; Bee Chin, Ang; Yew Hoong, Wong; Boon Hoong, Ong; Aainaa Aqilah, Baharuddin

    2018-06-01

    Maghemite nanoparticles (γ-Fe2O3 NPs) were synthesized using the Massart procedure. The formation reaction was optimized by varying the concentration of the ferric nitrate solution (Fe(NO3)3) (0.1, 0.3, 0.5, 0.7 and 1.0 M). All samples were characterized by means of X-ray diffraction (XRD), Raman spectroscopy, transmission electron microscopy (TEM) and alternating gradient magnetometry (AGM). The NPs of smallest size were chosen to be deposited on a Silicon (100) substrate by the spin-coating technique. Annealing of the samples was performed in an argon ambient at different temperatures (600, 700, 800 and 900 °C) for 20 min. Metal-oxide-semiconductor capacitors were then fabricated by depositing aluminium as the gate electrode. The effect of the annealing process on the structural and electrical properties of the γ-Fe2O3 NP thin films was investigated. The structural properties of the deposited thin films were evaluated by XRD, atomic force microscopy (AFM) and Raman analysis, while the electrical properties were evaluated by current-voltage analysis. It was revealed that the annealing temperature affects the grain size, surface roughness and distribution of the nanoparticles, as well as the electrical performance of the samples: the low annealing temperature (600 °C) gives a low leakage current, while the high annealing temperature (900 °C) gives a high electrical breakdown.

  18. Mating System and Effective Population Size of the Overexploited Neotropical Tree (Myroxylon peruiferum L.f.) and Their Impact on Seedling Production.

    PubMed

    Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada

    2018-03-16

    The reproductive system of a tree species has a substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae), sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (tm - ts = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes for the nursery-grown seedlings (Ne = 27.54-34.86) were much smaller than that recommended for short-term population conservation (Ne ≥ 100). Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated in species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and to connect conservation genetics with ecological restoration.

  19. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
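
    To make the design-effect reasoning concrete, here is a minimal sketch of a sample size calculation for a two-period cluster crossover trial with a binary outcome. It uses the standard variance expression for a within-cluster comparison of period means, with design effect DE = 1 + (m - 1)ρw - mρb, where m is the cluster-period size, ρw the within-cluster-within-period correlation and ρb the within-cluster-between-period correlation. This is a textbook balanced-cluster form, not the unbalanced formula derived in the paper, and all numbers are illustrative.

```python
from scipy.stats import norm

def cluster_crossover_n(p1, p2, m, rho_w, rho_b, alpha=0.05, power=0.8):
    """Total subjects per arm for a two-period cluster crossover trial.

    Applies the design effect DE = 1 + (m - 1)*rho_w - m*rho_b to the
    usual two-proportion sample size formula (illustrative textbook form).
    """
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    de = 1 + (m - 1) * rho_w - m * rho_b
    n_indep = 2 * p_bar * (1 - p_bar) * (z_a + z_b) ** 2 / (p1 - p2) ** 2
    return n_indep * de

# ICU-like scenario: large clusters, small correlations (values hypothetical).
n = cluster_crossover_n(p1=0.10, p2=0.08, m=400, rho_w=0.02, rho_b=0.01)
print(f"subjects per arm: {n:.0f}, clusters per arm: {n / 400:.1f}")
```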

  20. Intraclass Correlation Coefficients for Obesity Indicators and Energy Balance-Related Behaviors among New York City Public Elementary Schools

    ERIC Educational Resources Information Center

    Gray, Heewon Lee; Burgermaster, Marissa; Tipton, Elizabeth; Contento, Isobel R.; Koch, Pamela A.; Di Noia, Jennifer

    2016-01-01

    Objective: Sample size and statistical power calculation should consider clustering effects when schools are the unit of randomization in intervention studies. The objective of the current study was to investigate how student outcomes are clustered within schools in an obesity prevention trial. Method: Baseline data from the Food, Health &…

  1. The Day-to-Day Reality of Teacher Turnover in Preschool Classrooms: An Analysis of Classroom Context and Teacher, Director, and Parent Perspectives

    ERIC Educational Resources Information Center

    Cassidy, Deborah J.; Lower, Joanna K.; Kintner-Duffy, Victoria L.; Hegde, Archana V.; Shim, Jonghee

    2011-01-01

    The purpose of the current study is to examine teacher turnover comprehensively by triangulating the experiences of teachers, directors, parents, and children through actual, "real-time" turnover transitions. We intentionally examined turnover with a small sample size (N = 13 classrooms) to facilitate comprehensive data collection utilizing…

  2. A Stratigraphic, Granulometric, and Textural Comparison of recent pyroclastic density current deposits exposed at West Island and Burr Point, Augustine Volcano, Alaska

    NASA Astrophysics Data System (ADS)

    Rath, C. A.; Browne, B. L.

    2011-12-01

    Augustine Volcano (Alaska) is the most active volcano in the eastern Aleutian Islands, with 6 violent eruptions over the past 200 years and at least 12 catastrophic debris-avalanche deposits over the past ~2,000 years. The frequency and destructive nature of these eruptions, combined with the proximity of Augustine Volcano to commercial ports and populated areas, represent a significant hazard to the Cook Inlet region of Alaska. This study examines the relationship between debris-avalanche events and the subsequent emplacement of pyroclastic density currents by comparing the stratigraphic, granulometric, and petrographic characteristics of pyroclastic deposits emplaced following the 1883 A.D. Burr Point debris avalanche and those emplaced following the ~370 14C yr B.P. West Island debris avalanche. Data from this study combine grain size and componentry analysis of pyroclastic deposits with density, textural, and compositional analysis of juvenile clasts contained in the pyroclastic deposits. The 1883 A.D. Burr Point pyroclastic unit immediately overlies the 1883 debris-avalanche deposit and underlies the 1912 Katmai ash. It ranges in thickness from 4 to 48 cm and consists of fine to medium sand-sized particles and coarser fragments of andesite. In places, this unit is normally graded and exhibits cross-bedding. Many of these samples are fines-enriched, with sorting coefficients ranging from -0.1 to 1.9 and median grain sizes ranging from 0.1 to 2.4 mm. The ~370 14C yr B.P. West Island pyroclastic unit is sandwiched between the underlying West Island debris-avalanche deposit and the overlying 1912 Katmai ash deposit and, in places, a fine-grained gray ash originating from the 1883 eruption. The West Island pyroclastic deposit is sand to coarse-sand-sized and either normally graded or massive, with sorting coefficients ranging from 0.9 to 2.8 and median grain sizes ranging from 0.4 to 2.6 mm. Some samples display a bimodal distribution of grain sizes, while most display a fines-depleted distribution. Juvenile andesite clasts exist either as subrounded to subangular fragments with abundant vesicles that range in color from white to brown or as dense clasts characterized by a porphyritic and glassy texture. Samples from neither eruption correlate in sorting or grain size with distance from the vent. Stratigraphic and granulometric data suggest differences in the manner in which these two pyroclastic density currents traveled, and groundmass textures are interpreted as recording differences in how the two magmas ascended and erupted: whereas juvenile Burr Point clasts resemble other lava flows erupted from Augustine Volcano, the vesicular and glassy juvenile West Island clasts resemble clasts derived from so-called "blast-generated" pyroclastic density current deposits at Mount St. Helens in 1980 and Bezymianny in 1956.

  3. Textural characteristics and sedimentary environment of sediment at eroded and deposited regions in the severely eroded coastline of Batu Pahat, Malaysia.

    PubMed

    Wan Mohtar, Wan Hanna Melini; Nawang, Siti Aminah Bassa; Abdul Maulud, Khairul Nizam; Benson, Yannie Anak; Azhary, Wan Ahmad Hafiz Wan Mohamed

    2017-11-15

    This study investigates the textural characteristics of sediments collected at eroded and deposited areas of the severely eroded coastline of Batu Pahat, Malaysia. Samples were taken from 23 systematically selected locations along the 67 km stretch of coastline, extending to the fluvial sediments of the main river of Batu Pahat. Grain size distribution analysis was conducted to identify textural characteristics and the associated sediment transport behaviours. Sediments obtained along the coastline were fine-grained material with an average mean size of 7.25 φ, poorly sorted, positively skewed, and broadly distributed. Samples from eroded and deposited regions displayed no distinctive differences and exhibited similar profiles. High-energy conditions transported the sediments in suspension, mostly as pelagic material, and the sediments were deposited as shallow-marine, agitated deposits. The fluvial sediments up to 3 km into the river have textural characteristics particularly similar to those of the neighbouring marine sediments at the river mouth. Profiles were similar to those of marine sediments about 3 km against the main current and up to 10 km along the current of the Malacca Straits. Copyright © 2017 Elsevier B.V. All rights reserved.
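
    The graphic grain-size statistics referred to above (mean in φ units, sorting, skewness) are conventionally computed with the Folk and Ward (1957) percentile formulas. A minimal sketch, with a hypothetical grain-size distribution rather than the Batu Pahat data:

```python
import numpy as np

def folk_ward(grain_phi, weights):
    """Folk & Ward (1957) graphic statistics from a grain-size distribution.

    grain_phi: sieve midpoints on the phi scale; weights: weight fractions.
    """
    order = np.argsort(grain_phi)
    phi, w = np.asarray(grain_phi)[order], np.asarray(weights)[order]
    cum = np.cumsum(w) / np.sum(w)
    # Percentiles of the cumulative curve by linear interpolation.
    pct = lambda p: np.interp(p, cum, phi)
    p5, p16, p50, p84, p95 = map(pct, (0.05, 0.16, 0.50, 0.84, 0.95))
    mean = (p16 + p50 + p84) / 3
    sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6
    skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    return mean, sorting, skew

# Hypothetical fine-grained sample: phi midpoints and weight fractions.
mean, sorting, skew = folk_ward([4, 5, 6, 7, 8, 9, 10],
                                [0.05, 0.10, 0.15, 0.25, 0.25, 0.15, 0.05])
print(f"mean={mean:.2f} phi, sorting={sorting:.2f}, skewness={skew:.2f}")
```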

  4. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three, and greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
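
    A minimal sketch of the kind of bootstrap used above to examine how survey precision degrades as the per-cluster sample shrinks: clusters are resampled with replacement and coverage re-estimated on each replicate. The data below are simulated, not the Mali survey.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_vc_se(cluster_data, n_boot=1000):
    """Bootstrap standard error of vaccination coverage (VC).

    cluster_data: list of per-cluster arrays of 0/1 vaccination indicators.
    Whole clusters are resampled with replacement, mirroring the design.
    """
    k = len(cluster_data)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, k, size=k)
        pooled = np.concatenate([cluster_data[i] for i in idx])
        estimates.append(pooled.mean())
    return np.std(estimates)

# Simulated 10-cluster survey: compare 15 vs 3 children per cluster.
for m in (15, 3):
    clusters = [rng.binomial(1, rng.uniform(0.6, 0.95), size=m)
                for _ in range(10)]
    print(f"{m} per cluster: bootstrap SE of VC = "
          f"{bootstrap_vc_se(clusters):.3f}")
```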

  5. Surface Acoustic Wave Nebulisation Mass Spectrometry for the Fast and Highly Sensitive Characterisation of Synthetic Dyes in Textile Samples

    NASA Astrophysics Data System (ADS)

    Astefanei, Alina; van Bommel, Maarten; Corthals, Garry L.

    2017-10-01

    Surface acoustic wave nebulisation (SAWN) mass spectrometry (MS) is a method to generate gaseous ions compatible with direct MS of minute samples at femtomole sensitivity. To perform SAWN, acoustic waves are propagated through a LiNbO3 sampling chip and are conducted to the liquid sample, which ultimately leads to the generation of a fine mist containing droplets of nanometre to micrometre diameter. Through fission and evaporation, the droplets undergo a phase change from liquid to gaseous analyte ions in a non-destructive manner. We have developed SAWN technology for the characterisation of organic colourants in textiles. It generates electrospray-ionisation-like ions in a non-destructive manner during ionisation, as can be observed from the unmodified chemical structure. The sample size is decreased 10-fold to 1000-fold when compared with currently used liquid chromatography-MS methods, with equal or better sensitivity. This work underscores SAWN-MS as an ideal tool for molecular analysis of art objects, as it is non-destructive, rapid, minimally invasive in sampling and more sensitive than current MS-based methods.

  6. Testing of typical spacecraft materials in a simulated substorm environment

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.; Berkopec, F. D.; Staskus, J. V.; Blech, R. A.; Narciso, S. J.

    1977-01-01

    The test specimens were spacecraft paints, silvered Teflon, thermal blankets, and solar array segments. The samples, ranging in size from 300 to 1000 sq cm, were exposed to monoenergetic electrons with energies from 2 to 20 keV at a current density of 1 nA/sq cm. The samples generally behaved as capacitors with strong voltage gradients at their edges. The charging characteristics of the silvered Teflon, Kapton, and solar cell covers were controlled by their secondary emission characteristics. The insulators that did not discharge were the spacecraft paints and the quartz fiber cloth thermal blanket sample. All other samples experienced discharges when the surface voltage reached -8 to -16 kV. The discharges were photographed. The breakdown voltage for each sample was determined, and the average energy lost in the discharge was computed.

  7. Effect of sampling volume on dry powder inhaler (DPI)-emitted aerosol aerodynamic particle size distributions (APSDs) measured by the Next-Generation Pharmaceutical Impactor (NGI) and the Andersen eight-stage cascade impactor (ACI).

    PubMed

    Mohammed, Hlack; Roberts, Daryl L; Copley, Mark; Hammond, Mark; Nichols, Steven C; Mitchell, Jolyon P

    2012-09-01

    Current pharmacopeial methods for testing dry powder inhalers (DPIs) require that 4.0 L be drawn through the inhaler to quantify the aerodynamic particle size distribution of "inhaled" particles. This volume comfortably exceeds the internal dead volume of the Andersen eight-stage cascade impactor (ACI) and Next Generation pharmaceutical Impactor (NGI), the designated multistage cascade impactors. Two DPIs, the second (DPI-B) having similar resistance to the first (DPI-A), were used to evaluate ACI and NGI performance at 60 L/min following the methodology described in the European and United States Pharmacopeias. At sampling times ≥2 s (equivalent to volumes ≥2.0 L), both impactors provided consistent measures of therapeutically important fine particle mass (FPM) from both DPIs, independent of sample duration. At shorter sample times, FPM decreased substantially with the NGI, indicative of incomplete aerosol bolus transfer through the system, whose dead space was 2.025 L. However, the ACI provided consistent measures across the range of sampled volumes evaluated, even when this volume was less than 50% of its internal dead space of 1.155 L. Such behavior may be indicative of maldistribution of the flow profile from the relatively narrow exit of the induction port to the uppermost stage of the impactor at start-up. An explanation of the anomalous ACI behavior from first principles requires resolution of the rapidly changing unsteady flow and pressure conditions at start-up and is the subject of ongoing research by the European Pharmaceutical Aerosol Group. Meanwhile, these experimental findings are provided to advocate a prudent approach of retaining the current pharmacopeial methodology.

  8. Copper Decoration of Carbon Nanotubes and High Resolution Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Probst, Camille

    A new process for decorating carbon nanotubes with copper was developed for the fabrication of aluminum-nanotube nanocomposites. The process consists of three stages: oxidation, activation, and electroless copper plating of the nanotubes. The oxidation step was required to create chemical functional groups on the nanotubes, essential for the activation step. Then, catalytic tin-palladium nanoparticles were deposited on the tubes. Finally, during electroless copper plating, copper particles with sizes between 20 and 60 nm were uniformly deposited on the nanotube surfaces. The reproducibility of the process was shown by using another type of carbon nanotube. The fabrication of aluminum-nanotube nanocomposites was tested by aluminum vacuum infiltration. Although the infiltration of carbon nanotubes did not produce the expected results, an interesting electron microscopy sample was discovered during process development: the activated carbon nanotubes. Secondly, scanning transmission electron microscopy (STEM) imaging in the SEM was analysed. The images were obtained with a new detector on a field emission scanning electron microscope (Hitachi S-4700). Various parameters were analysed using two different samples: the activated carbon nanotubes (obtained previously) and gold-palladium nanodeposits. The influences of working distance, accelerating voltage and the sample used on the spatial resolution of the images were quantified with SMART (Scanning Microscope Assessment and Resolution Testing). An optimum working distance for the best spatial resolution, related to the sample analysed, was found for imaging in STEM mode. Finally, the relation between probe size and the spatial resolution of backscattered electron (BSE) images was studied. An image synthesis method was developed to generate BSE images from backscattered electron coefficients obtained with the CASINO software, and the spatial resolution of the images was determined using SMART. The analysis showed that using a probe size smaller than the size of the observed object (sample features) does not improve the spatial resolution. In addition, the effects of the accelerating voltage, the current intensity, and the sample geometry and composition were analysed.

  9. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
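
    For readers unfamiliar with how a sample size re-estimation rule operates, here is a minimal sketch based on conditional power under the observed trend for a normal endpoint. This is a generic textbook rule, not the optimality criterion proposed in the paper; all numbers are illustrative.

```python
from scipy.stats import norm
from scipy.optimize import brentq

def conditional_power(z1, t, alpha=0.025):
    """Conditional power at information fraction t, assuming the drift
    implied by the interim z-statistic z1 continues to the final look."""
    za = norm.ppf(1 - alpha)
    return norm.cdf((z1 / t**0.5 - za) / (1 - t) ** 0.5)

def reestimated_n(z1, n1, n_planned, target=0.9, n_max=4000, alpha=0.025):
    """Smallest total n restoring conditional power to `target`,
    capped at n_max; keeps the planned n if it is already sufficient."""
    za = norm.ppf(1 - alpha)
    theta = z1 / n1**0.5                 # observed standardized trend
    def cp(n):
        return norm.cdf((theta * (n - n1) + z1 * n1**0.5
                         - za * n**0.5) / (n - n1) ** 0.5)
    if cp(n_planned) >= target:
        return n_planned
    if cp(n_max) < target:
        return n_max
    return brentq(lambda n: cp(n) - target, n_planned, n_max)

# Interim look halfway through a 500-subject trial (illustrative numbers).
z1 = 1.2
print(f"CP at planned n: {conditional_power(z1, t=0.5):.2f}")
print(f"re-estimated n:  {reestimated_n(z1, n1=250, n_planned=500):.0f}")
```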

  10. Interpretation of standard leaching test BS EN 12457-2: is your sample hazardous or inert?

    PubMed

    Zandi, Mohammad; Russell, Nigel V; Edyvean, Robert G J; Hand, Russell J; Ward, Philip

    2007-12-01

    A slag sample from a lead refiner has been obtained and given to two analytical laboratories to determine the release of trace elements from the sample according to BS EN 12457-2. Samples analysed by one laboratory passed waste acceptance criteria, leading it to be classified as an inert material; samples of the same material analysed by the other laboratory failed waste acceptance criteria and were classified as hazardous. It was found that the sample preparation procedure is the critical step in the leaching analysis and that the effects of particle size on leachability should be taken into account when using this standard. The purpose of this paper is to open a debate on designing a better defined standard leaching test and making current waste acceptance criteria more flexible.

  11. Formation of Hot Tear Under Controlled Solidification Conditions

    NASA Astrophysics Data System (ADS)

    Subroto, Tungky; Miroux, Alexis; Bouffier, Lionel; Josserond, Charles; Salvo, Luc; Suéry, Michel; Eskin, Dmitry G.; Katgerman, Laurens

    2014-06-01

    Aluminum alloy 7050 is known for its superior mechanical properties and thus finds application in the aerospace industry. The vertical direct-chill (DC) casting process is typically employed for producing such an alloy. Despite its advantages, AA7050 is considered a "hard-to-cast" alloy because of its propensity to cold cracking. This type of crack occurs catastrophically and is difficult to predict. Previous research suggested that such a crack could be initiated by undeveloped hot tears (microscopic hot tears) formed during the DC casting process if they reach a certain critical size. However, this hypothesis has not yet been validated. Therefore, a method to produce a hot tear with a controlled size is needed as part of the verification studies. In the current study, we demonstrate a method that has the potential to control the size of the created hot tear in a small-scale solidification process. We found that by changing two variables, cooling rate and displacement compensation rate, the size of the hot tear during solidification can be modified in a controlled way. An X-ray microtomography characterization technique is utilized to quantify the created hot tear. We suggest that feeding and strain rate during DC casting are more important for the formation of a hot tear than the force exerted on the sample. In addition, we show that there are four different domains of hot-tear development in the explored experimental window: compression, microscopic hot tear, macroscopic hot tear, and failure. The samples produced in the current study will be used for subsequent experiments that simulate cold-cracking conditions to confirm the earlier proposed model.

  12. The Nature and Origin of UCDs in the Coma Cluster

    NASA Astrophysics Data System (ADS)

    Chiboucas, Kristin; Tully, R. Brent; Madrid, Juan; Phillipps, Steven; Carter, David; Peng, Eric

    2018-01-01

    UCDs are super-massive star clusters found largely in dense regions, but they have also been found around individual galaxies and in smaller groups. Their origin is still under debate, but currently favored scenarios include formation as giant star clusters, either as the brightest globular clusters or through mergers of super star clusters, themselves formed during major galaxy mergers, or as remnant nuclei from tidal stripping of nucleated dwarf ellipticals. Establishing the nature of these enigmatic objects has important implications for our understanding of star formation, star cluster formation, the missing satellite problem, and galaxy evolution. We are attempting to disentangle these competing formation scenarios with a large survey of UCDs in the Coma cluster. Using ACS two-passband imaging from the HST/ACS Coma Cluster Treasury Survey, we are using colors and sizes to identify the UCD cluster members. With a large, size-limited sample of the UCD population within the core region of the Coma cluster, we are investigating the population size, properties, and spatial distribution, and comparing these with the Coma globular cluster and nuclear star cluster populations to discriminate between the threshing and globular cluster scenarios. In previous work, we had found a possible correlation of UCD colors with host galaxy and a possible excess of UCDs around a non-central giant galaxy with an unusually large globular cluster population, both suggestive of a globular cluster origin. With a larger sample size and additional imaging fields that encompass the regions around these giant galaxies, we have found that the color correlation with host persists and that the giant galaxy with the unusually large globular cluster population does appear to host a large UCD population as well. We present the current status of the survey.

  13. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
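
    The kind of achieved-power computation surveyed above can be reproduced with standard tools. A minimal sketch using statsmodels' TTestIndPower for a two-sample t-test; the effect sizes and group sizes are illustrative, echoing the large-effect/small-sample versus small-effect/large-sample contrast described in the abstract.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Large effect with a small sample vs small effect with a large sample
# (values illustrative, not taken from the surveyed studies).
for d, n in [(0.8, 15), (0.2, 400)]:
    power = analysis.power(effect_size=d, nobs1=n, alpha=0.05)
    print(f"d={d}, n per group={n}: power={power:.2f}")
```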

  14. Isolating magnetic moments from individual grains within a magnetic assemblage

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Fabian, K.; Jansen, C.; Lascu, I.; Harrison, R.; Barnhoorn, A.; de Groot, L. V.

    2017-12-01

    Methods to derive paleodirections or paleointensities from rocks currently rely on measurements of bulk samples (typically 10 cc). The process of recording and storing magnetizations as a function of temperature, however, differs for grains of various sizes and chemical compositions. Most rocks, by their mere nature, consist of assemblages of grains varying in size, shape, and chemistry. Unraveling the behavior of individual grains is a holy grail in fundamental rock magnetism. Recently, we showed that it is possible to obtain plausible magnetic moments for individual grains in a synthetic sample by a micromagnetic tomography (MMT) technique. We use a least-squares inversion to obtain these magnetic moments based on the physical locations and dimensions of the grains obtained from a MicroCT scanner and a magnetic flux density map of the surface of the sample. The sample used for this proof of concept, however, was optimized for success: it had a low dispersion of grains, and the grains were large enough to be easily detected by the MicroCT scanner. Natural lavas are much more complex than the synthetic sample analyzed so far: the dispersion of the magnetic markers is one order of magnitude higher, the grains differ more in composition and size, and many small (submicron) magnetic markers may be present that go undetected by the MicroCT scanner. Here we present the first results derived from a natural volcanic sample from the 1907 flow at Hawaii. To analyze the magnetic flux at the surface of the sample at room temperature, we used the Magnetic Tunneling Junction (MTJ) technique. We were able to successfully obtain MicroCT and MTJ scans of the sample and isolate plausible magnetic moments for individual grains in the top 70 µm of the sample. We discuss the potential of the MMT technique applied to natural samples and compare the MTJ and SSM (scanning SQUID microscopy) methods in terms of work flow and quality of the results.

  15. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  16. Sample Size Calculations for Micro-randomized Trials in mHealth

    PubMed Central

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831

  17. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  18. The legibility of prescription medication labelling in Canada

    PubMed Central

    Ahrens, Kristina; Krishnamoorthy, Abinaya; Gold, Deborah; Rojas-Fernandez, Carlos H.

    2014-01-01

    Introduction: The legibility of medication labelling is a concern for all Canadians, because poor or illegible labelling may lead to miscommunication of medication information and poor patient outcomes. There are currently few guidelines and no regulations regarding print standards on medication labels. This study analyzed sample prescription labels from Ontario, Canada, and compared them with print legibility guidelines (both generic and specific to medication labels). Methods: Cluster sampling was used to randomly select a total of 45 pharmacies in the tri-cities of Kitchener, Waterloo and Cambridge. Pharmacies were asked to supply a regular label with a hypothetical prescription. The print characteristics of patient-critical information were compared against the recommendations for prescription labels by pharmaceutical and health organizations and for print accessibility by nongovernmental organizations. Results: More than 90% of labels followed the guidelines for font style, contrast, print colour and nonglossy paper. However, only 44% of the medication instructions met the minimum guideline of 12-point print size, and none of the drug or patient names met this standard. Only 5% of the labels were judged to make the best use of space, and 51% used left alignment. None of the instructions were in sentence case, as is recommended. Discussion: We found discrepancies between guidelines and current labels in print size, justification, spacing and methods of emphasis. Conclusion: Improvements in pharmacy labelling are possible without moving to new technologies or changing the size of labels and would be expected to enhance patient outcomes. PMID:24847371

  19. Interventions targeting substance abuse among women survivors of intimate partner abuse: a meta-analysis.

    PubMed

    Fowler, Dawnovise N; Faulkner, Monica

    2011-12-01

    In this article, meta-analytic techniques are used to examine existing intervention studies (n = 11) to determine their effects on substance abuse among female samples of intimate partner abuse (IPA) survivors. This research serves as a starting point for greater attention in research and practice to the implementation of evidence-based, integrated services to address co-occurring substance abuse and IPA victimization among women as major intersecting public health problems. The results show greater effects in three main areas. First, greater effect sizes exist in studies where larger numbers of women experienced current IPA. Second, studies with a lower mean age also showed greater effect sizes than studies with a higher mean age. Lastly, studies with smaller sample sizes have greater effects. This research helps to facilitate cohesion in the knowledge base on this topic, and the findings of this meta-analysis, in particular, contribute needed information to gaps in the literature on the level of promise of existing interventions to impact substance abuse in this underserved population. Published by Elsevier Inc.

  20. A meta-analytic review of overgeneral memory: The role of trauma history, mood, and the presence of posttraumatic stress disorder.

    PubMed

    Ono, Miyuki; Devilly, Grant J; Shum, David H K

    2016-03-01

    A number of studies suggest that a history of trauma, depression, and posttraumatic stress disorder (PTSD) are associated with autobiographical memory deficits, notably overgeneral memory (OGM). However, whether there are group differences in the nature and magnitude of OGM has not been evaluated. Thus, a meta-analysis was conducted to quantify group differences in OGM. Effect sizes were pooled from studies examining the effect on OGM of a history of trauma (e.g., childhood sexual abuse) and the presence of PTSD or current depression (e.g., major depressive disorder). Using multiple search engines, 13 trauma studies and 12 depression studies were included in this review. A depression effect on OGM was observed with a large effect size and was most evident in the lack of specific memories, especially to positive cues. An effect of trauma history on OGM was observed with a medium effect size and was most evident in the presence of overgeneral responses to negative cues. The results also suggested an amplified memory deficit in the presence of PTSD; that is, the effect sizes of OGM among individuals with PTSD were very large and relatively equal across different types of OGM. Future studies that directly compare the differences in OGM among 4 samples (i.e., controls, current depression without trauma history, trauma history without depression, and trauma history and depression) would be warranted to verify the current findings. (c) 2016 APA, all rights reserved.

  1. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
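
    To illustrate the general idea of an effective sample size under phylogenetic correlation, here is a minimal sketch of one common definition, ESS = 1'R⁻¹1, the effective number of independent observations behind a GLS estimate of the mean, for a Brownian-motion correlation matrix built from a balanced four-taxon tree. Both the formula choice and the tree are illustrative of the concept, not a reimplementation of the paper's regression effective sample size.

```python
import numpy as np

# Under Brownian motion, trait correlations equal shared branch length from
# the root divided by total tree depth. Balanced 4-taxon tree of depth 1
# with two cherries splitting at depth 0.5 (illustrative).
R = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])

ones = np.ones(len(R))
ess = ones @ np.linalg.solve(R, ones)   # 1' R^{-1} 1
print(f"effective sample size: {ess:.2f} (vs 4 nominal tips)")
```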

  2. Nanosized LiMyMn2-yO4 (M = Cr, Co and Ni) spinels synthesized by a sucrose-aided combustion method. Structural characterization and electrochemical properties

    NASA Astrophysics Data System (ADS)

    Amarilla, J. M.; Rojas, R. M.; Pico, F.; Pascual, L.; Petrov, K.; Kovacheva, D.; Lazarraga, M. G.; Lejona, I.; Rojo, J. M.

    Spinels of composition LiMyMn2-yO4 (M = Cr3+, Co3+, or Ni2+; y = 0.1 and 1 for the Cr and Co dopants, y = 0.05 and 0.5 for the Ni samples) have been synthesized by a sucrose-aided combustion method. The as-prepared samples require an additional thermal treatment at 700 °C for 1 h to obtain stoichiometric single-phase spinels. The samples consist of aggregated particles of small size (45-50 nm), as deduced from transmission electron microscopy and X-ray powder diffraction. The electrochemical behaviour of the six spinels as cathodes in lithium cells has been analysed at 5 and 4 V under high current (1 C rate). At 5 V the discharge capacity of LiNi0.5Mn1.5O4 is higher than that shown by LiCrMnO4 and LiCoMnO4, and it shows elevated cyclability, i.e., capacity retention of 85.3% after 100 cycles. At 4 V the discharge capacity is similar for LiNi0.05Mn1.95O4, LiCr0.1Mn1.9O4 and LiCo0.1Mn1.9O4, and all three spinels show similarly high cyclability, i.e., capacity retention >90% after 100 cycles. The spinels preserve their starting capacity at currents as high as 2 C rate. The nanometric size of the samples explains the high rate capability of the synthesized spinels.

  3. Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?

    NASA Astrophysics Data System (ADS)

    Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.

    2017-12-01

    Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has even been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible for both setting initial outer TC size and determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius in which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin during future climates, as revealed by 1) statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between distributions of outer TC size from current and future climate simulations as shown using two-sample Kolmogorov Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not uniformly increase within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help focus on identifying the source of the spatial variability in outer TC size increases within each basin during future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
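
    A minimal sketch of the two significance checks described above: a bootstrap confidence interval for the shift in median outer size, and a two-sample Kolmogorov-Smirnov test on the full distributions. The size samples below are synthetic stand-ins for the climate-model output.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic outer-size samples (km): future climate shifted ~7% larger.
current = rng.lognormal(mean=5.5, sigma=0.3, size=2000)
future = current * 1.07 * rng.lognormal(0, 0.02, size=2000)

# 1,000-sample bootstrap (with replacement) of the median difference.
diffs = [np.median(rng.choice(future, future.size))
         - np.median(rng.choice(current, current.size))
         for _ in range(1000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"median shift 95% CI: [{lo:.1f}, {hi:.1f}] km")

# Two-sample Kolmogorov-Smirnov test comparing the distributions.
stat, p = ks_2samp(current, future)
print(f"KS statistic={stat:.3f}, p={p:.2e}")
```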

  4. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscope: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, with a method developed in our laboratory. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors according to the Cells Analyzer software. Endothelial samples need to include more cells for examinations to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
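
    The relation between relative error and the required cell count can be sketched with the usual normal-approximation formula n = (z·CV/RE)², where CV is the coefficient of variation of the measured cell parameter. This is a generic estimator-precision argument, not the proprietary routine in the Cells Analyzer software, and the CV values are hypothetical.

```python
from scipy.stats import norm

def required_cells(cv, rel_error=0.05, reliability=0.95):
    """Cells needed so the sample mean is within rel_error of the true
    mean at the given reliability: n = (z * CV / RE)^2 (normal approx.)."""
    z = norm.ppf(1 - (1 - reliability) / 2)
    return (z * cv / rel_error) ** 2

# Hypothetical coefficients of variation of endothelial cell area.
for cv in (0.25, 0.35, 0.45):
    print(f"CV={cv:.2f}: need about {required_cells(cv):.0f} cells")
```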

  5. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
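
    A minimal sketch of the design-effect logic behind such a calculator: if a fraction π of infants belong to a twin pair and outcomes within a pair correlate with ICC ρ, the variance of a mean over N infants inflates by 1 + πρ, so the independent-data sample size is multiplied by that factor. This is the standard variance argument, offered as an illustration rather than the authors' Excel/Shiny tool; all numbers are illustrative.

```python
from scipy.stats import norm

def n_with_twins(delta, sd, prop_twin_infants, icc,
                 alpha=0.05, power=0.8):
    """Per-arm sample size (infants) for a continuous outcome, inflating
    the usual two-sample formula by the twin design effect 1 + pi*rho."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_indep = 2 * (z * sd / delta) ** 2
    return n_indep * (1 + prop_twin_infants * icc)

# 20% of infants are twins; ICC of 0.7 for the outcome (illustrative).
print(f"infants per arm: {n_with_twins(0.3, 1.0, 0.20, 0.7):.0f}")
```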

  6. 3D contour fluorescence spectroscopy with Brus model: Determination of size and band gap of double stranded DNA templated silver nanoclusters

    NASA Astrophysics Data System (ADS)

    Kamalraj, Devaraj; Yuvaraj, Selvaraj; Yoganand, Coimbatore Paramasivam; Jaffer, Syed S.

    2018-01-01

    Here, we propose a new synthetic methodology for silver nanocluster preparation using a double-stranded DNA (ds-DNA) template, which has not been reported previously. A new calculation method was formulated to determine the size of the nanoclusters and their band gaps by using the steady-state 3D contour fluorescence technique with the Brus model. The structure and size of nanoclusters are generally determined by High Resolution Transmission Electron Microscopy (HR-TEM). Before HR-TEM imaging, however, samples are subjected to a drying process that causes aggregation and forms larger polycrystalline particles; the method is also time-consuming and expensive. In the current methodology, we determined the size and band gap of the nanoclusters in liquid form, without any polycrystalline aggregation, using the 3D contour fluorescence technique as an alternative to the HR-TEM method.
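
    The Brus model referred to above relates the optical gap of a quantum-confined particle to its radius, E(R) = Eg + (ħ²π²/2R²)(1/me + 1/mh) - 1.8e²/(4πε0εrR), so a measured emission energy can be inverted numerically to estimate size. In this sketch the bulk gap, effective masses and dielectric constant are placeholder values, since the paper's parameters for Ag nanoclusters are not given here.

```python
from scipy.constants import hbar, pi, e, epsilon_0, m_e
from scipy.optimize import brentq

# Placeholder material parameters (illustrative, not fitted to Ag clusters).
E_BULK = 2.0          # bulk band gap, eV
ME, MH = 0.3, 0.4     # effective masses in units of the electron mass
EPS_R = 5.0           # relative dielectric constant

def brus_gap(radius):
    """Brus-model optical gap (eV) for a spherical particle of given
    radius (m): bulk gap + confinement term - Coulomb term."""
    confinement = (hbar**2 * pi**2 / (2 * radius**2)
                   * (1 / (ME * m_e) + 1 / (MH * m_e))) / e
    coulomb = 1.8 * e / (4 * pi * epsilon_0 * EPS_R * radius)
    return E_BULK + confinement - coulomb

# Invert: find the radius whose predicted gap matches an observed 2.8 eV.
radius = brentq(lambda r: brus_gap(r) - 2.8, 0.3e-9, 20e-9)
print(f"estimated radius: {radius * 1e9:.2f} nm")
```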

  7. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. The simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method are superior to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
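
    A minimal sketch of the Sobel test that the simulations compare, together with a simulation-style power estimate under a simple single-level mediation model (X -> M -> Y with no direct effect). The multilevel structure and the other two tests are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def sobel_z(a, se_a, b, se_b):
    """Sobel first-order z-statistic for the mediated effect a*b."""
    return a * b / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

def sobel_power(n, a=0.3, b=0.3, n_sim=2000, alpha=0.05):
    """Monte Carlo power of the Sobel test: X -> M -> Y with unit errors."""
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        # OLS slopes and approximate standard errors for the two paths.
        a_hat = np.cov(x, m)[0, 1] / x.var(ddof=1)
        se_a = np.sqrt((m - a_hat * x).var(ddof=1) / (n * x.var(ddof=1)))
        b_hat = np.cov(m, y)[0, 1] / m.var(ddof=1)
        se_b = np.sqrt((y - b_hat * m).var(ddof=1) / (n * m.var(ddof=1)))
        hits += abs(sobel_z(a_hat, se_a, b_hat, se_b)) > norm.ppf(1 - alpha / 2)
    return hits / n_sim

for n in (100, 200, 400):
    print(f"n={n}: Sobel power ~ {sobel_power(n):.2f}")
```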

  8. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
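
    A quick numerical check of this point, assuming simple random sampling and the usual normal-approximation margin of error (the population sizes below are illustrative):

```python
# For a fixed absolute sample size n, the 95% margin of error barely
# changes with population size N once N >> n, because the finite
# population correction approaches 1. "Pot sizes" below are illustrative.
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

for N in (10_000, 1_000_000, 300_000_000):
    print(N, round(margin_of_error(n=1000, N=N), 4))
# ~0.03 in every case: accuracy tracks n, not n/N.
```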

  9. Virtual reality gaming in the rehabilitation of the upper extremities post-stroke.

    PubMed

    Yates, Michael; Kelemen, Arpad; Sik Lanyi, Cecilia

    2016-01-01

    Occurrences of strokes often result in unilateral upper limb dysfunction. Dysfunctions of this nature frequently persist and can present chronic limitations to activities of daily living. Research into applying virtual reality gaming systems to provide rehabilitation therapy has seen a resurgence. Themes explored in stroke rehabilitation for paretic limbs are action observation and imitation; versatility; intensity and repetition; and preservation of gains. Fifteen articles were ultimately selected for review. The purpose of this literature review is to compare the various virtual reality gaming modalities in the current literature and ascertain their efficacy. The literature supports the use of virtual reality gaming rehabilitation therapy as equivalent to traditional therapies or as a successful augmentation to those therapies. While some degree of rigor was displayed in the literature, small sample sizes, variation in study lengths and therapy durations, and unequal controls reduce generalizability and comparability. Future studies should incorporate larger sample sizes and post-intervention follow-up measures.

  10. Microwave Nondestructive Evaluation of Dielectric Materials with a Metamaterial Lens

    NASA Technical Reports Server (NTRS)

    Shreiber, Daniel; Gupta, Mool; Cravey, Robin L.

    2008-01-01

    A novel microwave Nondestructive Evaluation (NDE) sensor was developed in an attempt to increase the sensitivity of the microwave NDE method for detection of defects small relative to a wavelength. The sensor was designed on the basis of a negative index material (NIM) lens. Characterization of the lens was performed to determine its resonant frequency, index of refraction, focus spot size, and optimal focusing length (for proper sample location). A sub-wavelength spot size (3 dB) of 0.48 lambda was obtained. The proof of concept for the sensor was achieved when a fiberglass sample with a 3 mm diameter through hole (perpendicular to the propagation direction of the wave) was tested. The hole was successfully detected with an 8.2 cm wavelength electromagnetic wave. This method is able to detect a defect that is 0.037 lambda. This method has certain advantages over other far field and near field microwave NDE methods currently in use.

  11. Precise Manipulation and Patterning of Protein Crystals for Macromolecular Crystallography Using Surface Acoustic Waves.

    PubMed

    Guo, Feng; Zhou, Weijie; Li, Peng; Mao, Zhangming; Yennawar, Neela H; French, Jarrod B; Huang, Tony Jun

    2015-06-01

    Advances in modern X-ray sources and detector technology have made it possible for crystallographers to collect usable data on crystals of only a few micrometers or less in size. Despite these developments, sample handling techniques have significantly lagged behind and often prevent the full realization of current beamline capabilities. In order to address this shortcoming, a surface acoustic wave-based method for manipulating and patterning crystals is developed. This method, which does not damage the fragile protein crystals, can precisely manipulate and pattern micrometer and submicrometer-sized crystals for data collection and screening. The technique is robust, inexpensive, and easy to implement. This method not only promises to significantly increase efficiency and throughput of both conventional and serial crystallography experiments, but will also make it possible to collect data on samples that were previously intractable. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Multiple defocused coherent diffraction imaging: method for simultaneously reconstructing objects and probe using X-ray free-electron lasers.

    PubMed

    Hirose, Makoto; Shimomura, Kei; Suzuki, Akihiro; Burdet, Nicolas; Takahashi, Yukio

    2016-05-30

    The sample size must be less than the diffraction-limited focal spot size of the incident beam in single-shot coherent X-ray diffraction imaging (CXDI) based on a diffract-before-destruction scheme using X-ray free-electron lasers (XFELs). This is currently a major limitation preventing its wider application. We here propose multiple defocused CXDI, in which isolated objects are sequentially illuminated with a divergent beam larger than the objects and the coherent diffraction pattern of each object is recorded. This method can simultaneously reconstruct both the objects and the probe from the coherent X-ray diffraction patterns without any a priori knowledge. We performed a computer simulation of the proposed method and then successfully demonstrated it in a proof-of-principle experiment at SPring-8. The proposed method allows us not only to observe broad samples but also to characterize focused XFEL beams.

  13. Annual variation in polychlorinated biphenyl (PCB) exposure in tree swallow (Tachycineta bicolor) eggs and nestlings at Great Lakes Restoration Initiative (GLRI) study sites

    USGS Publications Warehouse

    Custer, Christine M.; Custer, Thomas W.; Dummer, Paul; Goldberg, Diana R.; Franson, J. Christian

    2018-01-01

    Tree swallow (Tachycineta bicolor) eggs and nestlings were collected from 16 sites across the Great Lakes to quantify normal annual variation in total polychlorinated biphenyl (PCB) exposure and to validate the sample size choice in earlier work. A sample size of five eggs or five nestlings per site was adequate to quantify exposure to PCBs in tree swallows given the current exposure levels and variation. There was no difference in PCB exposure in two randomly selected sets of five eggs collected in the same year, but analyzed in different years. Additionally, there was only modest annual variation in exposure, with between 69% (nestlings) and 73% (eggs) of sites having no differences between years. There was a tendency, both statistically and qualitatively, for there to be less exposure in the second year compared to the first year.

  14. Determination of Mercury in Aqueous and Geologic Materials by Continuous Flow-Cold Vapor-Atomic Fluorescence Spectrometry (CVAFS)

    USGS Publications Warehouse

    Hageman, Philip L.

    2007-01-01

    New methods for the determination of total mercury in geologic materials and dissolved mercury in aqueous samples have been developed that will replace the methods currently (2006) in use. The new methods eliminate the use of sodium dichromate (Na2Cr2O7·2H2O) as an oxidizer and preservative and significantly lower the detection limit for geologic and aqueous samples. The new methods also update instrumentation from the traditional cold vapor-atomic absorption spectrometry to cold vapor-atomic fluorescence spectrometry. At the same time, the new digestion procedures for geologic materials use the same size test tubes and the same aluminum heating block and hot plate as required by the current methods. New procedures for collecting and processing aqueous samples follow those currently (2006) in use, except that the samples are now preserved with concentrated hydrochloric acid/bromine monochloride instead of sodium dichromate/nitric acid. Both the old and new methods have the same analyst productivity rates. These similarities should permit easy migration to the new methods. Analysis of geologic and aqueous reference standards using the new methods shows that these procedures provide mercury recoveries that are as good as or better than those of the previously used methods.

  15. Prediction of Active-Region CME Productivity from Magnetograms

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.

    2004-01-01

    We report results of an expanded evaluation of whole-active-region magnetic measures as predictors of active-region coronal mass ejection (CME) productivity. Previously, in a sample of 17 vector magnetograms of 12 bipolar active regions observed by the Marshall Space Flight Center (MSFC) vector magnetograph, from each magnetogram we extracted a measure of the size of the active region (the active region's total magnetic flux Φ) and four measures of the nonpotentiality of the active region: the strong-shear length L_SS, the strong-gradient length L_SG, the net vertical electric current I_N, and the net-current magnetic twist parameter α_IN. This sample size allowed us to show that each of the four nonpotentiality measures was statistically significantly correlated with active-region CME productivity in time windows of a few days centered on the day of the magnetogram. We have now added a fifth measure of active-region nonpotentiality (the best-constant-alpha magnetic twist parameter α_BC), and have expanded the sample to 36 MSFC vector magnetograms of 31 bipolar active regions. This larger sample allows us to demonstrate statistically significant correlations of each of the five nonpotentiality measures with future CME productivity, in time windows of a few days starting from the day of the magnetogram. The two magnetic twist parameters (α_IN and α_BC) are normalized measures of an active region's nonpotentiality in that they do not depend directly on the size of the active region, while the other three nonpotentiality measures (L_SS, L_SG, and I_N) are non-normalized measures in that they do depend directly on active-region size. We find: (1) Each of the five nonpotentiality measures is statistically significantly correlated (correlation confidence level greater than 95%) with future CME productivity and has a CME prediction success rate of approximately 80%. (2) None of the nonpotentiality measures is a significantly better CME predictor than the others. (3) The active-region flux Φ shows some correlation with CME productivity, but well below a statistically significant level (correlation confidence level less than approximately 80%; CME prediction success rate less than approximately 65%). (4) In addition to depending on magnetic twist, CME productivity appears to have some direct dependence on active-region size (rather than only an indirect dependence through a correlation of magnetic twist with active-region size), but it will take a still larger sample of active regions (50 or more) to certify this. (5) Of the five nonpotentiality measures, L_SG appears to be the best for operational CME forecasting because it is as good as or better a CME predictor than the others and it alone does not require a vector magnetogram; L_SG can be measured from a line-of-sight magnetogram such as from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO).

  16. Bathymetry, substrate and circulation in Westcott Bay, San Juan Islands, Washington

    USGS Publications Warehouse

    Grossman, Eric E.; Stevens, Andrew W.; Curran, Chris; Smith, Collin; Schwartz, Andrew

    2007-01-01

    Nearshore bathymetry, substrate type, and circulation patterns in Westcott Bay, San Juan Islands, Washington, were mapped using two acoustic sonar systems, video, and direct sampling of seafloor sediments. The goal of the project was to characterize nearshore habitat and conditions influencing eelgrass (Z. marina), where extensive loss has occurred since 1995. A principal hypothesis for the loss of eelgrass is a recent decrease in light availability for eelgrass growth due to an increase in turbidity associated with either increased fine sedimentation or biological productivity within the bay. To explore sources for this fine sediment and turbidity, a dual-frequency Biosonics sonar operating at 200 and 430 kHz was used to map seafloor depth, morphology, and vegetation along 69 linear kilometers of the bay. The higher frequency 430 kHz system also provided information on particulate concentrations in the water column. A boat-mounted 600 kHz RDI Acoustic Doppler Current Profiler (ADCP) was used to map current velocity and direction and water column backscatter intensity along another 29 km, with select measurements made to characterize variations in circulation with tides. An underwater video camera was deployed to ground-truth the acoustic data. Seventy-one sediment samples were collected to quantify sediment grain size distributions across Westcott Bay; the samples were analyzed for grain size at the Western Coastal and Marine Geology Team sediment laboratory in Menlo Park, Calif. These data reveal that the seafloor near the entrance to Westcott Bay is rocky, with complex morphology, and covered with dense and diverse benthic vegetation. Current velocities were also measured to be highest at the entrance and along a deep channel extending 1 km into the bay. The substrate is increasingly composed of finer sediments with distance into Westcott Bay, where current velocities are lower. This report describes the data collected and preliminary findings of USGS Cruise B-6-07-PS conducted between May 31, 2007 and June 5, 2007.

  17. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of the SD, the 70% UCL, and the 60% UCL to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of the SD, the maximum SD, the 80th-percentile SD, and the 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or the average SD to calculate sample size should be avoided.
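
    A minimal sketch of the recommended practice, assuming a two-sample comparison of means and the usual chi-square-based one-sided upper confidence limit (UCL) for an SD; the pilot SD, pilot sample size, and effect size below are hypothetical, not the paper's data.

```python
# Plan the sample size with an upper confidence limit of the sample SD
# rather than the sample SD itself, to guard against underestimating
# the population SD. All numeric inputs are hypothetical.
from scipy import stats

def sd_ucl(s, n, gamma=0.60):
    """One-sided upper confidence limit of the SD at confidence gamma."""
    return s * ((n - 1) / stats.chi2.ppf(1 - gamma, n - 1)) ** 0.5

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

s_pilot, n_pilot = 40.0, 20
print(round(n_per_group(delta=22, sd=s_pilot)))                   # naive: ~52
print(round(n_per_group(delta=22, sd=sd_ucl(s_pilot, n_pilot))))  # inflated, safer
```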

  18. Influence of size and shape of sub-micrometer light scattering centers in ZnO-assisted TiO2 photoanode for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Pham, Trang T. T.; Mathews, Nripan; Lam, Yeng-Ming; Mhaisalkar, Subodh

    2018-03-01

    Sub-micrometer cavities have been incorporated into the TiO2 photoanode of dye-sensitized solar cells to enhance its optical properties through light scattering. These are large pores, several hundred nanometers in size, that scatter incident light owing to the refractive-index difference between the scattering center and the surrounding material, in accordance with Mie theory. The pores are created using previously reported polystyrene (PS) or zinc oxide (ZnO) templates, which result in ellipsoidal and spherical shapes, respectively. The effect of the size and shape of the scattering center was modeled using finite-difference time-domain (FDTD) numerical analysis. The scattering cross-section was not affected significantly by shape when the total displaced volume of the scattering center was comparable. Experiments were carried out to evaluate the optical properties for varying sizes of ZnO templates. The photovoltaic performance of dye-sensitized solar cells made from these ZnO-assisted films was investigated with incident-photon-to-current efficiency measurements to understand the effect of scattering center size on the enhancement of absorption. With 380 nm macropores incorporated, the power conversion efficiency increased by 11%, mostly owing to the improved current density, whereas the 170 nm and 500 nm macropore samples did not show enhancement over a sufficiently wide range of absorption wavelengths.

  19. Characterization of Total and Size-Fractionated Manganese Exposure by Work Area in a Shipbuilding Yard.

    PubMed

    Jeong, Jee Yeon; Park, Jong Su; Kim, Pan Gyi

    2016-06-01

    Shipbuilding involves intensive welding activities, and welders are exposed to a variety of metal fumes, including manganese, that may be associated with neurological impairments. This study aimed to characterize total and size-fractionated manganese exposure resulting from welding operations in shipbuilding work areas. In this study, we characterized manganese-containing particulates with an emphasis on total mass (n = 86, closed-face 37-mm cassette samplers) and particle size-selective mass concentrations (n = 86, 8-stage cascade impactor samplers), particle size distributions, and a comparison of exposure levels determined using personal cassette and impactor samplers. Our results suggest that 67.4% of all samples were above the current American Conference of Governmental Industrial Hygienists manganese threshold limit value of 100 μg/m³ as inhalable mass. Furthermore, most of the manganese-containing particles generated in the welding process were of respirable size, and 90.7% of all samples exceeded the American Conference of Governmental Industrial Hygienists threshold limit value of 20 μg/m³ for respirable manganese. The concentrations measured with the two sampler types (cassette: total mass; impactor: inhalable mass) were significantly correlated (r = 0.964, p < 0.001), but the total concentration obtained using cassette samplers was lower than the inhalable concentration from impactor samplers.

  20. Augmented Currents of an HCN2 Variant in Patients with Febrile Seizure Syndromes

    PubMed Central

    Dibbens, Leanne M.; Reid, Christopher A.; Hodgson, Bree; Thomas, Evan A.; Phillips, Alison M.; Gazina, Elena; Cromer, Brett A.; Clarke, Alison L.; Baram, Tallie Z.; Scheffer, Ingrid E.; Berkovic, Samuel F.; Petrou, Steven

    2012-01-01

    The genetic architecture of common epilepsies is largely unknown. HCNs are excellent epilepsy candidate genes because of their fundamental neurophysiological roles. Screening in subjects with febrile seizures and genetic epilepsy with febrile seizures plus revealed that 2.4% carried a common triple proline deletion (delPPP) in HCN2 that was seen in only 0.2% of blood bank controls. Currents generated by mutant HCN2 channels were ~35% larger than those of controls; an effect revealed using automated electrophysiology and an appropriately powered sample size. This is the first association of HCN2 and familial epilepsy, demonstrating gain of function of HCN2 current as a potential contributor to polygenic epilepsy. PMID:20437590

  1. The Views of Mathematics Teachers on the Factors Affecting the Integration of Technology in Mathematics Courses

    ERIC Educational Resources Information Center

    Kaleli-Yilmaz, Gül

    2015-01-01

    The aim of this study was to determine the views of mathematics teachers on the factors that affect the integration of technology in mathematic courses. It is a qualitative case study. The sample size of the study is 10 teachers who are receiving postgraduate education in a university in Turkey. The current study was conducted in three stages. At…

  2. The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.

    ERIC Educational Resources Information Center

    Macpherson, David A.; Even, William E.

    The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…

  3. Comparison of fluvial suspended-sediment concentrations and particle-size distributions measured with in-stream laser diffraction and in physical samples

    USGS Publications Warehouse

    Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.

    2015-01-01

    Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In-Situ Scattering and Transmissometry (LISST) instrument for measuring volumetric SSCs and PSDs ranging from 1.8-415 µm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 datasets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-12 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38-69,264 km²). An unrealistically low computed effective density (mass SSC / volumetric SSC) of 1.24 g/ml (95% confidence interval: 1.05-1.45 g/ml) provided the best-fit value (R² = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC over 2 orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of the SSC that can be measured by the LISST-SL), despite being substantially lower than the sediment particle density of 2.67 g/ml (range: 2.56-2.87 g/ml, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measurable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.

  4. Origin and heterogeneity of pore sizes in the Mount Simon Sandstone and Eau Claire Formation: Implications for multiphase fluid flow

    DOE PAGES

    Mozley, Peter S.; Heath, Jason E.; Dewers, Thomas A.; ...

    2016-01-01

    The Mount Simon Sandstone and Eau Claire Formation represent a principal reservoir-caprock system for wastewater disposal, geologic CO2 storage, and compressed air energy storage (CAES) in the Midwestern United States. Of primary concern to site performance is heterogeneity in flow properties that could lead to non-ideal injectivity and distribution of injected fluids (e.g., poor sweep efficiency). Using core samples from the Dallas Center Structure, Iowa, we investigate the pore structure that governs flow properties of major lithofacies of these formations. Methods include gas porosimetry and permeametry, mercury intrusion porosimetry, thin section petrography, and X-ray diffraction. The lithofacies exhibit highly variable intra- and inter-formational distributions of pore-throat and pore-body sizes. Based on pore-throat size, samples fall into four distinct groups. Micropore-throat dominated samples are from the Eau Claire Formation, whereas the macropore-, mesopore-, and uniform-dominated samples are from the Mount Simon Sandstone. Complex paragenesis governs the high degree of pore and pore-throat size heterogeneity, due to an interplay of precipitation, non-uniform compaction, and later dissolution of cements. Furthermore, the cement dissolution event probably accounts for much of the current porosity in the unit. The unusually heterogeneous nature of the pore networks in the Mount Simon Sandstone indicates that there is a greater-than-normal opportunity for reservoir capillary trapping of non-wetting fluids (as quantified by CO2 and air column heights), which should be taken into account when assessing the potential of the reservoir-caprock system for CO2 storage and CAES.

  5. Posterior dental size reduction in hominids: the Atapuerca evidence.

    PubMed

    Bermúdez de Castro, J M; Nicolas, M E

    1995-04-01

    In order to reassess previous hypotheses concerning dental size reduction of the posterior teeth during Pleistocene human evolution, current fossil dental evidence is examined. This evidence includes the large sample of hominid teeth found in recent excavations (1984-1993) in the Sima de los Huesos Middle Pleistocene cave site of the Sierra de Atapuerca (Burgos, Spain). The lower fourth premolars and molars of the Atapuerca hominids, probably older than 300 Kyr, have dimensions similar to those of modern humans. Further, these hominids share the derived state of other features of the posterior teeth with modern humans, such as a similar relative molar size and frequent absence of the hypoconulid, thus suggesting a possible case of parallelism. We believe that dietary changes allowed size reduction of the posterior teeth during the Middle Pleistocene, and the present evidence suggests that the selective pressures that operated on the size variability of these teeth were less restrictive than what is assumed by previous models of dental reduction. Thus, the causal relationship between tooth size decrease and changes in food-preparation techniques during the Pleistocene should be reconsidered. Moreover, the present evidence indicates that the differential reduction of the molars cannot be explained in terms of restriction of available growth space. The molar crown area measurements of a modern human sample were also investigated. The results of this study, as well as previous similar analyses, suggest that a decrease of the rate of cell proliferation, which affected the later-forming crown regions to a greater extent, may be the biological process responsible for the general and differential dental size reduction that occurred during human evolution.

  6. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m² quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10-m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
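
    A sketch of the fixed-precision logic, using the standard sample size formula for a negative binomial with common k; the target precision D and the density values are illustrative, and only k = 0.3742 comes from the abstract.

```python
# Number of 10 m^2 quadrats needed for a target precision D = SE/mean
# under a negative binomial with common k: n = (1/mean + 1/k) / D^2.
import math

def n_negative_binomial(mean, k, D=0.25):
    """Quadrats required so that the standard error is D times the mean."""
    return math.ceil((1 / mean + 1 / k) / D**2)

print(n_negative_binomial(mean=0.04, k=0.3742))  # sparse ticks -> 443 quadrats
print(n_negative_binomial(mean=0.50, k=0.3742))  # denser ticks -> 75 quadrats
```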

  7. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions are estimated from the reference's distribution; using that information, the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in R and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
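
    The package itself is R code on Bioconductor and is not reproduced here. The toy simulation below (in Python) only illustrates the underlying idea, per-gene power under negative binomial counts, and substitutes a crude t-test on log counts for the package's exact negative binomial tests and genome-wide FDR control; every parameter value is a hypothetical placeholder.

```python
# Crude Monte Carlo power estimate for one gene with negative binomial
# counts (mean lambda0, dispersion phi0), detecting a given fold change.
import numpy as np
from scipy import stats

def nb_counts(mu, phi, n, rng):
    r = 1.0 / phi                          # NB size parameter
    return rng.negative_binomial(r, r / (r + mu), size=n)

def power_one_gene(n, lambda0=5.0, phi0=1.0, fold=2.0,
                   alpha=0.05, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = nb_counts(lambda0, phi0, n, rng)
        b = nb_counts(lambda0 * fold, phi0, n, rng)
        _, p = stats.ttest_ind(np.log2(a + 1), np.log2(b + 1))
        hits += p < alpha
    return hits / reps

for n in (10, 20, 40, 80):                 # samples per group
    print(n, power_one_gene(n))
```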

  8. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the interior proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates relative to all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
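
    A minimal sketch of the recommended asymptotic unconditional approach, using a Connor-type formula for paired binary data; the discordant-cell probabilities below are hypothetical examples, not numbers from the paper.

```python
# Pairs required for a McNemar-type comparison of paired proportions,
# given discordant cell probabilities p10 and p01 of the 2 x 2 table.
import math
from scipy import stats

def n_mcnemar(p10, p01, alpha=0.05, power=0.80):
    psi, delta = p10 + p01, p10 - p01
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    n = (z_a * math.sqrt(psi) + z_b * math.sqrt(psi - delta**2)) ** 2
    return math.ceil(n / delta**2)

print(n_mcnemar(p10=0.20, p01=0.10))   # ~234 pairs
```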

  9. Environmental factors controlling the distribution of rhodoliths: An integrated study based on seafloor sampling, ROV and side scan sonar data, offshore the W-Pontine Archipelago

    NASA Astrophysics Data System (ADS)

    Sañé, E.; Chiocci, F. L.; Basso, D.; Martorelli, E.

    2016-10-01

    The effects of different environmental factors controlling the distribution of different morphologies, sizes, and growth forms of rhodoliths in the western Pontine Archipelago have been studied. The analysis of 231 grab samples has been integrated with 68 remotely operated vehicle (ROV) videos (22 h) and a high-resolution (<1 m) side scan sonar mosaic of the seafloor surrounding the Archipelago, covering an area of approximately 460 km². Living rhodoliths were collected in approximately 10% of the grab samples and observed in approximately 30% of the ROV dives. The combination of sediment sampling, video surveys, and acoustic facies mapping suggested that the presence of rhodoliths can be associated with the inhomogeneous high-backscatter sonar facies and the high-backscatter facies. Pralines and unattached branches were found to be the most abundant morphological groups (50% and 41% of samples, respectively), whereas boxwork rhodoliths were less common, accounting for less than 10% of the total number of samples. Pralines and boxwork rhodoliths were almost equally distributed among large (28%), medium (36%), and small (36%) sizes. Pralines generally presented a fruticose growth form (49% of pralines), although pralines with encrusting-warty (36%) or lumpy (15%) growth forms were also present. Morphologies, sizes, and growth forms vary mainly along the depth gradient. Large rhodoliths with a boxwork morphology are abundant at depth, whereas unattached branches and, in general, rhodoliths with a high degree of protuberance are abundant in shallow waters. Exposure to storm waves and bottom currents related to geostrophic circulation could explain the absence of rhodoliths off the eastern side of the three islands forming the Archipelago.

  10. Current and previous spatial distributions of oilseed rape fields influence the abundance and the body size of a solitary wild bee, Andrena cineraria, in permanent grasslands.

    PubMed

    Van Reeth, Colin; Caro, Gaël; Bockstaller, Christian; Michel, Nadia

    2018-01-01

    Wild bees are essential pollinators whose survival partly depends on the capacity of their environment to offer a sufficient amount of nectar and pollen. Semi-natural habitats and mass-flowering crops such as oilseed rape provide abundant floristic resources for bees. The aim of this study was to evaluate the influences of the spatial distribution of semi-natural habitats and oilseed rape fields on the abundance and the mean body size of a solitary bee in grasslands. We focused on a generalist mining bee, Andrena cineraria, that forages and reproduces during oilseed rape flowering. In 21 permanent grasslands of Eastern France, we captured 1 287 individuals (1 205 males and 82 females) and measured the body size of male individuals. The flower density in grasslands was quantified during bee captures (2016) and the landscape surrounding grasslands was characterized during two consecutive years (2015 and 2016). The influence of oilseed rape was tested through its distribution in the landscape during both the current year of bee sampling and the previous year. Bee abundance was positively influenced by the flower density in grasslands and by the area covered by oilseed rape around grasslands in the previous year. The mean body size of A. cineraria was explained by the interaction between flower density in the grassland and the distance to the nearest oilseed rape field in the current year: the flower density positively influenced the mean body size only in grasslands distant from oilseed rape. A. cineraria abundance and body size distribution were not affected by the area of semi-natural habitats in the landscape. The spatial distribution of oilseed rape fields (during both the current and the previous year) as well as the local density of grassland flowers drive both bee abundance and the mean value of an intraspecific trait (body size) in permanent grasslands. Space-time variations of bee abundance and mean body size in grasslands may have important ecological implications on plant pollination and on interspecific interactions between pollinators. Specifically, a competition between bee species for nesting sites might occur in oilseed rape rich landscapes, thus raising important conservation issues for bee species that do not benefit from oilseed rape resources.

  11. Combined investigation of Eddy current and ultrasonic techniques for composite materials NDE

    NASA Technical Reports Server (NTRS)

    Davis, C. W.; Nath, S.; Fulton, J. P.; Namkung, M.

    1993-01-01

    Advanced composites are not without trade-offs. Their increased designability brings an increase in the complexity of their internal geometry and, as a result, an increase in the number of failure modes associated with a defect. When two or more isotropic materials are combined in a composite, the isotropic material failure modes may also combine. In a laminate, matrix delamination, cracking and crazing, and voids and porosity will often combine with fiber breakage, shattering, waviness, and separation to bring about ultimate structural failure. This combining of failure modes can result in defect boundaries of different sizes, corresponding to the failure of each structural component. This paper discusses a dual-technology nondestructive evaluation (NDE) study, combining eddy current (EC) and ultrasonic (UT) techniques, of graphite/epoxy (gr/ep) laminate samples. Eddy current and ultrasonic raster (C-scan) imaging were used together to characterize the effects of mechanical impact damage, high-temperature thermal damage, and various types of inserts in gr/ep laminate samples of various stacking sequences.

  12. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
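
    To make the required elements concrete, here is a minimal example of a fully replicable calculation for a continuous outcome (normal approximation, two-arm parallel design); every number in it is a hypothetical illustration, and omitting any one element would make the target n unrecoverable.

```latex
% Two-arm parallel design, continuous outcome (normal approximation):
% every symbol below is one of the "required elements".
\[
  n_{\text{per arm}}
  \;=\; \frac{2\,\sigma^{2}\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}}{\delta^{2}},
  \qquad
  \text{e.g. } \sigma = 2.5,\; \delta = 1.0,\; \alpha = 0.05,\; 1-\beta = 0.80
  \;\Rightarrow\; n \approx 98 .
\]
```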

  13. Morphological effects on sensitivity of heterogeneous energetic materials

    NASA Astrophysics Data System (ADS)

    Roy, Sidhartha; Rai, Nirmal; Sen, Oishik; Udaykumar, H. S.

    2017-06-01

    The mesoscale physical response of heterogeneous energetics under shock loading is inherently linked to their microstructural characteristics. The current work demonstrates the connection between the microstructural features of a porous energetic material and its sensitivity. A unified level-set-based framework is developed to characterize the microstructures of a given sample. Several morphological metrics describing the mesoscale geometry of the materials are extracted using the current tool, including anisotropy, tortuosity, surface-to-volume ratio, nearest neighbors, and size and curvature distributions. The relevant metrics among those extracted are identified and correlated to the mesoscale response of the energetic materials under shock loading. Two classes of problems are considered here: (a) a field of idealized voids embedded in HMX and (b) real samples of pressed HMX. The effects of the stochasticity associated with void arrangements on the sensitivity of the energetic material samples are shown. In summary, this work demonstrates the relationship between mesoscale morphology and the shock response of heterogeneous energetic materials using a level-set-based framework.

  14. Effect of current density during electrodeposition on microstructure and hardness of textured Cu coating in the application of antimicrobial Al touch surface.

    PubMed

    Augustin, Arun; Huilgol, Prashant; Udupa, K Rajendra; Bhat K, Udaya

    2016-10-01

    Copper is a well-proven antimicrobial material that can be used in the form of a coating on touch surfaces. Such coatings can provide good service as touch surfaces for a long time only if they possess good mechanical properties such as scratch resistance and microhardness. In the present work, these mechanical properties were determined for electrodeposited copper thin films deposited on double-zincated aluminium. During deposition, the current density was varied from 2 A dm-2 to 10 A dm-2 to produce crystallite sizes in the range of 33.5 nm to 66 nm. The crystallite size was calculated from the X-ray peak broadening (Scherrer's formula) and later confirmed by TEM micrographs. The scratch hardness and microhardness of the coating were measured and correlated with the crystallite size in the copper coating. Both values were found to increase with decreasing crystallite size. The reduced crystallite size (Hall-Petch effect) and the preferred growth of the copper films along the (111) plane play a significant role in the increased hardness of the coating. Further, TEM analysis reveals the presence of nano-twins in the film deposited at higher current density, which contributed substantially to the sharp increase in coating hardness beyond the Hall-Petch effect alone. The antimicrobial ability of the coated sample was evaluated against Escherichia coli using the colony count method and compared with that of commercially available bulk copper: 94% of E. coli cells had died after six hours of exposure to the copper-coated surface. The morphology of the copper-treated cells was studied using SEM. Copyright © 2016 Elsevier Ltd. All rights reserved.
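
    The abstract names but does not state Scherrer's formula; its standard form, with K the shape factor (commonly taken as about 0.9), λ the X-ray wavelength, β the peak's full width at half maximum in radians, and θ the Bragg angle, is:

```latex
% Scherrer estimate of crystallite size D from X-ray peak broadening
\[
  D \;=\; \frac{K\,\lambda}{\beta\cos\theta}, \qquad K \approx 0.9
\]
```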

  15. Evaluation of statistical designs in phase I expansion cohorts: the Dana-Farber/Harvard Cancer Center experience.

    PubMed

    Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E

    2014-07-01

    Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but are frequently exceeding the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during the time period 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportions of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients to three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have dramatically changed in size and scientific scope within the last decade. Additional studies addressing the implications of this trend on research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.

  16. The episode of genetic drift defining the migration of humans out of Africa is derived from a large east African population size.

    PubMed

    Elhassan, Nuha; Gebremeskel, Eyoab Iyasu; Elnour, Mohamed Ali; Isabirye, Dan; Okello, John; Hussien, Ayman; Kwiatksowski, Dominic; Hirbo, Jibril; Tishkoff, Sara; Ibrahim, Muntaser E

    2014-01-01

    Human genetic variation, particularly in Africa, is still poorly understood, despite a consensus on the large African effective population size compared to populations from other continents. Based on sequencing of the mitochondrial Cytochrome C Oxidase subunit II (MT-CO2) and genome-wide microsatellite data, we observe evidence suggesting that the effective size (Ne) of humans is larger than current estimates, with a focus of increased genetic diversity in east Africa and an east African population size at least 2-6 fold larger than that of other populations. Both phylogenetic and network analyses indicate that east Africans possess more ancestral lineages than various continental populations, placing them at the root of the human evolutionary tree. Our results also affirm east Africa as the likely spot from which the migration towards Asia took place. The study reflects the remarkable level of sequence variation within east Africans in comparison to the global sample, and calls for further studies that may contribute towards filling the existing gaps in the database. The implication of these data for current genomic research is considerable, and the need to carry out well-defined studies of human genetic variation that include more African populations, particularly east Africans, is paramount.

  17. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team

    2012-12-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.

  18. The Population Structure of Glossina palpalis gambiensis from Island and Continental Locations in Coastal Guinea

    PubMed Central

    Solano, Philippe; Ravel, Sophie; Bouyer, Jeremy; Camara, Mamadou; Kagbadouno, Moise S.; Dyer, Naomi; Gardes, Laetitia; Herault, Damien; Donnelly, Martin J.; De Meeûs, Thierry

    2009-01-01

    Background We undertook a population genetics analysis of the tsetse fly Glossina palpalis gambiensis, a major vector of sleeping sickness in West Africa, using microsatellite and mitochondrial DNA markers. Our aims were to estimate effective population size and the degree of isolation between coastal sites on the mainland of Guinea and Loos Islands. The sampling locations encompassed Dubréka, the area with the highest Human African Trypanosomosis (HAT) prevalence in West Africa, mangrove and savannah sites on the mainland, and two islands, Fotoba and Kassa, within the Loos archipelago. These data are discussed with respect to the feasibility and sustainability of control strategies in those sites currently experiencing, or at risk of, sleeping sickness. Principal Findings We found very low migration rates between sites except between those sampled around the Dubréka area that seems to contain a widely dispersed and panmictic population. In the Kassa island samples, various effective population size estimates all converged on surprisingly small values (10

  19. Spatio-temporal population structuring and genetic diversity retention in depleted Atlantic Bluefin tuna of the Mediterranean Sea

    PubMed Central

    Riccioni, Giulia; Landi, Monica; Ferrara, Giorgia; Milano, Ilaria; Cariani, Alessia; Zane, Lorenzo; Sella, Massimo; Barbujani, Guido; Tinti, Fausto

    2010-01-01

    Fishery genetics have greatly changed our understanding of population dynamics and structuring in marine fish. In this study, we show that the Atlantic Bluefin tuna (ABFT, Thunnus thynnus), an oceanic predatory species exhibiting highly migratory behavior, large population size, and high potential for dispersal during early life stages, displays significant genetic differences over space and time, both at the fine and large scales of variation. We compared microsatellite variation of contemporary (n = 256) and historical (n = 99) biological samples of ABFTs of the central-western Mediterranean Sea, the latter dating back to the early 20th century. Measures of genetic differentiation and a general heterozygote deficit suggest that differences exist among population samples, both now and 96–80 years ago. Thus, ABFTs do not represent a single panmictic population in the Mediterranean Sea. Statistics designed to infer changes in population size, both from current and past genetic variation, suggest that some Mediterranean ABFT populations, although still not severely reduced in their genetic potential, might have suffered from demographic declines. The short-term estimates of effective population size are straddled on the minimum threshold (effective population size = 500) indicated to maintain genetic diversity and evolutionary potential across several generations in natural populations. PMID:20080643

  20. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
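
    A toy sketch of the scale-by-scale recursion described above; the effective-medium rule used here is a deliberately simple placeholder, not the movable-cellular-automaton computation, and the modulus and porosity values are illustrative only.

```python
# Propagate effective elastic properties upward through pore scales:
# properties homogenized at one level seed the simulations at the next.
def upscale(E, porosity_per_level):
    """Toy homogenization: E_eff = E * (1 - 2*phi) at each pore scale."""
    for level, phi in enumerate(porosity_per_level, start=1):
        E = E * (1.0 - 2.0 * phi)   # placeholder effective-medium rule
        print(f"level {level}: phi = {phi:.2f}, E_eff = {E:.1f} GPa")
    return E

# bimodal pore size distribution: small pores homogenized first,
# then their effective medium carries the larger pores
upscale(E=210.0, porosity_per_level=[0.10, 0.05])
```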

  1. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Design and testing of a shrouded probe for airborne aerosol sampling in a high velocity airstream

    NASA Astrophysics Data System (ADS)

    Cain, Stuart Arthur

    1997-07-01

    Tropospheric aerosols play an important role in many phenomena related to global climate and climate change, and two important parameters, aerosol size distribution and concentration, have been the focus of a great deal of attention. To study these parameters, it is necessary to obtain a representative sample of the ambient aerosol using an airborne aerosol sampling probe mounted on a suitably equipped aircraft. Recently, however, serious questions have been raised (Huebert et al., 1990; Baumgardner et al., 1991) concerning the current procedures and techniques used in airborne aerosol sampling. We believe that these questions can be answered by: (1) use of a shrouded aerosol sampling probe, (2) proper aerodynamic sampler design using numerical simulation techniques, (3) calculation of the sampler calibration curve to be used in determining free-stream aerosol properties from measurements made with the sampler, and (4) wind tunnel tests to verify the design and investigate the performance of the sampler at small angles of attack (typical in airborne sampling applications due to wind gusts and aircraft fuel consumption). Our analysis is limited to the collection of insoluble particles representative of the global tropospheric 'background aerosol' (0.1-2.6 μm diameter) whose characteristics are least likely to be affected by the collection process. We begin with a survey of the most relevant problems associated with current airborne aerosol samplers and define the physical quantity that we wish to measure. This includes the derivation of a unique mathematical expression relating the free-stream aerosol size distribution to aerosol data obtained from the airborne measurements with the sampler. We follow with the presentation of the results of our application of Computational Fluid Dynamics (CFD) and Computational Particle Dynamics (CPD) to the design of a shrouded probe for airborne aerosol sampling of insoluble tropospheric particles in the size range 0.1 to 15 μm diameter at an altitude of 6069 m (20,000 ft) above sea level (asl). Our aircraft of choice is the National Center for Atmospheric Research (NCAR) EC-130 Geoscience Research aircraft, whose cruising speed at a sampling altitude of 6069 m asl is 100 m/s. We calculate the aspiration efficiency of the sampler and estimate the transmission efficiency of the diffuser probe based on particle trajectory simulations. We conclude by presenting the results of a series of qualitative and quantitative wind tunnel tests of the airflow through a plexiglass prototype of the sampler to verify our numerical simulations and predict the performance of the sampler at angles of attack from 0° to 15°.

  3. A lower bound on the number of cosmic ray events required to measure source catalogue correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolci, Marco; Romero-Wolf, Andrew; Wissel, Stephanie, E-mail: marco.dolci@polito.it, E-mail: Andrew.Romero-Wolf@jpl.nasa.gov, E-mail: swissel@calpoly.edu

    2016-10-01

    Recent analyses of cosmic ray arrival directions have yielded evidence for a positive correlation with active galactic nuclei positions, but only at weak significance against an isotropic source distribution. In this paper, we explore the sample size needed to measure a highly statistically significant correlation with a parent source catalogue. We compare several scenarios for the directional scattering of ultra-high energy cosmic rays given our current knowledge of the galactic and intergalactic magnetic fields. We find that significant correlations are possible for a sample of >1000 cosmic ray protons with energies above 60 EeV.
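
    The flavour of such a sample-size estimate can be sketched with a one-sided binomial power calculation: given the chance probability p0 that an isotropic event falls within the smearing angle of any catalogue source, and the correlated fraction p1 expected under a scattering scenario, the number of events needed for a 5-sigma detection follows from the standard two-proportion formula. The p0 and p1 values below are illustrative assumptions, not the paper's scattering scenarios.

```python
import numpy as np
from scipy.stats import norm

def events_needed(p0, p1, n_sigma=5.0, power=0.95):
    """Events required so a one-sided binomial test of the correlated
    fraction p1 rejects the isotropic chance probability p0 at n_sigma."""
    za, zb = n_sigma, norm.ppf(power)
    num = za * np.sqrt(p0 * (1 - p0)) + zb * np.sqrt(p1 * (1 - p1))
    return int(np.ceil((num / (p1 - p0)) ** 2))

# e.g. 15% of the sky within the smearing angle of a catalogue source, and
# magnetic deflection leaving 25% of protons still correlated:
print(events_needed(p0=0.15, p1=0.25))
```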

  4. Strain and thermally induced magnetic dynamics and spin current in magnetic insulators subject to transient optical grating

    NASA Astrophysics Data System (ADS)

    Wang, Xi-Guang; Chotorlishvili, Levan; Berakdar, Jamal

    2017-07-01

    We analyze the magnetic dynamics, and particularly the spin current, in an open-circuit ferromagnetic insulator irradiated by two intense, phase-locked laser pulses. The interference of the laser beams generates a transient optical grating and a transient spatio-temporal temperature distribution. Both effects lead to elastic and heat waves at the surface and into the bulk of the sample. The strain-induced spin current as well as the thermally induced magnonic spin current are evaluated numerically on the basis of micromagnetic simulations using solutions of the heat equation. We observe that the thermo-elastically induced magnonic spin current propagates over a distance larger than the characteristic size of the thermal profile, an effect useful for applications in remote detection of spin caloritronics phenomena. Our findings point out that exploiting strain adds a new twist to heat-assisted magnetic switching and spin-current generation for spintronic applications.

  5. Superconducting properties of nano-sized SiO2 added YBCO thick film on Ag substrate

    NASA Astrophysics Data System (ADS)

    Almessiere, Munirah Abdullah; Al-Otaibi, Amal lafy; Azzouz, Faten Ben

    2017-10-01

    The microstructure and the flux pinning capability of SiO2-added YBa2Cu3Oy thick films on Ag substrates were investigated. A series of YBa2Cu3Oy thick films with small amounts (0-0.5 wt%) of nano-sized SiO2 particles (12 nm) was prepared. The thickness of the prepared films was approximately 100 µm. Phase analysis by x-ray diffraction and microstructure examination by scanning electron microscopy were performed, and the dependence of the critical current density on the applied magnetic field, Jc(H), and the electrical resistivity ρ(T) were investigated. The magnetic field and temperature dependence of the critical current density (Jc) was calculated from magnetization measurements using Bean's critical state model. The results showed that the addition of a small amount (≤0.02 wt%) of SiO2 was effective in enhancing the critical current densities in an applied magnetic field. The sample with 0.01 wt% of added SiO2 exhibited superconducting characteristics under an applied magnetic field for temperatures ranging from 10 to 77 K.
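
    Bean's critical state model converts the width of the magnetization hysteresis loop into a critical current density. A minimal sketch for a rectangular slab geometry is below (CGS units, in the form commonly used for bulk and thick-film samples); the dimensions and moment are illustrative, not taken from this paper.

```python
def bean_jc(delta_m_emu, a_cm, b_cm, thickness_cm):
    """Critical current density (A/cm^2) from the magnetization hysteresis
    width via Bean's critical state model for a rectangular slab:
        Jc = 20 * dM / (a * (1 - a / (3b))),  with a <= b,
    where dM is the irreversible magnetization in emu/cm^3."""
    volume = a_cm * b_cm * thickness_cm
    dM = delta_m_emu / volume          # emu -> emu/cm^3
    return 20.0 * dM / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

# Illustrative numbers only (not from the paper):
print(f"Jc = {bean_jc(delta_m_emu=2e-3, a_cm=0.2, b_cm=0.3, thickness_cm=0.01):.3g} A/cm^2")
```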

  6. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches for using the tables are also discussed. PMID:27891446
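
    A minimal sketch of the widely used precision-based calculation (in the style of Buderer's formulas) is below; whether the PASS tables in the paper were generated from precision-based or power-based formulations is not specified here, so treat the functions as illustrative. The diseased (or healthy) subgroup size is scaled up by the expected prevalence to give the total n.

```python
import math
from scipy.stats import norm

def n_for_sensitivity(se, prevalence, ci_width=0.10, alpha=0.05):
    """Minimum total n so sensitivity is estimated to within +/- ci_width/2
    at confidence 1 - alpha; only diseased subjects inform sensitivity."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = (z ** 2) * se * (1 - se) / (ci_width / 2) ** 2
    return math.ceil(n_diseased / prevalence)

def n_for_specificity(sp, prevalence, ci_width=0.10, alpha=0.05):
    """Same idea for specificity, which only healthy subjects inform."""
    z = norm.ppf(1 - alpha / 2)
    n_healthy = (z ** 2) * sp * (1 - sp) / (ci_width / 2) ** 2
    return math.ceil(n_healthy / (1 - prevalence))

print(n_for_sensitivity(se=0.90, prevalence=0.20))   # 692
print(n_for_specificity(sp=0.85, prevalence=0.20))   # 245
```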

  7. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources against the ability to detect a real effect is difficult. This study focussed on two-arm, parallel-group, superiority RCTs published in six general anesthesiology journals, with the objective of comparing the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were performed only for sample size calculations that were amenable to replication, defined as using a clearly identified continuous or binary outcome in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.
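
    The replication exercise described above rests on the standard two-arm formula, and the sketch below shows why optimistic effect sizes matter: halving the assumed difference roughly quadruples the required n, so trials powered for effects larger than those actually observed end up underpowered. The numbers are illustrative.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-arm, parallel-group superiority trial with a
    continuous endpoint (two-sided alpha), via the normal approximation:
        n = 2 * (z_{1-a/2} + z_{1-b})^2 * sd^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

# Halving the assumed effect roughly quadruples the required n:
print(n_per_group(delta=10, sd=20))   # 63
print(n_per_group(delta=5,  sd=20))   # 252
```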

  8. Influence of Cu-Cr substitution on structural, morphological, electrical and magnetic properties of magnesium ferrite

    NASA Astrophysics Data System (ADS)

    Yonatan Mulushoa, S.; Murali, N.; Tulu Wegayehu, M.; Margarette, S. J.; Samatha, K.

    2018-03-01

    Cu-Cr substituted magnesium ferrite materials (Mg1-xCuxCrxFe2-xO4 with x = 0.0-0.7) have been synthesized by the solid state reaction method. XRD analysis revealed that the prepared samples are single-phase cubic spinels with a face-centered cubic structure. The particle size decreases significantly, by ∼41.15 nm, as the Cu-Cr substitution level increases. The room temperature resistivity increases gradually from 0.553 × 10^5 Ω cm (x = 0.0) to 0.105 × 10^8 Ω cm (x = 0.7). The temperature-dependent DC electrical resistivity of all the samples exhibits semiconductor-like behavior. The Cu-Cr doped materials can thus be suitable for limiting eddy current losses. VSM results show that both pure and doped magnesium ferrite particles exhibit soft ferrimagnetic behavior at room temperature. The saturation magnetization of the samples decreases from 34.5214 emu/g (x = 0.0) to 18.98 emu/g (x = 0.7). Saturation magnetization, remanence, and coercivity all decrease with doping, which may be due to the increase in grain size.

  9. Scanning SQUID susceptometers with sub-micron spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.

    Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field-sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white-noise-floor flux sensitivity of ≈2 μΦ0/Hz^(1/2). This high spatial resolution is accomplished by deep sub-micron feature sizes, well-shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.

  10. Tunable UV- and Visible-Light Photoresponse Based on p-ZnO Nanostructures/n-ZnO/Glass Peppered with Au Nanoparticles.

    PubMed

    Hsu, Cheng-Liang; Lin, Yu-Hong; Wang, Liang-Kai; Hsueh, Ting-Jen; Chang, Sheng-Po; Chang, Shoou-Jinn

    2017-05-03

    UV- and visible-light photoresponse was achieved via p-type K-doped ZnO nanowires and nanosheets that were hydrothermally synthesized on an n-ZnO/glass substrate and peppered with Au nanoparticles. The K content of the p-ZnO nanostructures was 0.36 atom %. The UV- and visible-light photoresponse of the p-ZnO nanostructures/n-ZnO sample was roughly 2 times higher than that of the ZnO nanowires. Au nanoparticles of various densities and diameters were deposited on the p-ZnO nanostructures/n-ZnO samples by a simple UV photochemical reaction method, yielding a tunable and enhanced UV- and visible-light photoresponse. The maximum UV and visible photoresponse was obtained when the Au nanoparticle diameters were approximately 5-35 nm. On the basis of the localized surface plasmon resonance effect, the UV, blue, and green photocurrent/dark-current ratios of Au nanoparticle/p-ZnO nanostructures/n-ZnO are ∼1165, ∼94.6, and ∼9.7, respectively.

  11. Problems in determining the surface density of the Galactic disk

    NASA Technical Reports Server (NTRS)

    Statler, Thomas S.

    1989-01-01

    A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
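
    The quoted error budget combines in quadrature, which is easy to check; note that the exact 20 percent figure quoted for a tripled sample depends on details of the original analysis, since scaling only the sampling term by 1/sqrt(3) gives closer to 23 percent.

```python
from math import sqrt

sample, rotation, shape = 25.0, 15.0, 10.0       # percent errors quoted above
total = sqrt(sample**2 + rotation**2 + shape**2)
print(f"current total: {total:.0f}%")            # ~31%, i.e. "at least 30 percent"

# Tripling the stellar sample shrinks only the sampling term, by sqrt(3):
sample3 = sample / sqrt(3)
print(f"with 3x sample: {sqrt(sample3**2 + rotation**2 + shape**2):.0f}%")
```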

  12. Current status and future challenges in T-cell receptor/peptide/MHC molecular dynamics simulations.

    PubMed

    Knapp, Bernhard; Demharter, Samuel; Esmaielbeiki, Reyhaneh; Deane, Charlotte M

    2015-11-01

    The interaction between T-cell receptors (TCRs) and major histocompatibility complex (MHC)-bound epitopes is one of the most important processes in the adaptive human immune response. Several hypotheses on TCR triggering have been proposed, many of which involve structural and dynamical adjustments in the TCR/peptide/MHC interface. Molecular Dynamics (MD) simulation is a computational technique used to investigate structural dynamics at atomic resolution, and such simulations are used to improve understanding of signalling on a structural level. Here we review how MD simulations of the TCR/peptide/MHC complex have given insight into immune system reactions not achievable with current experimental methods. Firstly, we summarize methods of TCR/peptide/MHC complex modelling and of TCR/peptide/MHC MD trajectory analysis. Then we classify recently published simulations into categories and give an overview of approaches and results. We show that current studies do not come to the same conclusions about TCR/peptide/MHC interactions. This discrepancy might be caused by too-small sample sizes or by intrinsic differences between the interaction processes. As computational power increases, future studies can and should include larger sample sizes, longer runtimes, and additional parts of the immunological synapse. © The Author 2015. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

  13. A dataset of housing market and self-attitudes towards housing location choices in Alexandria, Egypt.

    PubMed

    Ibrahim, Mohamed R

    2017-04-01

    A survey, of sample size 224, was designed to cover the different factors related to housing location choice, such as socioeconomic factors, housing characteristics, travel behavior, current self-selection factors, housing demand, and future location preferences. It comprises 16 questions, categorized into three sections: socioeconomic (5 questions), current dwelling unit characteristics (7 questions), and housing demand characteristics (4 questions). The first part, socioeconomic, covers basic information about the respondent, such as age, gender, marital status, employment, and car ownership. The second part, current dwelling unit characteristics, covers different aspects of the residential unit typology, financial aspects, and travel behavior of the respondent. It includes the tenure type of the residential unit, an estimate of the unit price (in the case of ownership or renting), housing typology, the main reason for choosing the unit, the modes of travel to work and the time to reach it (where applicable), residential mobility in the last decade, and the ownership of any other residential units. The last part, housing demand characteristics, covers the size of the demand for a residential unit, the preference for living in a certain area and the reason for choosing it, and the preferred tenure of the residential unit. This survey is a representative sample of the population of Alexandria, Egypt. The data in this article are presented in: How do people select their residential locations in Egypt? The case of Alexandria; JCIT1757.

  14. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N greater than 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
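
    A minimal sketch of the kind of data generation used in such simulations is below: dichotomous responses drawn from the Rasch model, P(X=1) = exp(theta - b) / (1 + exp(theta - b)), for a 25-item scale with item difficulties targeted to the person distribution. The parameters are illustrative rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_persons = 25, 500
b = np.linspace(-2, 2, n_items)        # item difficulties targeted to the sample
theta = rng.normal(0, 1, n_persons)    # person abilities

# Dichotomous Rasch model probabilities and simulated responses
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
data = (rng.random((n_persons, n_items)) < p).astype(int)
print(data.shape, data.mean())
```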

  15. A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlou, Andrew Theodore; Brown, Forrest B.; Ji, Wei

    2014-09-02

    At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large, on the order of megabytes for a single temperature and material. In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperature intervals has been eliminated. This is advantageous for multiphysics simulations, which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite in two benchmark problems at ten temperatures: (1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, and (2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm at T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option that eliminates both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.

  16. Problems with sampling desert tortoises: A simulation analysis based on field data

    USGS Publications Warehouse

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990, based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimator and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
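
    The Schnabel estimator used above pools recaptures across occasions. The sketch below shows the standard form, N_hat = sum_t(C_t * M_t) / sum_t(R_t), and illustrates (with toy numbers, not the study's data) why few recaptures make the estimate unstable.

```python
def schnabel(catches, recaptures):
    """Schnabel multi-occasion mark-recapture estimate of population size.
    catches[t]    = number of animals caught on occasion t
    recaptures[t] = how many of those were already marked
    M_t grows as newly caught animals are marked and released."""
    marked_at_large, num, den = 0, 0.0, 0
    for c, r in zip(catches, recaptures):
        num += c * marked_at_large
        den += r
        marked_at_large += c - r      # newly marked animals join the pool
    return num / den if den else float("inf")

# Toy survey: sparse recaptures (as on the 0.25-km2 subplots) blow up N_hat
print(schnabel(catches=[12, 15, 10, 14], recaptures=[0, 3, 4, 5]))
```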

  17. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical assumptions built into the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
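
    Error rates of this kind are straightforward to reproduce by simulation. The sketch below estimates the Type I and Type II error of the two-sample t-test for a few small n; the effect size and error definitions are illustrative and simpler than the paper's (which also varies distributions and exposure effectiveness), so the exact thresholds will differ.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def error_rates(n, effect, sims=5_000, alpha=0.05):
    """Monte Carlo Type I and Type II error of the two-sample t-test for
    normal data with unit SD; 'effect' is the true mean shift."""
    null = np.array([ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
                     for _ in range(sims)])
    alt = np.array([ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue
                    for _ in range(sims)])
    return (null < alpha).mean(), (alt >= alpha).mean()

for n in (3, 6, 9):
    t1, t2 = error_rates(n, effect=1.5)
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```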

  18. Evaluation of respondent-driven sampling.

    PubMed

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as the HIV prevalence among sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available for a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected, nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling, and caution is required when interpreting findings based on this sampling method.
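
    One of the standard RDS inference methods evaluated in studies like this is the Volz-Heckathorn (RDS-II) estimator, which reweights each recruit by the inverse of their reported network size. A minimal sketch with toy data (not the Uganda cohort) is below; it shows how the weighted estimate can move away from the raw sample proportion when an outcome concentrates among high-degree respondents.

```python
def rds_ii(outcomes, degrees):
    """Volz-Heckathorn (RDS-II) estimator: a degree-weighted proportion.
    Each recruit is weighted by 1/degree to offset the higher inclusion
    probability of well-connected individuals."""
    inv = [1.0 / d for d in degrees]
    return sum(y * w for y, w in zip(outcomes, inv)) / sum(inv)

# Toy data: raw sample proportion vs degree-weighted estimate
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]        # e.g. outcome status of recruits
degrees  = [20, 5, 30, 25, 4, 6, 18, 5]    # reported network sizes
print(sum(outcomes) / len(outcomes), rds_ii(outcomes, degrees))
```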

  19. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low-transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates, not for these alternative malaria burden measures. SP is in essence a proportion, and thus many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the first to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision; the opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low-transmission settings (SCR ≤ 0.0108), in which case the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
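
    The first calculator's idea, transforming a confidence interval for SP into one for SCR under a known SRR, can be sketched with the reversible catalytic model SP(a) = λ/(λ+ρ) · (1 − e^(−(λ+ρ)a)). The simplification below uses a single representative age rather than the full age distribution the paper accounts for, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def seroprevalence(scr, srr, age):
    """Reversible catalytic model: SP at a given age for seroconversion
    rate scr (lambda) and seroreversion rate srr (rho)."""
    return scr / (scr + srr) * (1 - np.exp(-(scr + srr) * age))

def scr_ci(sp_hat, n, age, srr, alpha=0.05):
    """Build a Wald CI for SP, then invert the catalytic model (known srr,
    single mean age) to map each bound onto the SCR scale."""
    z = norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(sp_hat * (1 - sp_hat) / n)
    return tuple(
        brentq(lambda lam: seroprevalence(lam, srr, age) - sp, 1e-8, 10.0)
        for sp in (sp_hat - half, sp_hat + half)
    )

print(scr_ci(sp_hat=0.30, n=250, age=20.0, srr=0.01))
```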

  20. JPRS Report, West Europe.

    DTIC Science & Technology

    1988-02-03

    hint interestingly at current moods, especially if the data are combined with other surveys conducted by Economy Research, Inc., for UUSI SUOMI in...likewise a problem, especially if a long time has elapsed since the previous election. Comparison is also made difficult by different-sized samples... especially, for worse; all of them are awaiting their chance. And that fact—that is, the existence of a majority composed of men who are constantly

  1. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
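
    The estimators compared above can be sketched on synthetic data: an expansion (simple) estimator versus a ratio estimator with unit area as the auxiliary variable. In this toy population the counts scale with area, which favours the ratio estimator; the paper found little gain in practice because pronghorn were clumped rather than area-proportional.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic population of sampling units: counts y correlated with unit area x
n_units = 100
x = rng.uniform(2.0, 6.0, n_units)        # unit areas
y = rng.poisson(3.0 * x)                  # animal counts per unit
X_total, Y_total = x.sum(), y.sum()

# Simple random sample of ~33% of units, without replacement
idx = rng.choice(n_units, size=33, replace=False)
xs, ys = x[idx], y[idx]

simple_est = n_units * ys.mean()                 # expansion estimator
ratio_est = (ys.sum() / xs.sum()) * X_total      # ratio estimator (area auxiliary)
print(f"truth {Y_total}, simple {simple_est:.0f}, ratio {ratio_est:.0f}")
```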

  2. Rationale and design of the IMPACT EU-trial: improve management of heart failure with procalcitonin biomarkers in cardiology (BIC)-18.

    PubMed

    Möckel, Martin; Slagman, Anna; Vollert, Jörn Ole; Ebmeyer, Stefan; Wiemer, Jan C; Searle, Julia; Giannitsis, Evangelos; Kellum, John A; Maisel, Alan

    2018-02-01

    To evaluate the effectiveness of procalcitonin (PCT)-guided antibiotic treatment, compared with current treatment practice, in reducing 90-day all-cause mortality in emergency patients with shortness of breath (SOB) and suspected acute heart failure (AHF). Concomitant AHF and lower respiratory tract (or other bacterial) infection in emergency patients with dyspnea are common and can be difficult to diagnose. Early and adequate initiation of antibiotic therapy (ABX) significantly improves patient outcome, but superfluous prescription of ABX may be harmful. In a multicentre, prospective, randomized, controlled process trial with an open intervention, adult emergency patients with SOB and increased levels of natriuretic peptides will be randomized to either a standard care group or a PCT-guided group with respect to the initiation of antibiotic treatment. In the PCT-guided group, the initiation of antibiotic therapy is based on the results of acute PCT measurements at admission, using a cut-off of 0.2 ng/ml. A two-stage sample-size adaptive design is used; an interim analysis was done after completion of 50% of patients, and the final sample size remained unchanged. The primary endpoint is 90-day all-cause mortality. The current study will provide evidence on whether the routine use of PCT in patients with suspected AHF improves outcome.

  3. Analysis of work ability and work-related physical activity of employees in a medium-sized business.

    PubMed

    Wilke, Christiane; Ashton, Philip; Elis, Tobias; Biallas, Bianca; Froböse, Ingo

    2015-12-18

    Work-related physical activity (PA) and work ability are of growing importance in modern working society. There is evidence for age- and job-related differences regarding PA and work ability. This study analyses the work ability and work-related PA of employees in a medium-sized business with regard to age and occupation. The total sample consists of 148 employees (116 men, 78.38% of the sample, and 32 women, 21.62%; mean age: 40.85 ± 10.07 years). Of these, 100 subjects (67.57%) are white-collar workers (WC) and 48 (32.43%) are blue-collar workers (BC). Work ability is measured using the Work Ability Index, and physical activity is obtained via the Global Physical Activity Questionnaire. Work ability shows significant differences regarding occupation (p = 0.001) but not regarding age. Further, significant differences are found for work-related PA concerning occupation (p < 0.0001), but again not for age. Overall, more than half of all subjects meet the current guidelines for physical activity. Work ability is rated as good; yet a special focus should lie on its promotion during early and late working life. Also, there is still a lack of evidence on the level of work-related PA. Considering work-related PA could add to meeting current activity recommendations.

  4. Free flux flow: a probe into the field dependence of vortex core size in clean single crystals

    NASA Astrophysics Data System (ADS)

    Gapud, A. A.; Gafarov, O.; Moraes, S.; Thompson, J. R.; Christen, D. K.; Reyes, A. P.

    2012-02-01

    The free-flux-flow (FFF) phase has been attained successfully in a number of clean, weak-pinning, low-anisotropy, low-Tc single-crystal samples as a unique probe into type II superconductivity that is independent of composition. The "clean" quality of the samples has been confirmed by reversible magnetization, a high residual resistivity ratio, and low critical current densities Jc with a re-entrant "peak" effect in Jc(H) just below the critical field Hc2. The necessity of high current densities presented technical challenges that have been successfully addressed, and FFF is confirmed by a field-dependent ohmic state well below the normal state. In these studies, the FFF resistivity ρf(H) has been measured in order to observe the field-dependent core size of the quantized magnetic flux vortices, as modeled recently by Kogan and Zelezhina (KZ), who predicted a specific deviation from Bardeen-Stephen flux flow dependent on normalized temperature and a scattering parameter λ. The compounds studied are V3Si, LuNi2B2C, and NbSe2, and results have shown consistency with the KZ model. Other applications of this method could also be used to probe normal-state properties, especially for the new iron arsenides, as will be discussed.
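
    The Bardeen-Stephen baseline against which the KZ deviation is measured is simply a flux-flow resistivity linear in field. A one-line sketch, with placeholder order-of-magnitude values rather than data from these crystals:

```python
def bardeen_stephen_rho_f(B, rho_n, B_c2):
    """Bardeen-Stephen free-flux-flow resistivity: rho_f ~ rho_n * B / Bc2.
    The KZ model discussed above predicts deviations from this linearity
    through a field-dependent vortex core size."""
    return rho_n * B / B_c2

# Placeholder values (normal-state resistivity and upper critical field):
for B in (0.5, 1.0, 2.0, 4.0):
    print(B, bardeen_stephen_rho_f(B, rho_n=1e-7, B_c2=8.0))
```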

  5. Assessing macroinvertebrate biodiversity in freshwater ecosystems: Advances and challenges in dna-based approaches

    USGS Publications Warehouse

    Pfrender, M.E.; Ferrington, L.C.; Hawkins, C.P.; Hartzell, P.L.; Bagley, M.; Jackson, S.; Courtney, G.W.; Larsen, D.P.; Creutzburg, B.R.; Levesque, C.A.; Epler, J.H.; Morse, J.C.; Fend, S.; Petersen, M.J.; Ruiter, D.; Schindel, D.; Whiting, M.

    2010-01-01

    Assessing the biodiversity of macroinvertebrate fauna in freshwater ecosystems is an essential component of both basic ecological inquiry and applied ecological assessments. Aspects of taxonomic diversity and composition in freshwater communities are widely used to quantify water quality and measure the efficacy of remediation and restoration efforts. The accuracy and precision of biodiversity assessments based on standard morphological identifications are often limited by taxonomic resolution and sample size. Morphologically based identifications are laborious and costly, significantly constraining the sample sizes that can be processed. We suggest that the development of an assay platform based on DNA signatures will increase the precision and ease of quantifying biodiversity in freshwater ecosystems. Advances in this area will be particularly relevant for benthic and planktonic invertebrates, which are often monitored by regulatory agencies. Adopting a genetic assessment platform will alleviate some of the current limitations to biodiversity assessment strategies. We discuss the benefits and challenges associated with DNA-based assessments and the methods that are currently available. As recent advances in microarray and next-generation sequencing technologies will facilitate a transition to DNA-based assessment approaches, future research efforts should focus on methods for data collection, assay platform development, establishing linkages between DNA signatures and well-resolved taxonomies, and bioinformatics. © 2010 by The University of Chicago Press.

  6. Evolution of mechanical properties of ultrafine grained 1050 alloy annealing with electric current

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yiheng; He, Lizi, E-mail: helizi@epm.neu.edu.cn; Zhang, Lin

    2016-03-15

    The tensile properties and microstructures of 1050 aluminum alloy prepared by equal channel angular pressing at cryogenic temperature (cryoECAP), after electric current annealing at 90–210 °C for 3 h, were investigated by tensile testing, electron backscatter diffraction (EBSD), and transmission electron microscopy (TEM). An unexpected annealing-induced strengthening phenomenon occurs at 90–210 °C, due to a significant decrease in the density of mobile dislocations after annealing, such that a higher yield stress is required to nucleate alternative dislocation sources during the tensile test. The electric current can enhance the motion of dislocations, leading to a lower dislocation density at 90–150 °C, and thus shifts the peak annealing temperature from 150 °C to 120 °C. Moreover, the electric current can promote the migration of grain boundaries at 150–210 °C, resulting in a larger grain size at 150 °C and 210 °C, and thus causes a lower yield stress. The sample annealed with electric current has a lower uniform elongation at 90–120 °C, and the deviation in uniform elongation between samples annealed without and with electric current becomes smaller at 150–210 °C. Highlights: • An unexpected annealing-induced strengthening phenomenon occurs at 90–210 °C. • The d.c. current can enhance the motion of dislocations at 90–150 °C, and thus shift the peak annealing temperature from 150 °C to 120 °C. • The d.c. current can promote grain growth at 150–210 °C, and thus cause a lower yield stress. • The DC-annealed sample has a lower uniform elongation at 90–120 °C.

  7. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service-attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
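
    The estimator N = M/P and its sensitivity to a small P can be sketched with a delta-method confidence interval, inflating the variance of P by an assumed RDS design effect. All numbers below (M, P, n, and deff = 2) are illustrative, not the Harare estimates.

```python
import numpy as np
from scipy.stats import norm

def multiplier_estimate(M, p_hat, n, deff=2.0, alpha=0.05):
    """Multiplier-method population size estimate N = M / P with a
    delta-method CI; the RDS design effect inflates var(p_hat)."""
    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n
    se_N = M * np.sqrt(var_p) / p_hat ** 2     # delta method: |dN/dp| * se(p)
    z = norm.ppf(1 - alpha / 2)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# Unique objects distributed M=500; 20% of an RDS sample of n=400 report receipt
print(multiplier_estimate(M=500, p_hat=0.20, n=400))
```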

  8. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and we use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, showing that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and it is a flexible alternative and useful complement to existing methods. Comparison indicated that, under some conditions, the relative efficiency defined here is greater than the relative efficiency in the literature, which may therefore underestimate the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
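
    A common way to see the cost of cluster-size variability is through the design effect: with equal clusters it is 1 + (m − 1)ρ, and a widely used approximation for variable cluster sizes (due to Eldridge and colleagues) replaces m by (CV² + 1)m. This is related to, but not the same as, the noncentrality-based relative efficiency the paper defines, so the sketch below is only an illustration of the phenomenon, with assumed inputs.

```python
import numpy as np
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.80):
    """Per-arm n for an individually randomized two-group comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def clustered_total(delta, sd, m_bar, icc, cv=0.0, **kw):
    """Per-arm n inflated by the design effect; with variable cluster sizes
    the usual 1 + (m - 1)*icc grows to 1 + ((cv^2 + 1)*m - 1)*icc."""
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc
    return int(np.ceil(n_individual(delta, sd, **kw) * deff))

print(clustered_total(delta=0.3, sd=1.0, m_bar=20, icc=0.05, cv=0.0))   # equal clusters
print(clustered_total(delta=0.3, sd=1.0, m_bar=20, icc=0.05, cv=0.6))   # variable sizes
```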

  9. Variable-Size Bead Layer as Standard Reference for Endothelial Microscopes.

    PubMed

    Tufo, Simona; Prazzoli, Erica; Ferraro, Lorenzo; Cozza, Federica; Borghesi, Alessandro; Tavazzi, Silvia

    2017-02-01

    For morphometric analysis of the cell mosaic of the corneal endothelium, checking the accuracy and precision of instrumentation is a key step. In this study, a standard reference sample is proposed, developed to reproduce the cornea with its shape and the endothelium with its intrinsic variability in cell size. A polystyrene bead layer (representing the endothelium) was deposited on a lens (representing the cornea). Bead diameters were 20, 25, and 30 μm (number fractions 55%, 30%, and 15%, respectively). Bead density and hexagonality were simulated to obtain the expected true values, and measured using a slit-lamp endothelial microscope applied to (1) a Takagi 700GL slit lamp at 40× magnification (recommended standard setup) and (2) a Takagi 2ZL slit lamp at 25× magnification. The simulation provided an expected bead density of 2001 mm⁻² and hexagonality of 47%. At 40×, density and hexagonality were measured to be 2009 mm⁻² (SD 93 mm⁻²) and 45% (SD 3%). At 25× on a different slit lamp, the comparison between measured and expected densities provided the factor 1.526 to resize the image and thereby use the current algorithms of the slit-lamp endothelial microscope for cell recognition. A variable-size polystyrene bead layer on a lens is proposed as a standard sample mimicking the real shape of the cornea and the variability of cell size and cell arrangement of the corneal endothelium. The sample is suggested for evaluating the accuracy and precision of cell density and hexagonality obtained by different endothelial microscopes, including a slit-lamp endothelial microscope applied to different slit lamps, also at different magnifications.

  10. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    NASA Astrophysics Data System (ADS)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using the Geometrical Optics Method (GOM). We analyzed seven snow samples, including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius reff were compared with the size-averaged single-scattering properties. We found that the particles of reff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of the particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results for the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  11. EXTENDING THE FLOOR AND THE CEILING FOR ASSESSMENT OF PHYSICAL FUNCTION

    PubMed Central

    Fries, James F.; Lingala, Bharathi; Siemons, Liseth; Glas, Cees A. W.; Cella, David; Hussain, Yusra N; Bruce, Bonnie; Krishnan, Eswar

    2014-01-01

    Objective The objective of the current study was to improve the assessment of physical function by improving the precision of assessment at the floor (extremely poor function) and at the ceiling (extremely good health) of the health continuum. Methods Under the NIH PROMIS program, we developed new physical function floor and ceiling items to supplement the existing item bank. Using item response theory (IRT) and the standard PROMIS methodology, we developed 30 floor items and 26 ceiling items and administered them during a 12-month prospective observational study of 737 individuals at the extremes of health status. Change over time was compared across anchor instruments and across items by means of effect sizes. Using the observed changes in scores, we back-calculated sample size requirements for the new and comparison measures. Results We studied 444 subjects with chronic illness and/or extreme age, and 293 generally fit subjects, including athletes in training. IRT analyses confirmed that the new floor and ceiling items outperformed reference items (p<0.001). The estimated post-hoc sample size requirements were reduced by a factor of two to four at the floor and a factor of two at the ceiling. Conclusion Extending the range of physical function measurement can substantially improve measurement quality, reduce sample size requirements, and improve research efficiency. The paradigm shift from Disability to Physical Function includes the entire spectrum of physical function, signals improvement in the conceptual base of outcome assessment, and may be transformative as medical goals more closely approach societal goals for health. PMID:24782194

  12. Public attitudes toward larger cigarette pack warnings: Results from a nationally representative U.S. sample

    PubMed Central

    2017-01-01

    A large body of evidence supports the effectiveness of larger health warnings on cigarette packages. However, there is limited research examining attitudes toward such warning labels, which has potential implications for implementation of larger warning labels. The purpose of the current study was to examine attitudes toward larger warning sizes on cigarette packages and examine variables associated with more favorable attitudes. In a nationally representative survey of U.S. adults (N = 5,014), participants were randomized to different warning size conditions, assessing attitude toward “a health warning that covered (25, 50, 75) % of a cigarette pack.” SAS logistic regression survey procedures were used to account for the complex survey design and sampling weights. Across experimental groups, nearly three-quarters (72%) of adults had attitudes supportive of larger warning labels on cigarette packs. Among the full sample and smokers only (N = 1,511), most adults had favorable attitudes toward labels that covered 25% (78.2% and 75.2%, respectively), 50% (70% and 58.4%, respectively), and 75% (67.9% and 61%, respectively) of a cigarette pack. Young adults, females, racial/ethnic minorities, and non-smokers were more likely to have favorable attitudes toward larger warning sizes. Among smokers only, females and those with higher quit intentions held more favorable attitudes toward larger warning sizes. Widespread support exists for larger warning labels on cigarette packages among U.S. adults, including among smokers. Our findings support the implementation of larger health warnings on cigarette packs in the U.S. as required by the 2009 Tobacco Control Act. PMID:28253257

  13. On the Contribution of Curl-Free Current Patterns to the Ultimate Intrinsic Signal-to-Noise Ratio at Ultra-High Field Strength.

    PubMed

    Pfrommer, Andreas; Henning, Anke

    2017-05-01

    The ultimate intrinsic signal-to-noise ratio (SNR) is a coil-independent performance measure for comparing different receive coil designs. To evaluate this benchmark in a sample, a complete electromagnetic basis set is required. The basis set can be obtained from curl-free and divergence-free surface current distributions, which excite linearly independent solutions to Maxwell's equations. In this work, we quantitatively investigate the contribution of curl-free current patterns to the ultimate intrinsic SNR in a spherical head-sized model at 9.4 T. We therefore compare the ultimate intrinsic SNR obtained with only curl-free or only divergence-free current patterns to the ultimate intrinsic SNR obtained from a combination of curl-free and divergence-free current patterns. The influence of parallel imaging is studied for various acceleration factors. Moreover, results for different field strengths (1.5 T up to 11.7 T) are presented at specific voxel positions and acceleration factors. The full-wave electromagnetic problem is analytically solved using dyadic Green's functions. We show that at ultra-high field strength (B0 ≥ 7 T) a combination of curl-free and divergence-free current patterns is required to achieve the best possible SNR at any position in a spherical head-sized model. On 1.5- and 3-T platforms, divergence-free current patterns are sufficient to cover more than 90% of the ultimate intrinsic SNR. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Indicators of quality of antenatal care: a pilot study.

    PubMed

    Vause, S; Maresh, M

    1999-03-01

    To pilot a list of indicators of quality of antenatal care across a range of maternity care settings, and, for each indicator, to determine what is achieved in current clinical practice, to facilitate the setting of audit standards and the calculation of appropriate sample sizes for audit. A multicentre retrospective observational study. Nine maternity units in the United Kingdom. 20,771 women with a singleton pregnancy who were delivered between 1 August 1994 and 31 July 1995. Nine of the eleven suggested indicators were successfully piloted; two indicators require further development. In seven of the nine hospitals, external cephalic version was not commonly performed. There were wide variations in the proportions of women screened for asymptomatic bacteriuria. Screening of women from ethnic minorities for haemoglobinopathy was more likely in hospitals with a large proportion of non-Caucasian women. A large number of Rhesus-negative women did not have a Rhesus antibody check performed after 28 weeks of gestation and did not receive anti-D immunoglobulin after a potentially sensitising event during pregnancy. As a result of the study, appropriate sample sizes for future audit could be calculated. Measuring the extent to which evidence-based interventions are used in routine clinical practice provides a more detailed picture of the strengths and weaknesses in an antenatal service than traditional outcomes such as perinatal mortality rates. Awareness of an appropriate sample size should prevent the waste of time and resources on inconclusive audits.

  15. Retention of Ancestral Genetic Variation Across Life-Stages of an Endangered, Long-Lived Iteroparous Fish.

    PubMed

    Carson, Evan W; Turner, Thomas F; Saltzgiver, Melody J; Adams, Deborah; Kesner, Brian R; Marsh, Paul C; Pilger, Tyler J; Dowling, Thomas E

    2016-11-01

    As with many endangered, long-lived, iteroparous fishes, survival of the razorback sucker depends on a management strategy that circumvents the recruitment failure that results from predation by non-native fishes. In Lake Mohave, AZ-NV, management of razorback sucker centers on capturing larvae spawned in the lake, rearing them in off-channel habitats, and subsequently releasing ("repatriating") them to the lake once the adults are sufficiently large to resist predation. The effects of this strategy on genetic diversity, however, remained uncertain. After correction for differences in sample size among groups, metrics of mitochondrial DNA (mtDNA; number of haplotypes, NH, and haplotype diversity, HD) and microsatellite (number of alleles, NA, and expected heterozygosity, HE) diversity did not differ significantly between annual samples of repatriated adults and larval year-classes, or among pooled samples of repatriated adults, larvae, and wild fish. These findings indicate that the current management program has thus far maintained the historical genetic variation of razorback sucker in the lake. Because the effective population size, Ne, is closely tied to the small census population size (Nc = ~1500-3000) of razorback sucker in Lake Mohave, this population will remain at genetic as well as demographic risk of extinction unless Nc is increased substantially. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Evaluation of multiple-frequency, active and passive acoustics as surrogates for bedload transport

    USGS Publications Warehouse

    Wood, Molly S.; Fosness, Ryan L.; Pachman, Gregory; Lorang, Mark; Tonolla, Diego

    2015-01-01

    The use of multiple-frequency, active acoustics through deployment of acoustic Doppler current profilers (ADCPs) shows potential for estimating bedload in selected grain size categories. The U.S. Geological Survey (USGS), in cooperation with the University of Montana (UM), evaluated the use of multiple-frequency, active and passive acoustics as surrogates for bedload transport during a pilot study on the Kootenai River, Idaho, May 17-18, 2012. Four ADCPs with frequencies ranging from 600 to 2000 kHz were used to measure apparent moving bed velocities at 20 stations across the river in conjunction with physical bedload samples. Additionally, UM scientists measured the sound frequencies of moving particles with two hydrophones (passive acoustics) along longitudinal transects in the study reach. Some patterns that show promise for future studies emerged in the preliminary analysis. Statistically significant relations were developed between apparent moving bed velocities measured by the 1000 and 1200 kHz ADCPs and bedload in the 0.5 to 2.0 mm grain size categories. The 600 kHz ADCP appeared somewhat sensitive to the movement of gravel bedload in the 8.0 to 31.5 mm size range, but the relation was not statistically significant. The passive hydrophone surveys corroborated the sample results and could be used to map spatial variability in bedload transport and to select a measurement cross-section with moving bedload for active acoustic surveys and physical samples.

  17. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery by exploring the advertising effectiveness of average-size female fashion models and their impact on the body image of both women and men. A sample of 171 women and 120 men was assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as being as effective in advertisements as thin models and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  18. Size-amplified acoustofluidic separation of circulating tumor cells with removable microbeads

    NASA Astrophysics Data System (ADS)

    Liu, Huiqin; Ao, Zheng; Cai, Bo; Shu, Xi; Chen, Keke; Rao, Lang; Luo, Changliang; Wang, Fu-Bin; Liu, Wei; Bondesson, Maria; Guo, Shishang; Guo, Feng

    2018-06-01

    Isolation and analysis of rare circulating tumor cells (CTCs) is of great interest in cancer diagnosis, prognosis, and treatment efficacy evaluation. Acoustofluidic cell separation is an attractive method due to its contactless, noninvasive, simple, and versatile features. However, the indistinct physical difference between CTCs and normal blood cells limits the purity of CTCs obtained with current acoustic methods. Herein, we demonstrate size-amplified acoustic separation and release of CTCs with removable microbeads. CTCs selectively bound to size-amplifiers (40 μm-diameter anti-EpCAM/gelatin-coated SiO2 microbeads) have significant physical differences (size and mechanics) compared to normal blood cells, resulting in an amplification of the acoustic radiation force approximately a hundredfold over that of bare CTCs or normal blood cells. Therefore, CTCs can be efficiently sorted out with size-amplifiers in a traveling surface acoustic wave microfluidic device and released from the size-amplifiers by enzymatic degradation for further purification or downstream analysis. We demonstrate cell separation from blood samples with a total efficiency (Etotal) of ∼77%, purity (P) of ∼96%, and viability (V) of ∼83% after releasing cells from size-amplifiers. Our method substantially improves the emerging application of rare cell purification for translational medicine.
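    The size-amplification effect is, to first order, volumetric: the primary acoustic radiation force scales with particle volume at fixed acoustic contrast. A toy calculation under that simplification (the diameters are illustrative, and real bead-cell complexes also differ in density and compressibility, which the paper exploits as well):

    ```python
    def force_ratio(d1_um, d2_um):
        # ratio of primary acoustic radiation forces on two particles,
        # assuming equal acoustic contrast so that F scales as diameter**3
        return (d1_um / d2_um) ** 3

    # ~40 um bead-bound CTC complex vs an ~8 um normal blood cell
    print(force_ratio(40, 8))  # -> 125, i.e. roughly a hundredfold
    ```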

  19. Using multi-frequency acoustic attenuation to monitor grain size and concentration of suspended sediment in rivers.

    PubMed

    Moore, S A; Le Coz, J; Hurther, D; Paquier, A

    2013-04-01

    Multi-frequency acoustic backscatter profiles recorded with side-looking acoustic Doppler current profilers are used to monitor the concentration and size of sedimentary particles suspended in fluvial environments. Data at 300, 600, and 1200 kHz are presented from the Isère River in France where the dominant particles in suspension are silt and clay sizes. The contribution of suspended sediment to the through-water attenuation was determined for three high concentration (> 100 mg/L) events and compared to theoretical values for spherical particles having size distributions that were measured by laser diffraction in water samples. Agreement was good for the 300 kHz data, but it worsened with increasing frequency. A method for the determination of grain size using multi-frequency attenuation data is presented considering models for spherical and oblate spheroidal particles. When the resulting size estimates are used to convert sediment attenuation to concentration, the spheroidal model provides the best agreement with optical estimates of concentration, but the aspect ratio and grain size that provide the best fit differ between events. The acoustic estimates of size were one-third the values from laser grain sizing. This agreement is encouraging considering optical and acoustical instruments measure different parameters.

  20. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  1. Fundamental quantum noise mapping with tunnelling microscopes tested at surface structures of subatomic lateral size.

    PubMed

    Herz, Markus; Bouvron, Samuel; Ćavar, Elizabeta; Fonin, Mikhail; Belzig, Wolfgang; Scheer, Elke

    2013-10-21

    We present a measurement scheme that enables quantitative detection of the shot noise in a scanning tunnelling microscope while scanning the sample. As test objects we study defect structures produced on an iridium single crystal at low temperatures. The defect structures appear in the constant current images as protrusions with curvature radii well below the atomic diameter. The measured power spectral density of the noise is very near to the quantum limit with Fano factor F = 1. While the constant current images show detailed structures expected for tunnelling involving d-atomic orbitals of Ir, we find the current noise to be without pronounced spatial variation as expected for shot noise arising from statistically independent events.
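    For orientation, the quantum-limit benchmark used here is the full shot-noise power spectral density S_I = 2eI for uncorrelated tunnelling events; the Fano factor is the measured PSD in units of that value. A minimal sketch with illustrative numbers (not the authors' data):

    ```python
    E = 1.602176634e-19  # elementary charge, C

    def fano_factor(psd, current):
        # F = S_I / (2 e I); F = 1 means full (Poissonian) shot noise
        return psd / (2 * E * abs(current))

    s_full = 2 * E * 1e-9             # PSD of full shot noise at 1 nA
    print(fano_factor(s_full, 1e-9))  # -> 1.0
    ```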

  2. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
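    The imprecision the author documents is easy to quantify with an approximate confidence interval for a reliability coefficient. The sketch below uses the Feldt-type F approximation for coefficient alpha, (1 - alpha_hat)/(1 - alpha) ~ F(n-1, (n-1)(k-1)); the reliability and test length are illustrative, and contrasting n = 36 (the median interjudge sample size found) with a large n shows how wide the small-sample interval remains.

    ```python
    from scipy.stats import f

    def feldt_ci(alpha_hat, n_subjects, k_items, conf=0.95):
        # approximate CI for coefficient alpha (Feldt-type F result)
        df1, df2 = n_subjects - 1, (n_subjects - 1) * (k_items - 1)
        g = 1 - conf
        lo = 1 - (1 - alpha_hat) / f.ppf(g / 2, df1, df2)
        hi = 1 - (1 - alpha_hat) / f.ppf(1 - g / 2, df1, df2)
        return lo, hi

    print(feldt_ci(0.80, 36, 20))   # small n: interval spans roughly 0.66-0.87
    print(feldt_ci(0.80, 400, 20))  # large n: much tighter
    ```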

  3. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  4. Non-linear behaviour of electrical parameters in porous, water-saturated rocks: a model to predict pore size distribution

    NASA Astrophysics Data System (ADS)

    Hallbauer-Zadorozhnaya, Valeriya; Santarato, Giovanni; Abu Zeid, Nasser

    2015-08-01

    In this paper, two separate but related goals are tackled. The first is to demonstrate that in some saturated rock textures the non-linear behaviour of induced polarization (IP) and the violation of Ohm's law are not only real phenomena but can also be satisfactorily predicted by a suitable physical-mathematical model, which is our second goal. This model is based on Fick's second law. As the model links the specific dependence of resistivity and chargeability of a laboratory sample on the injected current to its pore size distribution, it is able to predict pore size distribution from laboratory measurements, in good agreement with mercury injection capillary pressure test results. This opens up the possibility of hydrogeophysical applications on a macro scale. Mathematical modelling shows that the chargeability acquired in the field under normal conditions, that is at low current, will always be very small and approximately proportional to the applied current. A suitable field test site for demonstrating the possible dependence of both resistivity and chargeability on current was selected and a specific measuring strategy was established. Two data sets were acquired using different injected current strengths, while keeping the charging time constant. Observed variations of resistivity and chargeability are in agreement with those predicted by the mathematical model. These field test data should, however, be considered preliminary. If confirmed by further evidence, these facts may lead to changes in the procedure for acquiring field measurements in the future, and perhaps encourage the design and building of a new, purpose-built geo-resistivity meter. This paper also shows that the well-known Marshall and Madden's equations based on Fick's law cannot be solved without specific boundary conditions.

  5. Assessing methods to specify the target difference for a randomised controlled trial: DELTA (Difference ELicitation in TriAls) review.

    PubMed

    Cook, Jonathan A; Hislop, Jennifer; Adewuyi, Temitope E; Harrild, Kirsten; Altman, Douglas G; Ramsay, Craig R; Fraser, Cynthia; Buckley, Brian; Fayers, Peter; Harvey, Ian; Briggs, Andrew H; Norrie, John D; Fergusson, Dean; Ford, Ian; Vale, Luke D

    2014-05-01

    The randomised controlled trial (RCT) is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to the design and validity of an RCT is a calculation of the number of participants needed (the sample size). The value used to determine the sample size can be considered the 'target difference'. From both a scientific and an ethical standpoint, selecting an appropriate target difference is of crucial importance. Determination of the target difference, as opposed to statistical approaches to calculating the sample size, has been greatly neglected; although a variety of approaches have been proposed, the current state of the evidence is unclear. The aim was to provide an overview of the current evidence regarding specifying the target difference in an RCT sample size calculation. The specific objectives were to conduct a systematic review of methods for specifying a target difference; to evaluate current practice by surveying triallists; to develop guidance on specifying the target difference in an RCT; and to identify future research needs. The biomedical and social science databases searched were MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, Education Resources Information Center (ERIC) and Scopus for in-press publications. All were searched from 1966 or the earliest date of database coverage, and searches were undertaken between November 2010 and January 2011. There were three interlinked components: (1) a systematic review of methods for specifying a target difference for RCTs, via a comprehensive search strategy involving an electronic literature search of biomedical and some non-biomedical databases and clinical trials textbooks; (2) identification of current trial practice using two surveys of triallists - members of the Society for Clinical Trials (SCT) were invited to complete an online survey and were asked about their awareness and use of, and willingness to recommend, methods; one individual per triallist group [UK Clinical Research Collaboration (UKCRC)-registered Clinical Trials Units (CTUs), Medical Research Council (MRC) UK Hubs for Trials Methodology Research and National Institute for Health Research (NIHR) UK Research Design Services (RDS)] was invited to complete a survey; (3) production of a structured guidance document to aid the design of future trials, with draft guidance developed from the results of the systematic review and surveys by the project steering and advisory groups. The review focused on methods for specifying the target difference in an RCT and was not restricted to any type of intervention or condition. The search identified 11,485 potentially relevant studies. In total, 1434 were selected for full-text assessment and 777 were included in the review. 
Seven methods to specify the target difference for an RCT were identified - anchor, distribution, health economic, opinion-seeking, pilot study, review of evidence base (RoEB) and standardised effect size (SES) - each having important variations in implementation. A total of 216 of the included studies used more than one method. A total of 180 (15%) responses to the SCT survey were received, representing 13 countries. Awareness of methods ranged from 38% (n = 69) for the health economic method to 90% (n = 162) for the pilot study method. Of the 61 surveys sent out to UK triallist groups, 34 (56%) responses were received. Awareness ranged from 97% (n = 33) for the RoEB and pilot study methods to only 41% (n = 14) for the distribution method. Based on the most recent trial, all bar three groups (91%, n = 30) used a formal method. Guidance was developed on the use of each method and on reporting the sample size calculation in a trial protocol and results paper. There is a clear need for greater use of formal methods to determine the target difference and better reporting of its specification. Raising the standard of RCT sample size calculations and their reporting would help health professionals, patients, researchers and funders to judge the strength of the evidence and ensure better use of scarce resources. Funded by the Medical Research Council UK and the National Institute for Health Research Joint Methodology Research programme.
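    Whichever elicitation method is used, the chosen target difference drives the trial size through the standard two-arm formula, and it is worth seeing how sharply it does so. A minimal sketch (the textbook calculation with illustrative numbers, not taken from the review):

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(target_diff, sd, alpha=0.05, power=0.9):
        # two-arm sample size for a continuous outcome: the target
        # difference (delta) enters as an inverse square
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (sd * z / target_diff) ** 2)

    # halving the target difference roughly quadruples the trial
    print(n_per_arm(0.50, 1.0))  # -> 85 per arm
    print(n_per_arm(0.25, 1.0))  # -> 337 per arm
    ```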

  6. Biomechanical behavior of bone scaffolds made of additive manufactured tricalciumphosphate and titanium alloy under different loading conditions.

    PubMed

    Wieding, Jan; Fritsche, Andreas; Heinl, Peter; Körner, Carolin; Cornelsen, Matthias; Seitz, Hermann; Mittelmeier, Wolfram; Bader, Rainer

    2013-12-16

    The repair of large segmental bone defects caused by fracture, tumor or infection remains challenging in orthopedic surgery. The capabilities of two different bone scaffold materials, sintered tricalcium phosphate (TCP) and a titanium alloy (Ti6Al4V), were determined by mechanical and biomechanical testing. All scaffolds were fabricated by means of additive manufacturing techniques with identical design and controlled pore geometry. Small-sized sintered TCP scaffolds (10 mm diameter, 21 mm length) were fabricated as dense and open-porous samples and tested under axial loading. Material properties for the titanium alloy were determined using both tensile (dense) and compressive (open-porous) test samples. Furthermore, large-sized open-porous TCP and titanium alloy scaffolds (30 mm in height and diameter, 700 µm pore size) were tested in a biomechanical setup simulating a large segmental bone defect, using a composite femur stabilized with an osteosynthesis plate. Static physiologic loads (1.9 kN) were applied in these tests. Ultimate compressive strength of the TCP samples was 11.2 ± 0.7 MPa and 2.2 ± 0.3 MPa for the dense and the open-porous samples, respectively. Tensile strength and ultimate compressive strength were 909.8 ± 4.9 MPa and 183.3 ± 3.7 MPa for the dense and the open-porous titanium alloy samples, respectively. Furthermore, the biomechanical results showed good mechanical stability for the titanium alloy scaffolds. TCP scaffolds failed at 30% of the maximum load. Based on these data, the 3D-printed TCP scaffolds tested cannot currently be recommended for high load-bearing situations. Scaffolds made of titanium could be optimized to meet the biomechanical requirements.

  7. Does an uneven sample size distribution across settings matter in cross-classified multilevel modeling? Results of a simulation study.

    PubMed

    Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C

    2018-06-06

    Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and take the observations however they are distributed across the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random data sets were simulated for each combination of five school- and neighborhood-level variance conditions and the three imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were captured 93-96% of the time. Only 5% of models failed to capture the neighborhood variance; 6% failed to capture the school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Evaluation of a formula that categorizes female gray wolf breeding status by nipple size

    USGS Publications Warehouse

    Barber-Meyer, Shannon M.; Mech, L. David

    2015-01-01

    The proportion by age class of wild Canis lupus (Gray Wolf) females that reproduce in any given year remains unclear; thus, we evaluated the applicability to our long-term (1972–2013) data set of the Mech et al. (1993) formula that categorizes female Gray Wolf breeding status by nipple size and time of year. We used the formula to classify Gray Wolves from 68 capture events into 4 categories (yearling, adult non-breeder, former breeder, current breeder). To address issues with small sample size and variance, we created an ambiguity index to allow some Gray Wolves to be classed into 2 categories. We classified 20 nipple measurements ambiguously: 16 current or former breeder, 3 former or adult non-breeder, and 1 yearling or adult non-breeder. The formula unambiguously classified 48 (71%) of the nipple measurements; based on supplemental field evidence, at least 5 (10%) of these were incorrect. When used in conjunction with an ambiguity index we developed and with corrections made for classifications involving very large nipples, and supplemented with available field evidence, the Mech et al. (1993) formula provided reasonably reliable classification of breeding status in wild female Gray Wolves.

  9. Thickness-dependently enhanced photodetection performance of vertically grown SnS2 nanoflakes with large size and high production.

    PubMed

    Jia, Xiansheng; Tang, Chengchun; Pan, Ruhao; Long, Yun-Ze; Gu, Changzhi; Li, Junjie

    2018-05-10

    Photodetection based on two-dimensional (2D) SnS2 has attracted growing interest due to its superiority in response rate and responsivity, but high-quality growth and high-performance photodetection with 2D SnS2 still face great challenges. Here, high-quality SnS2 nanoflakes with large size and high production yield are grown vertically on a Si substrate by a modified CVD method, having an average size of 30 μm with different thicknesses. A single SnS2 nanoflake-based phototransistor was then fabricated, exhibiting a high current on/off ratio of 10^7 and excellent photodetection performance, including fast response rates, low dark current, and high responsivity and detectivity. Specifically, the SnS2 nanoflakes show thickness-dependent photodetection capability, and the highest responsivity of 354.4 A W−1 is obtained at an average thickness of 100.5 nm. A sensitization process using an HfO2 nanolayer can further enhance the responsivity up to 1922 A W−1. Our work provides an efficient path to select SnS2 crystal samples with the optimal thickness as promising candidates for high-performance optoelectronic applications.

  10. Photoacoustic simulation study of chirp excitation response from different size absorbers

    NASA Astrophysics Data System (ADS)

    Jnawali, K.; Chinni, B.; Dogra, V.; Rao, N.

    2017-03-01

    Photoacoustic (PA) imaging is a hybrid imaging modality that combines the strengths of optical and ultrasound imaging. The nanosecond (ns) pulsed lasers used in current PA imaging systems are expensive and bulky, and they often waste energy. We propose and evaluate, through simulations, the use of a continuous-wave (CW) laser whose amplitude is linear-frequency-modulated (a chirp) for PA imaging. The chirp signal offers potential improvement in signal-to-side-lobe ratio (SSR) and full control over the PA signal frequencies excited in the sample. The PA signal spectrum is a function of absorber size and the frequencies present in the chirp. A mismatch between the input chirp spectrum and the output PA signal spectrum can affect the compressed pulse recovered by cross-correlating the two, and we have quantitatively characterized this effect. The k-Wave MATLAB toolbox was used to simulate PA signals in three dimensions for absorbers ranging in size from 0.1 mm to 0.6 mm, in response to laser excitation whose amplitude is swept linearly from 0.5 MHz to 4 MHz. This sweep frequency range was chosen based on spectrum analysis of a PA signal generated from ex-vivo human prostate tissue samples. For comparison, the energy wasted by a ns laser pulse was also estimated. For the chirp methodology, the compressed pulse peak amplitude, pulse width and side-lobe structure parameters were extracted for different absorber sizes. While the SSR increased six-fold with absorber size, the pulse width decreased by 25%.
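    The pulse-compression step at the heart of this approach is a cross-correlation of the received PA signal with the transmitted sweep. The following toy sketch (plain matched filtering, not the authors' k-Wave simulation) builds a 0.5-4 MHz linear chirp, delays it to stand in for an absorber at depth, and recovers the delay from the correlation peak; the sampling rate and delay are assumptions.

    ```python
    import numpy as np
    from scipy.signal import chirp, correlate

    fs = 50e6                        # assumed 50 MHz sampling rate
    t = np.arange(0, 1e-3, 1 / fs)   # 1 ms sweep
    tx = chirp(t, f0=0.5e6, t1=t[-1], f1=4e6)  # 0.5-4 MHz linear chirp

    d = int(20e-6 * fs)              # toy absorber delay of 20 us
    rx = np.zeros_like(tx)
    rx[d:] = tx[:-d]                 # received signal: delayed copy

    # matched filtering: the compressed pulse peaks at the absorber delay
    xc = correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(xc)) - (len(tx) - 1)
    print(lag / fs)                  # -> ~2e-05 s
    ```

    In the paper's setting the absorber band-limits the excited spectrum, which broadens the compressed pulse and raises the side lobes; the toy above omits that filtering.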

  11. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  12. Automatic liquid handling for life science: a critical review of the current state of the art.

    PubMed

    Kong, Fanwei; Yuan, Liang; Zheng, Yuan F; Chen, Weidong

    2012-06-01

    Liquid handling plays a pivotal role in life science laboratories. In experiments such as gene sequencing, protein crystallization, antibody testing, and drug screening, liquid biosamples frequently must be transferred between containers of varying sizes and/or dispensed onto substrates of varying types. The sample volumes are usually small, at the micro- or nanoliter level, and the number of transferred samples can be huge when investigating large-scope combinatorial conditions. Under these conditions, liquid handling by hand is tedious, time-consuming, and impractical. Consequently, there is a strong demand for automated liquid-handling methods such as sensor-integrated robotic systems. In this article, we survey the current state of the art in automatic liquid handling, including technologies developed by both industry and research institutions. We focus on methods for dealing with small volumes at high throughput and point out challenges for future advancements.

  13. IC(B,T,STRAIN) Characterisation of a Nb3Sn Internal Tin Strand with Enhanced Specification for Use in Fusion Conductors

    NASA Astrophysics Data System (ADS)

    Pasztor, G.; Bruzzone, P.

    2004-06-01

    The dc performance of a recently produced internal tin route Nb3Sn strand with enhanced specification is studied extensively and compared with predecessor wires manufactured by the suppliers for the ITER Model Coils in 1996. The wire has been selected for use in a full size, developmental cable-in-conduit conductor sample, which is being tested in the SULTAN Test Facility. The critical current, Ic, and the index of the current/voltage characteristic, n, are measured over a broad range of field and temperature, using ITER standard sample holders, made of TiAlV grooved cylinders. The behavior of Ic versus applied tensile strain is also investigated at 4.2 K and 12 T, on straight specimens. Scaling law parameters are drawn from the fit of the experimental results. The implications of the test results to the design of the fusion conductors are discussed.

  14. Inferred Paternity and Male Reproductive Success in a Killer Whale (Orcinus orca) Population.

    PubMed

    Ford, Michael J; Hanson, M Bradley; Hempelmann, Jennifer A; Ayres, Katherine L; Emmons, Candice K; Schorr, Gregory S; Baird, Robin W; Balcomb, Kenneth C; Wasser, Samuel K; Parsons, Kim M; Balcomb-Bartok, Kelly

    2011-01-01

    We used data from 78 individuals at 26 microsatellite loci to infer parental and sibling relationships within a community of fish-eating ("resident") eastern North Pacific killer whales (Orcinus orca). Paternity analysis involving 15 mother/calf pairs and 8 potential fathers and whole-pedigree analysis of the entire sample produced consistent results. The variance in male reproductive success was greater than expected by chance and similar to that of other aquatic mammals. Although the number of confirmed paternities was small, reproductive success appeared to increase with male age and size. We found no evidence that males from outside this small population sired any of the sampled individuals. In contrast to previous results in a different population, many offspring were the result of matings within the same "pod" (long-term social group). Despite this pattern of breeding within social groups, we found no evidence of offspring produced by matings between close relatives, and the average internal relatedness of individuals was significantly less than expected if mating were random. The population's estimated effective size was <30 or about 1/3 of the current census size. Patterns of allele frequency variation were consistent with a population bottleneck.

  15. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a circumstance that requires a less precise substitution in this subset of participants. A score function based on the U-statistic can address both issues: 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
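    To make the scoring idea concrete, here is a minimal sketch of a pairwise win-score U-statistic of the general kind described: an intervening SCE dominates the comparison, and the continuous change score is compared only when neither patient had an event. The data layout and tie-handling are illustrative assumptions, not the authors' exact score function.

    ```python
    import itertools

    def pair_score(a, b):
        # +1 if the treated patient (a) wins the pairing, -1 if the
        # control patient (b) wins, 0 for a tie; an SCE trumps the
        # continuous endpoint
        if a["sce"] != b["sce"]:
            return 1 if b["sce"] else -1
        if a["sce"]:
            return 0  # both had events; a fuller score might compare times
        if a["change"] == b["change"]:
            return 0
        return 1 if a["change"] > b["change"] else -1

    def u_score(treated, control):
        s = sum(pair_score(a, b) for a, b in itertools.product(treated, control))
        return s / (len(treated) * len(control))

    treated = [{"sce": False, "change": 5.2}, {"sce": False, "change": 1.0}]
    control = [{"sce": True, "change": None}, {"sce": False, "change": 0.4}]
    print(u_score(treated, control))  # -> 1.0: treatment wins every pairing
    ```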

  16. Population entropies estimates of proteins

    NASA Astrophysics Data System (ADS)

    Low, Wai Yee

    2017-05-01

    The Shannon entropy equation provides a way to estimate the variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and correction for alignment size bias is therefore needed. In the current work, an R-based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study of how well this new algorithm performs on simulated datasets with various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
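    A rough sketch of the regression-and-subsampling idea follows (the general technique, not the EntropyCorrect implementation): the plug-in entropy is biased low roughly in proportion to 1/n, so regressing subsample entropies on 1/n and reading off the intercept extrapolates to infinite sample size. The subsample sizes and toy alignment column are assumptions.

    ```python
    import math, random

    def shannon_entropy(column):
        # plug-in (maximum-likelihood) entropy of one column, in bits
        n = len(column)
        return -sum((c / n) * math.log2(c / n)
                    for c in (column.count(a) for a in set(column)))

    def population_entropy(column, sizes=(20, 40, 80, 160), reps=200):
        # fit entropy against 1/n; the intercept (1/n -> 0) is the estimate
        xs, ys = [], []
        for n in sizes:
            for _ in range(reps):
                xs.append(1 / n)
                ys.append(shannon_entropy(random.sample(column, n)))
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return my - b * mx

    column = random.choices("ACDEFGHIKLMNPQRSTVWY", k=500)  # toy column
    print(population_entropy(column))
    ```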

  17. Porosity dependence of terahertz emission of porous silicon investigated using reflection geometry terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Mabilangan, Arvin I.; Lopez, Lorenzo P.; Faustino, Maria Angela B.; Muldera, Joselito E.; Cabello, Neil Irvin F.; Estacio, Elmer S.; Salvador, Arnel A.; Somintac, Armando S.

    2016-12-01

    Porosity-dependent terahertz emission of porous silicon (PSi) was studied. The PSi samples were fabricated via electrochemical etching of boron-doped (100) silicon in a solution containing 48% hydrofluoric acid, deionized water and absolute ethanol in a 1:3:4 volumetric ratio. The porosity was controlled by varying the anodic current supplied to each sample. The samples were then optically characterized via normal-incidence reflectance spectroscopy to obtain values for their respective refractive indices and porosities. The absorbance of each sample was also computed using the data from its reflectance spectrum. Terahertz emission of each sample was acquired through terahertz time-domain spectroscopy. A decreasing trend in THz signal power was observed as the porosity of each PSi sample was increased. This was caused by the decrease in absorption strength as the silicon crystallite size in the PSi was reduced.

  18. Application of nonlinear ultrasonics to inspection of stainless steel for dry storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulrich, Timothy James II; Anderson, Brian E.; Remillieux, Marcel C.

    This report summarizes technical work conducted by LANL staff and international collaborators in support of the UFD Storage Experimentation effort. The focus of the current technical work is the detection and imaging of a failure mechanism known as stress corrosion cracking (SCC) in stainless steel using the nonlinear ultrasonic technique known as TREND. One of the difficulties faced in previous work is finding samples that contain realistically sized SCC. This year such samples were obtained from EPRI, and measurements made on these samples are reported here. One of the key findings is the ability to detect subsurface changes to the direction in which a crack is penetrating into the sample. This result follows from last year's report, which demonstrated the ability of TREND techniques to image features below the sample surface. A new collaboration was established with AGH University of Science and Technology, Krakow, Poland.

  19. Generic particulate-monitoring system for retrofit to Hanford exhaust stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camman, J.W.; Carbaugh, E.H.

    1982-11-01

    Evaluations of 72 sampling and monitoring systems were performed at Hanford as the initial phase of a program to upgrade such systems. Each evaluation included determination of theoretical sampling efficiencies for particle sizes ranging from 0.5 to 10 micrometers aerodynamic equivalent diameter, addressing anisokinetic bias, sample transport line losses, and collector device efficiency. Upgrades needed to meet current Department of Energy guidance for effluent sampling and monitoring were identified, and a cost for each upgrade was estimated. A relative priority for each system's upgrade was then established based on evaluation results, current operational status, and future plans for the facility being exhausted. Common system upgrade requirements led to the development of a generic design for common components of an exhaust stack sampling and monitoring system for airborne radioactive particulates. The generic design consists of commercially available off-the-shelf components to the extent practical and will simplify future stack sampling and monitoring system design, fabrication, and installation efforts. Evaluation results and their significance to system upgrades are emphasized. A brief discussion of the analytical models used and experience to date with the upgrade program is included. Development of the generic stack sampling and monitoring system design is outlined. Generic system design features and limitations are presented. Requirements for retrofitting the generic system to existing exhaust stacks are defined, and benefits derived from its application are discussed.

  20. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
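    For orientation, the mechanics of the procedure under study are simple: at the interim, the variance of the pooled data, computed while ignoring treatment labels (the one-sample variance estimator), replaces the planning variance in the usual two-sample formula. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np
    from scipy.stats import norm

    def blinded_reestimated_n(pooled, delta, alpha=0.025, power=0.9):
        s2 = np.var(pooled, ddof=1)   # blinded one-sample variance
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        return int(np.ceil(2 * s2 * z ** 2 / delta ** 2))

    rng = np.random.default_rng(1)
    interim = rng.normal(0.0, 1.3, size=120)  # both arms pooled, labels hidden
    print(blinded_reestimated_n(interim, delta=0.5))
    ```

    Note that the pooled variance also absorbs any between-arm mean difference, which is part of why the final t-test's distribution, and hence the type I error, needs the careful characterization the paper provides.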

  1. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only and participants attending a lecture combined with using a smartphone application to calculate sample sizes; it also explored factors affecting post-test scores after training in sample size calculation and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group learned sample size calculation better (2.7 points out of a maximum of 10, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not plan to conduct research projects (by 0.9 points, 95% CI: 0.5 - 1.4). The majority of participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  2. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  3. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
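    In the spirit of that approach (the article works in R; Python is used below to keep this document's examples in one language), a Monte Carlo power analysis simulates data from the assumed model and takes the rejection rate as the power, walking the sample size up to the target. The model and its parameters are illustrative.

    ```python
    import numpy as np
    from scipy.stats import linregress

    def mc_power(n, slope, sigma=1.0, n_sims=2000, alpha=0.05, seed=1):
        # simulate y = slope*x + noise; power = rejection rate of the
        # t-test on the fitted slope
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_sims):
            x = rng.normal(size=n)
            y = slope * x + rng.normal(scale=sigma, size=n)
            hits += linregress(x, y).pvalue < alpha
        return hits / n_sims

    for n in (30, 50, 80):  # increase n until the target power is reached
        print(n, mc_power(n, slope=0.4))
    ```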

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under given marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stage when the focus of interest is testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
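    The paradox motivating the paper is easy to reproduce: for two raters sharing a common marginal prevalence p, chance agreement is p^2 + (1-p)^2, so the same observed proportion of agreement can map to very different kappas. A minimal sketch:

    ```python
    def kappa_from_agreement(p_o, prevalence):
        # Cohen's kappa for two raters with a common marginal prevalence
        p_e = prevalence ** 2 + (1 - prevalence) ** 2
        return (p_o - p_e) / (1 - p_e)

    # identical 85% agreement, very different kappas
    print(kappa_from_agreement(0.85, 0.50))  # -> 0.70
    print(kappa_from_agreement(0.85, 0.95))  # -> about -0.58
    ```

    This is why anchoring the calculation on a simple proportion of agreement at a stated prevalence, as proposed, is more transparent than specifying kappa directly.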

  5. Performance Evaluation of the Operational Air Quality Monitor for Water Testing Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Wallace, William T.; Limero, Thomas F.; Gazda, Daniel B.; Macatangay, Ariel V.; Dwivedi, Prabha; Fernandez, Facundo M.

    2014-01-01

    In the history of manned spaceflight, environmental monitoring has relied heavily on archival sampling. For short missions, this type of sample collection was sufficient; returned samples provided a snapshot of the presence of chemical and biological contaminants in the spacecraft air and water. However, with the construction of the International Space Station (ISS) and the subsequent extension of mission durations, soon to be up to one year, the need for enhanced, real-time environmental monitoring became more pressing. The past several years have seen the implementation of several real-time monitors aboard the ISS, complemented with reduced archival sampling. The station air is currently monitored for volatile organic compounds (VOCs) using gas chromatography-differential mobility spectrometry (Air Quality Monitor [AQM]). The water on ISS is analyzed to measure total organic carbon and biocide concentrations using the Total Organic Carbon Analyzer (TOCA) and the Colorimetric Water Quality Monitoring Kit (CWQMK), respectively. The current air and water monitors provide important data, but the number and size of the different instruments make them impractical for future exploration missions. It is apparent that there is still a need for improvements in environmental monitoring capabilities. One such improvement could be realized by modifying a single instrument to analyze both air and water. As the AQM currently provides quantitative, compound-specific information for target compounds present in air samples, and many of the compounds are also targets for water quality monitoring, this instrument provides a logical starting point to evaluate the feasibility of this approach. In this presentation, we will discuss our recent studies aimed at determining an appropriate method for introducing VOCs from water samples into the gas phase, and our current work, in which an electro-thermal vaporization unit has been interfaced with the AQM to analyze target analytes at the concentrations at which they are routinely detected in archival water samples from the ISS.

  6. Apollo rocks, fines and soil cores

    NASA Astrophysics Data System (ADS)

    Allton, J.; Bevill, T.

    Apollo rocks and soils not only established basic lunar properties and ground truth for global remote sensing, they also provided important lessons for planetary protection (Adv. Space Res., 1998, v. 22, no. 3, pp. 373-382). The six Apollo missions returned 2196 samples weighing 381.7 kg, comprising rocks, fines, soil cores and 2 gas samples. By examining which samples were allocated for scientific investigations, information was obtained on the usefulness of sampling strategy, sampling devices and containers, sample types and diversity, and on the sample size needed by various disciplines. Diversity was increased by using rakes to gather small rocks on the Moon and by removing fragments >1 mm from soils by sieving in the laboratory. Breccias and soil cores are internally diverse; per unit weight, these samples were more often allocated for research. Apollo investigators became adept at wringing information from very small sample sizes. In pushing the analytical limits, the main concern was a size adequate for representative sampling. Typical allocations for trace element analyses were 750 mg for rocks, 300 mg for fines and 70 mg for core subsamples. Age-dating and isotope systematics allocations were typically 1 g for rocks and fines, but only 10% of that amount for core depth subsamples. Historically, allocations for organics and microbiology were 4 g (10% for cores); modern allocations for biomarker detection are 100 mg. Other disciplines supported have been cosmogenic nuclides, rock and soil petrology, sedimentary volatiles, reflectance, magnetics, and biohazard studies. Highly applicable to future sample return missions was the Apollo experience with organic contamination, estimated at 1 to 5 ng/g of sample for Apollo 11 (Simoneit & Flory, 1970; Apollo 11, 12 & 13 Organic Contamination Monitoring History, U.C. Berkeley; Burlingame et al., 1970, Apollo 11 LSC, pp. 1779-1792). Eleven sources of contaminants, of which 7 are applicable to robotic missions, were identified and reduced, improving Apollo 12 samples to 0.1 ng/g. Apollo sample documentation preserves the parentage, orientation, location, packaging, handling and environmental histories of each of the 90,000 subsamples currently curated. Active research on Apollo samples continues today, and because 80% by weight of the Apollo collection remains pristine, researchers have a reservoir of material to support studies well into the future.

  7. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
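    In the co-primary setting, power at any single analysis is the joint probability that both standardized test statistics exceed the critical value, which a bivariate normal working model gives directly. A minimal single-stage sketch with illustrative effects and correlation (a group-sequential design would replace the single critical value with stagewise boundaries):

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def co_primary_power(n_per_arm, effects, rho, alpha=0.025):
        # P(Z1 > z_a and Z2 > z_a), with Zk ~ N(effect_k * sqrt(n/2), 1)
        # and corr(Z1, Z2) = rho
        z_a = norm.ppf(1 - alpha)
        shift = [e * np.sqrt(n_per_arm / 2) - z_a for e in effects]
        mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
        return mvn.cdf(shift)  # by symmetry of the centred bivariate normal

    print(co_primary_power(150, effects=(0.4, 0.5), rho=0.3))  # ~0.93
    ```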

  8. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required by the test. The formulas are applicable to cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated with the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
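    For reference, the test these formulas serve can be written compactly; the sketch below is the standard construction of Yuen's statistic with 20% trimming (shown in Python rather than any package-specific form, with invented data).

    ```python
    import numpy as np
    from scipy.stats import t, trim_mean

    def yuen_test(x, y, trim=0.2):
        def d(a):
            # squared standard error term from the winsorized variance
            a = np.sort(np.asarray(a, dtype=float))
            n = len(a)
            g = int(trim * n)
            h = n - 2 * g  # effective sample size after trimming
            w = np.concatenate(([a[g]] * g, a[g:n - g], [a[n - g - 1]] * g))
            return (n - 1) * np.var(w, ddof=1) / (h * (h - 1)), h

        dx, hx = d(x)
        dy, hy = d(y)
        t_stat = (trim_mean(x, trim) - trim_mean(y, trim)) / np.sqrt(dx + dy)
        df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
        return t_stat, 2 * t.sf(abs(t_stat), df)

    rng = np.random.default_rng(0)
    print(yuen_test(rng.normal(0, 1, 40), rng.normal(0.8, 3, 25)))
    ```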

  9. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented, and the relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and the amount of partial identification. The principal recommendations are: (1) do not exclude, but rather incorporate, partially identified sample segments into the estimation procedure; (2) try to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) use the maximum likelihood estimator, although the weighted least squares estimator and least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  10. Imaging Extended Emission-Line Regions of Obscured AGN with the Subaru Hyper Suprime-Cam Survey

    NASA Astrophysics Data System (ADS)

    Sun, Ai-Lei; Greene, Jenny E.; Zakamska, Nadia L.; Goulding, Andy; Strauss, Michael A.; Huang, Song; Johnson, Sean; Kawaguchi, Toshihiro; Matsuoka, Yoshiki; Marsteller, Alisabeth A.; Nagao, Tohru; Toba, Yoshiki

    2018-05-01

    Narrow-line regions excited by active galactic nuclei (AGN) are important for studying AGN photoionization and feedback. Their strong [O III] lines can be detected with broadband images, allowing morphological studies of these systems with large-area imaging surveys. We develop a new broad-band imaging technique to reconstruct images of the [O III] line using the Subaru Hyper Suprime-Cam (HSC) Survey, aided with spectra from the Sloan Digital Sky Survey (SDSS). The technique involves a careful subtraction of the galactic continuum to isolate emission from the [O III]λ5007 and [O III]λ4959 lines. Compared to traditional targeted observations, this technique is more efficient at covering larger samples without dedicated observational resources. We apply this technique to an SDSS spectroscopically selected sample of 300 obscured AGN at redshifts 0.1-0.7, uncovering extended emission-line region candidates with sizes up to tens of kpc. With the largest sample of uniformly derived narrow-line region sizes, we revisit the narrow-line region size-luminosity relation. The area and radii of the [O III] emission-line regions are strongly correlated with the AGN luminosity inferred from the mid-infrared (15 μm rest-frame), with a power-law slope of 0.62 (+0.05/-0.06 statistical, ±0.10 systematic), consistent with previous spectroscopic findings. We discuss the implications for the physics of AGN emission-line regions and future applications of this technique, which should be useful for current and next-generation imaging surveys to study AGN photoionization and feedback with large statistical samples.

  11. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

    The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report, the concept of the expected value of information is used to determine whether the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size, then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study, these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.

  12. Investigating Effects of Nano- to Micro-Ampere Alternating Current Stimulation on Trichophyton rubrum Growth.

    PubMed

    Kwon, Dong Rak; Kwon, Hyunjung; Lee, Woo Ram; Park, Joonsoo

    2016-10-01

    Fungi are eukaryotic microorganisms including yeasts and molds. Many studies have focused on modifying bacterial growth, but few on fungal growth. Microcurrent electricity may stimulate fungal growth. This study aims to investigate the effects of microcurrent electric stimulation on Trichophyton rubrum growth. Standard-sized inoculums of T. rubrum derived from a spore suspension were gently withdrawn with a sterile pipette and applied to twelve potato dextrose cornmeal agar (PDACC) plates with a sterile spreader. The twelve Petri dishes were divided into four groups. The applied electric current was 500 nA, 2 µA, and 4 µA in groups A, B, and C, respectively; no electric current was given in group D. In the first 48 hours, colonies appeared only in groups A and B (500 nA and 2 µA exposure), and colonies in group A (500 nA) were denser. Group C (4 µA) plates showed a barely visible film of fungus after 96 hours of incubation. Fungal growth became visible after 144 hours in the control group. Lower intensities of electric current caused faster fungal growth within the amperage range used in this study. Based on these results, further studies with a larger sample size, various fungal species, and various intensities of electric stimulation should be conducted.

  13. Evaluating estimators for numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.

    2007-01-01

    Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
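    The Chao estimator recommended here has a closed form that is easy to state. As a hedged illustration (the sighting counts below are invented, and this is the textbook Chao 1987 lower-bound form for model M_h, not necessarily the exact variant evaluated in the paper), a Python sketch:

      from collections import Counter

      def chao_mh(times_sighted):
          """Chao (1987) estimator for model M_h: N = S + f1^2 / (2*f2), where
          f1/f2 = numbers of animals sighted exactly once/twice and S is the
          number of distinct animals seen. Input: one count per distinct animal."""
          f = Counter(times_sighted)
          s_obs = len(times_sighted)
          f1, f2 = f.get(1, 0), f.get(2, 0)
          if f2 == 0:                      # bias-corrected form when f2 is zero
              return s_obs + f1 * (f1 - 1) / 2
          return s_obs + f1**2 / (2 * f2)

      # Hypothetical season: 14 females seen once, 6 twice, 3 three times.
      print(chao_mh([1]*14 + [2]*6 + [3]*3))   # 23 observed -> estimate ~39.3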

  14. Civilian Health Insurance Options of Military Retirees: Findings from a Pilot Survey

    DTIC Science & Technology

    2007-01-01

    while providing important information, was a pilot study with a small sample size. Understanding the potential impact of an increase in TRICARE...that relies on TRICARE—even if they do not currently use TRICARE—is relevant from an actuarial standpoint. Nonusers who view TRICARE as their...primary source of health insurance coverage will pose an actuarial risk if they become unhealthy in the future. Purpose of This Report This report

  15. Considerations for Integrating Women into Closed Occupations in the U.S. Special Operations Forces

    DTIC Science & Technology

    2015-05-01

    effectiveness of integration. Ideally, studies adopting an experimental design (using both test and control groups ) would be preferred, but sample sizes may...data -- a survey of SOF personnel and a series of focus group discussions -- collected by the research team regarding the potential challenges to... controlled positions. This report summarizes our research , analysis, and conclusions. We used a mixed-methods approach. We reviewed the current state of

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N2 adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area in all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of ZIF-8 samples.

  17. A uniplanar three-axis gradient set for in vivo magnetic resonance microscopy.

    PubMed

    Demyanenko, Andrey V; Zhao, Lin; Kee, Yun; Nie, Shuyi; Fraser, Scott E; Tyszka, J Michael

    2009-09-01

    We present an optimized uniplanar magnetic resonance gradient design specifically tailored for MR imaging applications in developmental biology and histology. Uniplanar gradient designs sacrifice gradient uniformity for high gradient efficiency and slew rate, and are attractive for surface imaging applications where open access from one side of the sample is required. However, decreasing the size of the uniplanar gradient set presents several unique engineering challenges, particularly for heat dissipation and thermal insulation of the sample from gradient heating. We demonstrate a new three-axis, target-field optimized uniplanar gradient coil design that combines efficient cooling and insulation to significantly reduce sample heating at sample-gradient distances of less than 5 mm. The instrument is designed for microscopy in horizontal bore magnets. Empirical gradient current efficiencies in the prototype coils lie between 3.75 G/cm/A and 4.5 G/cm/A, with current- and heating-limited maximum gradient strengths between 235 G/cm and 450 G/cm at a 2% duty cycle. The uniplanar gradient prototype is demonstrated with non-linearity corrections for both high-resolution structural imaging of tissue slices and for long time-course imaging of live, developing amphibian embryos in a horizontal bore 7 T magnet.

  18. Statistical methods for identifying and bounding a UXO target area or minefield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.

    2003-09-18

    The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units, have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
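    The probability in use (2) can be approximated with a few lines of simulation. The sketch below is a deliberately simplified stand-in for what Visual Sample Plan computes, not its actual algorithm: it assumes a circular target, equally spaced straight transects, perfect detection inside the swath, and hypothetical dimensions throughout.

      import numpy as np

      rng = np.random.default_rng(1)

      def p_missed(target_radius, spacing, swath_width, trials=100_000):
          """Monte Carlo probability that a circular target area is missed by
          equally spaced parallel transect swaths. Only the target's offset
          between two adjacent transect centerlines matters (1-D geometry)."""
          x = rng.uniform(0, spacing, trials)      # target-center offset
          reach = target_radius + swath_width / 2  # overlap condition distance
          detected = (x <= reach) | (spacing - x <= reach)
          return 1.0 - detected.mean()

      # Hypothetical survey: 50 m transect spacing, 2 m sensor swath.
      for r in (5, 10, 20, 26):
          print(f"target radius {r:2d} m -> P(missed) ~ {p_missed(r, 50, 2):.3f}")
      # Once spacing <= 2*radius + swath width, the target can no longer be missed.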

  19. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  20. Using long ssDNA polynucleotides to amplify STRs loci in degraded DNA samples

    PubMed Central

    Pérez Santángelo, Agustín; Corti Bielsa, Rodrigo M.; Sala, Andrea; Ginart, Santiago; Corach, Daniel

    2017-01-01

    Obtaining informative short tandem repeat (STR) profiles from degraded DNA samples is a challenging task usually undermined by locus or allele dropouts and peak-height imbalances observed in capillary electrophoresis (CE) electropherograms, especially for those markers with large amplicon sizes. We hereby show that current STR assays may be greatly improved for the detection of genetic markers in degraded DNA samples by using long single-stranded DNA polynucleotides (ssDNA polynucleotides) as surrogates for PCR primers. These long primers allow a closer annealing to the repeat sequences, thereby reducing the length of the template required for the amplification in fragmented DNA samples, while at the same time rendering amplicons of larger sizes suitable for multiplex assays. We also demonstrate that the annealing of long ssDNA polynucleotides does not need to be fully complementary in the 5’ region of the primers, thus allowing for the design of practically any long primer sequence for developing new multiplex assays. Furthermore, genotyping of intact DNA samples could also benefit from utilizing long primers, since their close annealing to the target STR sequences may overcome wrong profiling generated by insertions/deletions present between the STR region and the annealing site of the primers. Additionally, long ssDNA polynucleotides might be utilized in multiplex PCR assays for other types of degraded or fragmented DNA, e.g. circulating, cell-free DNA (ccfDNA). PMID:29099837

  1. Mechanical Properties and Microstructure of AZ31B Magnesium Alloy Processed by I-ECAP

    NASA Astrophysics Data System (ADS)

    Gzyl, Michal; Rosochowski, Andrzej; Pesci, Raphael; Olejnik, Lech; Yakushina, Evgenia; Wood, Paul

    2014-03-01

    Incremental equal channel angular pressing (I-ECAP) is a severe plastic deformation process used to refine the grain size of metals that allows processing very long billets. As described in the current article, an AZ31B magnesium alloy was processed for the first time by three different routes of I-ECAP, namely A, BC, and C, at 523 K (250 °C). The structure of the material was homogenized and refined to an average grain size of ~5 μm, irrespective of the route used. Mechanical properties of the I-ECAPed samples in tension and compression were investigated. A strong influence of the processing route on the yield and fracture behavior of the material was established. It was found that texture controls the mechanical properties of AZ31B magnesium alloy subjected to I-ECAP. SEM and OM techniques were used to obtain microstructural images of the I-ECAPed samples subjected to tension and compression. Increased ductility after I-ECAP was attributed to twinning suppression and facilitation of slip on the basal plane. Shear bands were revealed in the samples processed by I-ECAP and subjected to tension. Tension-compression yield stress asymmetry in the samples tested along the extrusion direction was suppressed in the material processed by routes BC and C. This effect was attributed to textural development and microstructural homogenization. Twinning activities in fine- and coarse-grained samples have also been studied.

  2. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
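    For intuition about the orders of magnitude involved, the following Python sketch applies the classic two-proportion approximation for a binary exposure. This is not the package's algorithm (which handles confounders and continuous exposures via the logistic model); the prevalence and odds ratio below are invented for illustration.

      from math import ceil, sqrt
      from scipy.stats import norm

      def n_cases(p0, odds_ratio, alpha=0.05, power=0.80):
          """Approximate cases (= controls) needed to detect a given odds ratio
          for a binary exposure with prevalence p0 among controls."""
          p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # prevalence in cases
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          pbar = (p0 + p1) / 2
          num = (za * sqrt(2 * pbar * (1 - pbar))
                 + zb * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
          return ceil(num / (p1 - p0) ** 2)

      # 30% exposure prevalence in controls, target odds ratio 1.5:
      print(n_cases(0.30, 1.5))   # about 425 cases and 425 controls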

  3. Measuring nanoparticles size distribution in food and consumer products: a review.

    PubMed

    Calzolai, L; Gilliland, D; Rossi, F

    2012-08-01

    Nanoparticles are already used in several consumer products including food, food packaging and cosmetics, and their detection and measurement in food represent a particularly difficult challenge. In order to fill the void in the official definition of what constitutes a nanomaterial, the European Commission published in October 2011 its recommendation on the definition of 'nanomaterial'. This will have an impact in many different areas of legislation, such as the European Cosmetic Products Regulation, where the current definitions of nanomaterial will come under discussion regarding how they should be adapted in light of this new definition. This new definition calls for the measurement of the number-based particle size distribution in the 1-100 nm size range of all the primary particles present in the sample independently of whether they are in a free, unbound state or as part of an aggregate/agglomerate. This definition does present great technical challenges for those who must develop valid and compatible measuring methods. This review will give an overview of the current state of the art, focusing particularly on the suitability of the most used techniques for the size measurement of nanoparticles when addressing this new definition of nanomaterials. The problems to be overcome in measuring nanoparticles in food and consumer products will be illustrated with some practical examples. Finally, a possible way forward (based on the combination of different measuring techniques) for solving this challenging analytical problem is illustrated.

  4. Current collection from the space plasma through defects in high voltage solar array insulation. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Stillwell, R. P.

    1983-01-01

    For spacecraft operation in the near Earth environment, solar cell arrays constitute the major source of reliable long-term power. Optimization of mass and power efficiency results in a general requirement for high voltage solar arrays. The space plasma environment, though, can result in large currents being collected by exposed solar cells. A protective covering of transparent insulation is not a complete solution, inasmuch as defects in the insulation result in anomalously large currents being collected through the defects. Tests simulating the electron collection from small defects in an insulation have shown that there are two major collection modes. The first mode involves current enhancement by means of a surface phenomenon involving the surrounding insulator. In the second mode the current collection is enhanced by vaporization and ionization of the insulator materials, in addition to the surface enhancement of the first mode. A model for the electron collection in the surface-enhanced collection mode was developed. The model relates the secondary electron emission yield to the electron collection. It correctly predicts the qualitative effects of hole size, sample temperature and roughening of the sample surface. The theory was also shown to predict electron collection within a factor of two for the polymers teflon and polyimide.

  5. Effect of Subelement Spacing in RRP Nb3Sn Deformed Strands

    NASA Astrophysics Data System (ADS)

    Barzi, E.; Turrioni, D.; Alsharo'a, M.; Field, M.; Hong, S.; Parrell, J.; Yamada, R.; Zhang, Y.; Zlobin, A. V.

    2008-03-01

    The Restacked Rod Process (RRP) is the Nb3Sn strand technology presently producing the largest critical current densities at 4.2 K and 12 T. However, when subject to transverse plastic deformation, RRP subelements (SE) merge into each other, creating larger filaments with a somewhat continuous barrier. In this case, the strand sees a larger effective filament size and its instability can dramatically increase locally, leading to a cable quench. To reduce and possibly eliminate this effect, Oxford Instruments Superconducting Technology (OST) developed for FNAL a modified RRP strand design with larger Cu spacing between SEs arranged in a 60/61 array. Strand samples of this design with sizes from 0.7 to 1 mm were first evaluated for transport current properties. A comparison study was then performed between the regular 54/61 and the modified 60/61 design using 0.7 mm round and deformed strands. Finite element modeling of the deformed strands was also performed with ANSYS.

  6. New approaches to trials in glomerulonephritis.

    PubMed

    Craig, Jonathan C; Tong, Allison; Strippoli, Giovanni F M

    2017-01-01

    Randomized controlled trials are required to reliably identify interventions that improve outcomes for people with glomerulonephritis (GN). Observational studies, although easier to conduct, are inherently less reliable, even though the findings of the two study designs agree most of the time. Currently there are ∼790 trials in GN, but suboptimal design and reporting, together with small sample sizes, mean that they may not be reliable for decision making. If the history is somewhat bleak, the future looks bright, with recent initiatives to improve the quality, size and relevance of clinical trials in nephrology, including greater patient engagement, trial networks, core outcome sets, registry-based trials and adaptive designs. Given the current state of the evidence informing the care of people with GN, disruptive technologies and pervasive culture change are required to ensure that the potential of trials to improve the health of people with this complex condition is realized. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  7. A new benthic foraminiferal proxy for near-bottom current velocities in the Gulf of Cadiz, northeastern Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Schönfeld, Joachim

    2002-10-01

    Recent benthic foraminiferal assemblages were analyzed in the Gulf of Cadiz, northeastern Atlantic, to study the impact of the Mediterranean Outflow Water (MOW) undercurrent on the benthic environment. Foraminiferal counts and the analysis of specimens attached to hard substrates from 26 surface samples reveal a relationship of epibenthic assemblages with the sedimentary and hydrodynamic environment. Epibenthic species make up as much as 60% of the living assemblage at proximal sites with high current velocities and 3-18% in distal areas or near the margins of the MOW flow paths at low velocities. These foraminifers inhabit elevated substrates only within the MOW, which evidently provides an ecological niche for opportunistic suspension feeders. They adapt their settling elevation dynamically and occur at greater heights above the ambient sediment surface under stronger currents. Mobility, fixation strength, suspension feeding, and reproduction efficiency emerge as individual capabilities promoting the occupation of elevated substrates by certain epibenthic species. The active microhabitat selection is pursued as a basic strategy of these foraminifers to optimize their food acquisition. A better access to food sources stimulates reproduction and leads to a greater contribution of foraminiferal tests to the surface sediments. Elevated epibenthos percentages from the dead assemblage and current velocities prevailing at the sampling sites are closely correlated. A compilation including other data from southern Portugal, the Florida Straits, and the English Channel indicates an exponential relationship between epibenthic abundances and flow strength, implying that endobenthic species prevail even under high current velocities. However, a linear model provides a significantly better fit for the Gulf of Cadiz data. This relation is used in a case study to estimate near-bottom current strengths for the late Holocene Peak III contourite in core M39008-3. Trends and absolute values of current velocities inferred from the benthic foraminiferal proxy are on the same scale as estimates obtained from sediment grain-size distributions and hydrodynamic models. Epibenthic foraminifera thus bear a high potential as a proxy for palaeocurrent studies that may even overcome objections raised by predetermined grain-size distributions in deep-sea cores.

  8. Analytical approaches for the characterization and quantification of nanoparticles in food and beverages.

    PubMed

    Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud

    2017-01-01

    Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods, since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical abstract: two possible analytical strategies for the sizing and quantification of nanoparticles are asymmetric flow field-flow fractionation with multiple detectors (which determines the true size and a mass-based particle size distribution) and single-particle inductively coupled plasma mass spectrometry (which determines a spherical equivalent diameter of the particle and a number-based particle size distribution).

  9. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

    PubMed

    Cachelin, F M; Striegel-Moore, R H; Elder, K A

    1998-01-01

    Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

  10. The influence of staff training on challenging behaviour in individuals with intellectual disability: a review.

    PubMed

    Cox, Alison D; Dube, Charmayne; Temple, Beverley

    2015-03-01

    Many individuals with intellectual disability engage in challenging behaviour. This can significantly limit quality of life and also negatively impact caregivers (e.g., direct care staff, family caregivers and teachers). Fortunately, efficacious staff training may alleviate some negative side effects of client challenging behaviour. Currently, a systematic review of studies evaluating whether staff training influences client challenging behaviour has not been conducted. The purpose of this article was to identify emerging patterns and knowledge gaps and to make recommendations for future research on this topic. The literature search resulted in a total of 19 studies that met our inclusion criteria. Articles were separated into four staff training categories. Studies varied across sample size, support staff involved in training, study design, training duration and data collection strategy. The small number of included studies (n = 19) and few replication studies, alongside several other procedural limitations, prohibited the identification of a best-practice training approach. © The Author(s) 2014.

  11. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  12. The functional spectrum of low-frequency coding variation.

    PubMed

    Marth, Gabor T; Yu, Fuli; Indap, Amit R; Garimella, Kiran; Gravel, Simon; Leong, Wen Fung; Tyler-Smith, Chris; Bainbridge, Matthew; Blackwell, Tom; Zheng-Bradley, Xiangqun; Chen, Yuan; Challis, Danny; Clarke, Laura; Ball, Edward V; Cibulskis, Kristian; Cooper, David N; Fulton, Bob; Hartl, Chris; Koboldt, Dan; Muzny, Donna; Smith, Richard; Sougnez, Carrie; Stewart, Chip; Ward, Alistair; Yu, Jin; Xue, Yali; Altshuler, David; Bustamante, Carlos D; Clark, Andrew G; Daly, Mark; DePristo, Mark; Flicek, Paul; Gabriel, Stacey; Mardis, Elaine; Palotie, Aarno; Gibbs, Richard

    2011-09-14

    Rare coding variants constitute an important class of human genetic variation, but are underrepresented in current databases that are based on small population samples. Recent studies show that variants altering amino acid sequence and protein function are enriched at low variant allele frequency, 2 to 5%, but because of insufficient sample size it is not clear if the same trend holds for rare variants below 1% allele frequency. The 1000 Genomes Exon Pilot Project has collected deep-coverage exon-capture data in roughly 1,000 human genes, for nearly 700 samples. Although medical whole-exome projects are currently afoot, this is still the deepest reported sampling of a large number of human genes with next-generation technologies. According to the goals of the 1000 Genomes Project, we created effective informatics pipelines to process and analyze the data, and discovered 12,758 exonic SNPs, 70% of them novel, and 74% below 1% allele frequency in the seven population samples we examined. Our analysis confirms that coding variants below 1% allele frequency show increased population-specificity and are enriched for functional variants. This study represents a large step toward detecting and interpreting low frequency coding variation, clearly lays out technical steps for effective analysis of DNA capture data, and articulates functional and population properties of this important class of genetic variation.

  13. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    USGS Publications Warehouse

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing the efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist and identifying means to collaborate with scientists where standardization is limited but interest and need occur.

  14. High-Precision Isotope Ratio Measurements of Sub-Picogram Actinide Samples

    NASA Astrophysics Data System (ADS)

    Pollington, A. D.; Kinman, W.

    2016-12-01

    One of the most exciting trends in analytical geochemistry over the past decade is the push towards smaller and smaller sample sizes while simultaneously achieving high precision isotope ratio measurements. This trend has been driven by advances in clean chemistry protocols, and by significant breakthroughs in mass spectrometer ionization efficiency and detector quality (stability and noise for low signals). In this presentation I will focus on new techniques currently being developed at Los Alamos National Laboratory for the characterization of ultra-small samples (pg, fg, ag), with particular focus on actinide measurements by MC-ICP-MS. Analyses of U, Pu, Th and Am are routinely carried out in our facility using multi-ion counting techniques. I will describe some of the challenges associated with using exclusively ion counting methods (e.g., stability, detector cross calibration, etc.), and how we work to mitigate them. While the focus of much of the work currently being carried out is in the broad field of nuclear forensics and safeguards, the techniques that are being developed are directly applicable to many geologic questions that require analyses of small samples of U and Th, for example. In addition to the description of the technique development, I will present case studies demonstrating the precision and accuracy of the method as applied to real-world samples.

  15. Critical current and flux dynamics in Ag-doped FeSe superconductor

    NASA Astrophysics Data System (ADS)

    Galluzzi, A.; Polichetti, M.; Buchkov, K.; Nazarova, E.; Mancusi, D.; Pace, S.

    2017-02-01

    Measurements of DC magnetization as a function of temperature M(T), magnetic field M(H), and time M(t) have been performed in order to compare the superconducting and pinning properties of an undoped FeSe0.94 sample and a silver-doped FeSe0.94 + 6 wt% Ag sample. The M(T) curves indicate an improvement of the superconducting critical temperature and a reduction of the non-superconducting phase Fe7Se8 due to the silver doping. This is confirmed by the field- and temperature-dependent critical current density Jc(H,T) extracted from the superconducting hysteresis loops at different temperatures within the Bean critical state model. Moreover, the combined analysis of Jc(T) and of the pinning force Fp(H/Hirr) indicates that the pinning mechanisms in both samples can be described in the framework of collective pinning theory. The U*(T, J) curves show a pinning crossover from an elastic creep regime of intermediate-size flux bundles at low temperatures to a plastic creep regime at higher temperatures for both samples. Finally, the vortex hopping attempt time has been evaluated for both samples, and the results are comparable with the values reported in the literature for high-Tc materials.
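    As a pointer to how Jc is commonly extracted from hysteresis loops within the Bean critical state model, the sketch below applies the standard critical-state expression for a rectangular sample cross-section. The loop width and sample dimensions are hypothetical, and the exact geometric factor used in the paper may differ.

      def bean_jc(delta_m, a_cm, b_cm):
          """Bean-model critical current density (A/cm^2) from the width of the
          magnetization loop, delta_m (emu/cm^3), for a rectangular sample with
          in-field cross-section a x b (cm), a <= b:
          Jc = 20 * delta_m / (a * (1 - a / (3 * b)))."""
          assert a_cm <= b_cm, "convention: a is the shorter side"
          return 20.0 * delta_m / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

      # Hypothetical loop width at one temperature/field for a 0.2 x 0.3 cm face:
      print(f"Jc ~ {bean_jc(delta_m=150.0, a_cm=0.2, b_cm=0.3):.3g} A/cm^2")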

  16. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
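    The two-stage logic of the 'basic' scheme is easy to prototype. The simulation below is a hedged sketch, not the authors' calibrated protocol: the pass/fail threshold, the 'clear-cut' margin that permits early stopping, and the herd parameters are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      def basic_scheme(prevalence, n_wq, threshold=0.15, margin=0.05, reps=10_000):
          """Simulate a two-stage sequential assessment: score half the Welfare
          Quality sample; stop if the estimate is clearly above/below the
          threshold, otherwise score the second half and use the pooled estimate."""
          n1 = n_wq // 2
          sizes, fails = [], 0
          for _ in range(reps):
              p1 = rng.binomial(n1, prevalence) / n1
              if abs(p1 - threshold) > margin:       # clear-cut: stop at stage 1
                  sizes.append(n1)
                  fails += p1 > threshold
              else:                                  # borderline: take full sample
                  p2 = rng.binomial(n1, prevalence) / n1
                  sizes.append(2 * n1)
                  fails += (p1 + p2) / 2 > threshold
          return np.mean(sizes), fails / reps

      avg_n, p_fail = basic_scheme(prevalence=0.25, n_wq=60)
      print(f"average sample size {avg_n:.1f}, P(farm classified 'bad') {p_fail:.3f}")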

  17. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES^_U), n_U(ES^_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H_0: ES = 0 versus the alternative hypotheses H_1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
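    The mapping from an estimated effect size (and its CI bounds) to a post hoc sample size can be reproduced with a short routine. This sketch is an assumption-laden reconstruction of the generic calculation (one-sample t-test, 80% power, two-sided α = 0.05), not the authors' code, and the effect-size values fed to it are hypothetical.

      from math import ceil
      from scipy.stats import norm, nct, t

      def n_one_sample_t(es, alpha=0.05, power=0.80):
          """Smallest n giving the target power for a one-sample t-test of
          H0: ES = 0 when the true standardized effect size is es (> 0)."""
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          n = max(4, ceil(((za + zb) / es) ** 2))   # normal-theory starting point
          while True:                               # refine with noncentral t power
              df = n - 1
              tcrit = t.ppf(1 - alpha / 2, df)
              if 1 - nct.cdf(tcrit, df, es * n**0.5) >= power:
                  return n
              n += 1

      # Hypothetical ES point estimate and CI bounds -> point/interval n:
      for es in (0.60, 0.30, 1.20):
          print(f"ES = {es:.2f} -> n = {n_one_sample_t(es)}")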

  18. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined the potential errors associated with sample size in E, AS_stand, and JS estimates based on the original AS_tree and Fd data sets. From these analyses, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that tree-to-tree variations in Fd vary among different plots, and that a plot size large enough to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
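    The Monte Carlo subsampling described here can be prototyped in a few lines. The sketch below draws subsamples of per-tree Fd values to see how the potential error of the stand mean (JS) shrinks with sample size; the synthetic lognormal 'stand' merely stands in for the study's 58 measured trees.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic per-tree sap flux densities for a 58-tree stand (illustrative).
      fd_all = rng.lognormal(mean=3.0, sigma=0.35, size=58)
      js_true = fd_all.mean()

      def potential_error(n, reps=20_000):
          """Half-width of the central 95% range of the relative error of the
          stand-mean Fd estimated from n randomly chosen trees."""
          errs = np.array([rng.choice(fd_all, n, replace=False).mean() / js_true - 1
                           for _ in range(reps)])
          lo, hi = np.quantile(errs, [0.025, 0.975])
          return (hi - lo) / 2

      for n in (5, 10, 15, 20, 30):
          print(f"n = {n:2d}: potential error ~ ±{100 * potential_error(n):.1f}%")
      # An 'optimal' n is where this curve flattens; larger n buys little.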

  19. On the Size Dependence of the Chemical Properties of Cloud Droplets: Exploratory Studies by Aircraft

    NASA Astrophysics Data System (ADS)

    Twohy, Cynthia H.

    1992-09-01

    Clouds play an important role in the climate of the earth and in the transport and transformation of chemical species, but many questions about clouds remain unanswered. In particular, the chemical properties of droplets may vary with droplet size, with potentially important consequences. The counterflow virtual impactor (CVI) separates droplets from interstitial particles and gases in a cloud and also can collect droplets in discrete size ranges. As such, the CVI is a useful tool for investigating the chemical components present in droplets of different sizes and their potential interactions with cloud processes. The purpose of this work is twofold. First, the sampling characteristics of the airborne CVI are investigated, using data from a variety of experiments. A thorough understanding of CVI properties is necessary in order to utilize the acquired data judiciously and effectively. Although the impaction characteristics of the CVI seem to be predictable by theory, the airborne instrument is subject to influences that may result in a reduced transmission efficiency for droplets, particularly if the inlet is not properly aligned. Ways to alleviate this problem are being investigated, but currently the imperfect sampling efficiency must be taken into account during data interpretation. Relationships between the physical and chemical properties of residual particles from droplets collected by the CVI and droplet size are then explored in both stratiform and cumulus clouds. The effects of various cloud processes and measurement limitations upon these relationships are discussed. In one study, chemical analysis of different-sized droplets sampled in stratiform clouds showed a dependence of chemical composition on droplet size, with larger droplets containing higher proportions of sodium than non-sea-salt sulfate and ammonium. Larger droplets were also associated with larger residual particles, as expected from simple cloud nucleation theory. In a study of marine cumulus clouds, the CVI was combined with a cloud condensation nucleus spectrometer to study the supersaturation spectra of residual particles from droplets. The median critical supersaturation of the droplet residual particles was consistently less than or equal to the median critical supersaturation of ambient particles except at cloud top, where residual particles exhibited a variety of critical supersaturations.

  20. Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets

    NASA Astrophysics Data System (ADS)

    Levit, C.; Gazis, P. R.

    2006-05-01

    Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our ability to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10⁵ samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10⁶-10⁸.

  1. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
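    The subsampling experiment described here is straightforward to reproduce in outline. The sketch below simulates ontogenetic data with a known allometric slope on log-log axes and records how often the null hypothesis of isometry (slope = 1) survives at small n, i.e. the Type II error rate; the slope, noise level, and size range are invented, not the Alligator values.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def type2_rate(n, slope=1.08, noise=0.06, reps=5_000, alpha=0.05):
          """Fraction of simulated samples of size n in which a true allometric
          relationship goes undetected (isometry, slope == 1, is not rejected)."""
          misses = 0
          for _ in range(reps):
              logx = rng.uniform(0.0, 1.5, n)              # e.g. log skull length
              logy = slope * logx + rng.normal(0.0, noise, n)
              fit = stats.linregress(logx, logy)
              t_stat = (fit.slope - 1.0) / fit.stderr      # test against slope 1
              p = 2 * stats.t.sf(abs(t_stat), n - 2)
              misses += p >= alpha
          return misses / reps

      for n in (5, 10, 20, 40, 80):
          print(f"n = {n:3d}: Type II error ~ {type2_rate(n):.2f}")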

  2. Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections

    NASA Astrophysics Data System (ADS)

    Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.

    2009-03-01

    As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, Electric Discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the response from the EDM notches and grown fatigue cracks.

  3. Characterization of bottom sediments in the Río de la Plata estuary

    NASA Astrophysics Data System (ADS)

    Simionato, Claudia G.; Moreira, Diego

    2016-04-01

    Bottom sediments and surface water samples were collected in the intermediate and outer Río de la Plata Estuary during 2009-2010, in six repeated cruises with 26 stations each. Samples were processed for grain size using a laser particle size analyzer, and for water and organic matter contents. The aim of this work is to analyze this data set to provide a comprehensive and objective characterization of the bottom sediment distribution, to study its composition and to progress in the construction of a conceptual model of the physical mechanisms involved. Principal Components Analysis is applied to the bottom sediment size histograms to investigate the spatial patterns. Variations in grain-size parameters contain information on possible sediment transport patterns, which were analyzed by means of trend vectors. Sediments show a gradational arrangement of textures, with sand dominant at the head, silt in the intermediate estuary, and clayey silt and clay at the mouth; textures become progressively more poorly sorted offshore, and the water and organic matter contents increase. These patterns appear strongly related to the geometry and the hydrodynamics of the estuary. Along the northern coast of the intermediate estuary, well-sorted medium and fine silt predominates, whereas on the southern coast coarser and less sorted silt prevails, due to differences in tidal currents and/or in water pathways. Around Barra del Indio, clay prevails over silt and sand, and the water and organic matter contents reach a maximum, probably due to flocculation and the reduction of the currents. Immediately seaward of the salt wedge, net transport reverses its direction and well-sorted coarser sand from the adjacent shelf dominates. Relict sediment is observed around the Santa Lucía River, consisting of poorly sorted fine silt and clay. The inferred net transport suggests convergence at the Barra del Indio shoal, which is consistent with the steady growth of the banks.

  4. Simulating thermal stress features on hot planetary surfaces in vacuum at high temperature facility in the PEL laboratory

    NASA Astrophysics Data System (ADS)

    Maturilli, A.; Ferrari, S.; Helbert, J.; D'Incecco, P.; D'Amore, M.

    2011-12-01

    In the Planetary Emissivity Laboratory (PEL) at the Institute for Planetary Research of the German Aerospace Center (DLR) in Berlin, we set up a simulation chamber for the spectroscopic investigation of mineral separates under Mercurial conditions. The chamber can be evacuated to 10⁻⁴ bar and the target samples heated to 700 K within a few minutes, thanks to the innovative inductive heating system. While developing the protocol for the high temperature spectroscopy measurements we discovered interesting "morphologies" on the sample surfaces. The powders are poured into stainless steel cups of 50 mm internal diameter, 8 mm height and 3 mm depth, having a 5 mm thick base (thus leaving 3 mm free space for the minerals), and a rim 1 mm thick. We selected several minerals of interest for Mercurial surface composition and for each of them we analyzed various grain size separates, to study the influence of grain dimensions on the process of thermal stressing. We observed that for the smaller grain size separate (0-25 μm) the thermal stress mainly induces large depressions and fractures, while on larger grain sizes (125-250 μm) it produces small depressions and a cratered surface. Our current working hypothesis is that these features are mainly caused by thermal stress induced by a radiatively quickly cooling surface layer covering the much hotter bulk material. Further investigation is ongoing to understand the processes better. The observed morphologies exhibit surprising similarities to features observed at planetary scale, for example on Mercury and even on Venus. In particular, the high resolution images currently provided by MESSENGER's Mercury Dual Imaging System (MDIS) instrument have revealed plains dominated by polygonal fractures whose origin still has to be determined. Our laboratory analogue studies might in the future provide some insight into the processes creating those features.

  5. Study into the correlation of dominant pore throat size and SIP relaxation frequency

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Sabine; Prinz, Carsten; Zimathies, Annett

    2016-12-01

    There is currently a debate within the SIP community about the characteristic textural length scale controlling the relaxation time of consolidated porous media. One idea is that the relaxation time is dominated by the pore throat size distribution, or more specifically the modal pore throat size as determined in mercury intrusion capillary pressure tests. Recently, new studies on inverting pore size distributions from SIP data were published, implying that the relaxation mechanisms and controlling length scale are well understood. In contrast, new analytical model studies based on the Marshall-Madden membrane polarization theory suggested that two relaxation processes might compete: one along the short narrow pore (the throat) and one across the wider pore, in case the narrow pores become relatively long. This paper presents a first systematically focused study of the relationship between pore throat sizes and SIP relaxation times. The generality of predicted trends is investigated across a wide range of materials differing considerably in chemical composition, specific surface and pore space characteristics. Three different groups of relaxation behaviors can be clearly distinguished. The different behaviors are related to clay content and type, carbonate content, and the size of the grains and the wide pores in the samples.

  6. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
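    The gap-filling step before aggregation can be illustrated with a simple inverse-distance-weighted (IDW) interpolation. This is only one plausible choice of interpolator, not necessarily the one used in the study, and the parish coordinates and cattle counts below are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)

      def idw_fill(xy_known, v_known, xy_missing, power=2.0):
          """Inverse-distance-weighted interpolation: estimate values for
          non-sampled units from sampled ones before aggregating upward."""
          d = np.linalg.norm(xy_missing[:, None, :] - xy_known[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-9) ** power
          return (w * v_known).sum(axis=1) / w.sum(axis=1)

      # Toy landscape: 200 sampled parish centroids, 50 unsampled ones.
      xy_s = rng.uniform(0, 100, (200, 2))
      v_s = rng.poisson(400, 200).astype(float)   # cattle counts (synthetic)
      xy_u = rng.uniform(0, 100, (50, 2))

      filled = idw_fill(xy_s, v_s, xy_u)
      district_total = v_s.sum() + filled.sum()   # aggregate after filling gaps
      print(round(district_total))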

  7. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
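    The determinants listed above combine into the familiar closed-form formula for comparing two means. A small Python sketch of that textbook formula (the numbers in the example are arbitrary):

      from math import ceil
      from scipy.stats import norm

      def n_two_means(delta, sigma, alpha=0.05, power=0.80):
          """n per group to detect a true difference delta between two means with
          common sd sigma: n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          return ceil(2 * (sigma * (za + zb) / delta) ** 2)

      # Smaller effects, larger variance, or higher power all inflate n:
      print(n_two_means(delta=5, sigma=10))             # 63 per group
      print(n_two_means(delta=5, sigma=10, power=0.9))  # 85 per group
      print(n_two_means(delta=2.5, sigma=10))           # 4x larger: 252 per group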

  8. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the ways in which the data for these calculations may be derived are discussed.

  9. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests on data with a one-factor, two-level design, covering both quantitative and qualitative data; it presents the sample size estimation formulas and their realization through both the formulas themselves and the POWER procedure of SAS software. In addition, the article presents worked examples, which should help researchers implement the repetition principle during the research design phase.

  10. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, an optimal sample size for these projects should be aimed for from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  11. Otolith analysis of pre-restoration habitat use by Chinook salmon in the delta-flats and nearshore regions of the Nisqually River Estuary

    USGS Publications Warehouse

    Lind-Null, Angie; Larsen, Kim

    2010-01-01

    The Nisqually Fall Chinook population is one of 27 salmon stocks in the Puget Sound (Washington) evolutionarily significant unit listed as threatened under the federal Endangered Species Act (ESA). Extensive restoration of the Nisqually River delta ecosystem is currently taking place to assist in recovery of the stock, as juvenile Fall Chinook salmon are dependent on the estuary. A pre-restoration baseline that includes the characterization of life history strategies, estuary residence times, growth rates, and habitat use is needed to evaluate the potential response of hatchery and natural origin Chinook salmon to restoration efforts and to determine restoration success. Otolith analysis was selected as a tool to examine Chinook salmon life history, growth, and residence in the Nisqually River estuary. Previously funded work on samples collected in 2004 (marked and unmarked) and 2005 (unmarked only) partially established a juvenile baseline on growth rates and length of residence associated with various habitats (freshwater, forested riverine tidal, emergent forested transition, estuarine emergent marsh, delta-flats and nearshore). However, residence times and growth rates for the delta-flats (DF) and nearshore (NS) habitats have been minimally documented due to small sample sizes. The purpose of the current study is to incorporate otolith microstructural analysis using otoliths from fish collected within the DF and NS habitats during sampling years 2004-08 to increase sample size and further evaluate between-year variation in otolith microstructure. Our results from this analysis indicated that the delta-flats check (DFCK) on unmarked and marked Chinook samples in 2005-08 varied slightly in appearance from that seen on samples previously analyzed only from 2004. A fry migrant life history was observed on otoliths of unmarked Chinook collected in 2005, 2007, and 2008. Mean freshwater increment width of unmarked fish was generally smaller than that of marked Chinook, followed by the tidal delta and DF/NS portions, respectively. On average, the complete tidal delta growth rate was higher for marked Chinook than for unmarked Chinook. The DF/NS growth rate was highest for both unmarked and marked Chinook during 2008 compared to all other sampling years. The average DF/NS growth rate of unmarked Chinook was consistently lower than that of marked Chinook during all years; however, sample sizes were small during some years. Unmarked Chinook, on average, spent longer in the tidal delta than marked Chinook. Our results from this report suggest that otolith microstructural analysis can be a valuable tool in establishing baseline information on the utilization of Nisqually River estuary habitats by juvenile Chinook salmon prior to the newly funded restoration efforts.

  12. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  13. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  14. Comparative microstructure study of oil palm fruit bunch fibre, mesocarp and kernels after microwave pre-treatment

    NASA Astrophysics Data System (ADS)

    Chang, Jessie S. L.; Chan, Y. S.; Law, M. C.; Leo, C. P.

    2017-07-01

    The implementation of microwave technology in palm oil processing offers numerous advantages; besides eliminating polluted palm oil mill effluent, it also reduces energy consumption, processing time and space. However, microwave exposure can damage a material's microstructure, affecting fruit quality attributes that are related to physical structure, including texture and appearance. In this work, empty fruit bunches, mesocarp and kernel were microwave-dried and their respective microstructures were examined. The microwave pretreatments were conducted at 100 W and 200 W, and the microstructures of both treated and untreated samples were evaluated using a scanning electron microscope. The micrographs demonstrated that microwaves do not significantly influence the kernel and mesocarp, but a noticeable change was found in the empty fruit bunches, where the sizes of the granular starch were reduced and a small portion of the silica bodies were disrupted. From the experimental data, microwave irradiation was shown to be most efficiently applied to empty fruit bunches, followed by mesocarp and kernel, as significant weight loss and size reduction were observed after the microwave treatments. The current work showed that microwave treatment did not change the physical surfaces of the samples, although sample shrinkage was observed.

  15. Understanding resilience in same-sex parented families: the work, love, play study

    PubMed Central

    2010-01-01

    Background While families headed by same-sex couples have achieved greater public visibility in recent years, there are still many challenges for these families in dealing with legal and community contexts that are not supportive of same-sex relationships. The Work, Love, Play study is a large longitudinal study of same-sex parents. It aims to investigate many facets of family life among this sample and examine how they change over time. The study focuses specifically on two key areas missing from the current literature: factors supporting resilience in same-sex parented families; and health and wellbeing outcomes for same-sex couples who undergo separation, including the negotiation of shared parenting arrangements post-separation. The current paper aims to provide a comprehensive overview of the design and methods of this longitudinal study and discuss its significance. Methods/Design The Work, Love, Play study is a mixed design, three wave, longitudinal cohort study of same-sex attracted parents. The sample includes lesbian, gay, bisexual and transgender parents in Australia and New Zealand (including single parents within these categories) caring for any children under the age of 18 years. The study will be conducted over six years from 2008 to 2014. Quantitative data are to be collected via three on-line surveys in 2008, 2010 and 2012 from the cohort of parents recruited in Wave 1. Qualitative data will be collected via interviews with purposively selected subsamples in 2012 and 2013. Data collection began in 2008 and 355 respondents to Wave One of the study have agreed to participate in future surveys. Work is currently underway to increase this sample size. The methods and survey instruments are described. Discussion This study will make an important contribution to the existing research on same-sex parented families. Strengths of the study design include the longitudinal method, which will allow understanding of changes over time within internal family relationships and social supports. Further, the mixed method design enables triangulation of qualitative and quantitative data. A broad recruitment strategy has already enabled a large sample size with the inclusion of both gay men and lesbians. PMID:20211027

  16. Understanding resilience in same-sex parented families: the work, love, play study.

    PubMed

    Power, Jennifer J; Perlesz, Amaryll; Schofield, Margot J; Pitts, Marian K; Brown, Rhonda; McNair, Ruth; Barrett, Anna; Bickerdike, Andrew

    2010-03-09

    While families headed by same-sex couples have achieved greater public visibility in recent years, there are still many challenges for these families in dealing with legal and community contexts that are not supportive of same-sex relationships. The Work, Love, Play study is a large longitudinal study of same-sex parents. It aims to investigate many facets of family life among this sample and examine how they change over time. The study focuses specifically on two key areas missing from the current literature: factors supporting resilience in same-sex parented families; and health and wellbeing outcomes for same-sex couples who undergo separation, including the negotiation of shared parenting arrangements post-separation. The current paper aims to provide a comprehensive overview of the design and methods of this longitudinal study and discuss its significance. The Work, Love, Play study is a mixed design, three wave, longitudinal cohort study of same-sex attracted parents. The sample includes lesbian, gay, bisexual and transgender parents in Australia and New Zealand (including single parents within these categories) caring for any children under the age of 18 years. The study will be conducted over six years from 2008 to 2014. Quantitative data are to be collected via three on-line surveys in 2008, 2010 and 2012 from the cohort of parents recruited in Wave 1. Qualitative data will be collected via interviews with purposively selected subsamples in 2012 and 2013. Data collection began in 2008 and 355 respondents to Wave One of the study have agreed to participate in future surveys. Work is currently underway to increase this sample size. The methods and survey instruments are described. This study will make an important contribution to the existing research on same-sex parented families. Strengths of the study design include the longitudinal method, which will allow understanding of changes over time within internal family relationships and social supports. Further, the mixed method design enables triangulation of qualitative and quantitative data. A broad recruitment strategy has already enabled a large sample size with the inclusion of both gay men and lesbians.

  17. Toward Advancing Nano-Object Count Metrology: A Best Practice Framework

    PubMed Central

    Boyko, Volodymyr; Meyers, Greg; Voetz, Matthias; Wohlleben, Wendel

    2013-01-01

    Background: A movement among international agencies and policy makers to classify industrial materials by their number content of sub–100-nm particles could have broad implications for the development of sustainable nanotechnologies. Objectives: Here we highlight current particle size metrology challenges faced by the chemical industry due to these emerging number percent content thresholds, provide a suggested best-practice framework for nano-object identification, and identify research needs as a path forward. Discussion: Harmonized methods for identifying nanomaterials by size and count for many real-world samples do not currently exist. Although particle size remains the sole discriminating factor for classifying a material as “nano,” inconsistencies in size metrology will continue to confound policy and decision making. Moreover, there are concerns that the casting of a wide net with still-unproven metrology methods may stifle the development and judicious implementation of sustainable nanotechnologies. Based on the current state of the art, we propose a tiered approach for evaluating materials. To enable future risk-based refinements of these emerging definitions, we recommend that this framework also be considered in environmental and human health research involving the implications of nanomaterials. Conclusion: Substantial scientific scrutiny is needed in the area of nanomaterial metrology to establish best practices and to develop suitable methods before implementing definitions based solely on number percent nano-object content for regulatory purposes. Strong cooperation between industry, academia, and research institutions will be required to fully develop and implement detailed frameworks for nanomaterial identification with respect to emerging count-based metrics. Citation: Brown SC, Boyko V, Meyers G, Voetz M, Wohlleben W. 2013. Toward advancing nano-object count metrology: a best practice framework. Environ Health Perspect 121:1282–1291; http://dx.doi.org/10.1289/ehp.1306957 PMID:24076973

  18. Does anodal transcranial direct current stimulation modulate sensory perception and pain? A meta-analysis study.

    PubMed

    Vaseghi, B; Zoghi, M; Jaberzadeh, S

    2014-09-01

    The primary aim of this systematic review was to evaluate the effects of anodal transcranial direct current stimulation (a-tDCS) on sensory (STh) and pain thresholds (PTh) in healthy individuals and pain levels (PL) in patients with chronic pain. Electronic databases were searched for a-tDCS studies. Methodological quality was examined using the PEDro and Downs and Black (D&B) assessment tools. a-tDCS of the primary motor cortex (M1) increases both STh (P<0.005, effect size 22.19%) and PTh (P<0.001, effect size 19.28%). In addition, STh was increased by a-tDCS of the primary sensory cortex (S1) (P<0.05, effect size 4.34). Likewise, PL decreased significantly in the patient group following application of a-tDCS to both the M1 and dorsolateral prefrontal cortex (DLPFC). The average decrease in visual analogue score was 14.9% and 19.3% after applying a-tDCS over the M1 and DLPFC, respectively. Moreover, meta-analysis showed that in all subgroups (except a-tDCS of S1) active a-tDCS and sham stimulation produced significant differences. This review provides evidence for the effectiveness of a-tDCS in increasing STh/PTh in healthy individuals and decreasing PL in patients. However, because of the small sample sizes in the included studies, and because the level of blinding was not considered in the inclusion criteria, these results should be interpreted with caution. The site of stimulation appears to have a differential effect on pain relief. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  19. New non-linear photovoltaic effect in uniform bipolar semiconductor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volovichev, I.

    2014-11-21

    A linear theory of the new non-linear photovoltaic effect in the closed circuit consisting of a non-uniformly illuminated uniform bipolar semiconductor with neutral impurities is developed. The non-uniform photo-excitation of impurities results in a position-dependent current carrier mobility that breaks the semiconductor homogeneity and induces the photo-electromotive force (emf). As both the electron (or hole) mobility gradient and the current carrier generation rate depend on the light intensity, the photo-emf and the short-circuit current prove to be non-linear functions of the incident light intensity at an arbitrarily low illumination. The influence of the sample size on the photovoltaic effect magnitude is studied. Physical relations and distinctions between the considered effect and the Dember and bulk photovoltaic effects are also discussed.

  20. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
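
    The role of the ratio between the dimension and the product of sample size and spike size can be illustrated numerically. The following toy simulation (our own sketch of a single-spike Gaussian model, not the authors' code) tracks the angle between the leading sample and population eigenvectors as the dimension grows with the sample size fixed:

    ```python
    # Angle between leading sample and population eigenvectors under a
    # one-spike model: variance spike+1 along e_1, variance 1 elsewhere.
    import numpy as np

    rng = np.random.default_rng(0)

    def leading_angle(n, d, spike):
        X = rng.standard_normal((n, d))   # mean-zero Gaussian data
        X[:, 0] *= np.sqrt(spike + 1.0)   # inflate variance along e_1
        # leading right singular vector of X = leading eigenvector of X'X/n
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        cos = min(abs(Vt[0, 0]), 1.0)     # |<v_hat, e_1>|
        return np.degrees(np.arccos(cos))

    # The ratio d / (n * spike) governs consistency: the angle grows with it
    for d in (100, 1_000, 10_000):
        print(d, round(leading_angle(n=20, d=d, spike=50.0), 1))
    ```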

  1. Genetic differentiation of spring-spawning and fall-spawning male Atlantic sturgeon in the James River, Virginia

    PubMed Central

    Balazik, Matthew T.; Farrae, Daniel J.; Darden, Tanya L.; Garman, Greg C.

    2017-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus, Acipenseridae) populations are currently at severely depleted levels due to historic overfishing, habitat loss, and pollution. The importance of biologically correct stock structure for effective conservation and management efforts is well known. Recent improvements in our understanding of Atlantic sturgeon migrations, movement, and the occurrence of putative dual spawning groups lead to questions regarding the true stock structure of this endangered species. In the James River, VA, specifically, captures of spawning Atlantic sturgeon and accompanying telemetry data suggest there are two discrete spawning groups of Atlantic sturgeon. The two putative spawning groups were genetically evaluated using a powerful microsatellite marker suite to determine if they are genetically distinct. Specifically, this study evaluates the genetic structure, characterizes the genetic diversity, estimates effective population size, and measures inbreeding of Atlantic sturgeon in the James River. The results indicate that fall- and spring-spawning James River Atlantic sturgeon groups are genetically distinct (overall FST = 0.048, F′ST = 0.181) with little admixture between the groups. The observed levels of genetic diversity and effective population sizes, along with the lack of detected inbreeding, all indicated that the James River has two genetically healthy populations of Atlantic sturgeon. The study also demonstrates that samples from adult Atlantic sturgeon, with proper sample selection criteria, can be informative when creating reference population databases. The presence of two genetically distinct spawning groups of Atlantic sturgeon within the James River raises concerns about the current genetic assignment used by managers. Other nearby rivers may also have dual spawning groups that either are not accounted for or are pooled in reference databases. Our results represent the second documentation of genetically distinct dual spawning groups of Atlantic sturgeon in river systems along the U.S. Atlantic coast, suggesting that current reference population databases should be updated to incorporate both new samples and our increased understanding of Atlantic sturgeon life history. PMID:28686610

  2. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
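
    A toy version of this kind of simulation (our own sketch; the plot size, clump parameters and values are invented, not the authors' GIS implementation) illustrates how clumping depresses detection probability:

    ```python
    # Detection probability of core sampling for random vs. clumped items
    # on a 1 m x 1 m plot (edge effects ignored for brevity).
    import numpy as np

    rng = np.random.default_rng(1)

    def detection_probability(density, core_area_cm2, n_cores,
                              clumped=False, reps=200):
        """Mean probability that a core captures >= 1 item."""
        r = np.sqrt(core_area_cm2 / 1e4 / np.pi)   # core radius in metres
        out = []
        for _ in range(reps):
            n_items = rng.poisson(density)          # items in the 1 m^2 plot
            if clumped:
                parents = rng.uniform(0, 1, size=(max(n_items // 50, 1), 2))
                centers = parents[rng.integers(len(parents), size=n_items)]
                pts = centers + rng.normal(0, 0.05, size=(n_items, 2))
            else:
                pts = rng.uniform(0, 1, size=(n_items, 2))
            cores = rng.uniform(0, 1, size=(n_cores, 2))
            d = np.linalg.norm(pts[None, :, :] - cores[:, None, :], axis=2)
            out.append((d < r).any(axis=1).mean())  # fraction of cores with a hit
        return float(np.mean(out))

    # 45 cm^2 cores, 20 per plot, 1000 items/m^2: clumping lowers detection
    print(detection_probability(1000, 45, 20))
    print(detection_probability(1000, 45, 20, clumped=True))
    ```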

  3. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
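
    For intuition on size-biased sampling, here is a brief sketch (our own illustration via importance resampling; the parameter values are assumptions) of drawing a length-biased sample from a Weibull population:

    ```python
    # Length-biased sampling from a Weibull population via importance
    # resampling: the weighted density is f*(x) = x f(x) / E[X].
    import numpy as np

    rng = np.random.default_rng(2)

    def length_biased_weibull(n, shape, scale, pool=200_000):
        x = scale * rng.weibull(shape, size=pool)  # unweighted population draws
        return rng.choice(x, size=n, p=x / x.sum())

    biased = length_biased_weibull(10_000, shape=1.5, scale=2.0)
    population = 2.0 * rng.weibull(1.5, size=200_000)
    # The length-biased mean E[X^2]/E[X] exceeds the population mean E[X]
    print(round(population.mean(), 3), round(biased.mean(), 3))
    ```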

  4. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors

    PubMed Central

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for children studies. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created by the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. Thereby, we proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  5. Particulate matter emission by a vehicle running on unpaved road

    NASA Astrophysics Data System (ADS)

    Williams, David Scott; Shukla, Manoj K.; Ross, Jim

    2008-05-01

    The particulate matter (PM) emission from unpaved roads starts with the pulverization of surface material by the force of the vehicle, uplifting and subsequent exposure of the road to strong air currents behind the wheels. The objectives of the project were to: demonstrate the utility of a simple technique for collecting suspended airborne PM emitted by a vehicle running on an unpaved road, determine the mass balance of airborne PM at different heights, and determine the particle size and elemental composition of PM. We collected dust samples on sticky tapes using a rotorod sampler mounted on a tower across an unpaved road located at the Leyendecker Plant Sciences Research Center, Las Cruces, NM, USA. Dust samples were collected at 1.5, 4.5 and 6 m height above the ground surface on the east and west sides of the road. One rotorod sampler was also installed at the centre of the road at 6 m height. Dust samples from the unpaved road were mostly (70%) silt and clay-sized particles and were collected at all heights. The height and width of the PM plume and the amount of clay-sized particles captured on both sides of the road increased with speed, and the particles captured ranged from 0.05 to 159 μm. Dust particles between PM10 and PM2.5 did not correlate with vehicle speed but particles ⩽PM2.5 did. Emission factors estimated for the total suspended PM were 10147 g km-1 at 48 km h-1 and 11062 g km-1 at 64 km h-1, respectively. The predominant elements detected in PM were carbon, aluminum and silica at all heights. Overall, the sticky tape method coupled with electron microscopy was a useful technique for rapid particle size and elemental characterization of airborne PM.

  6. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
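
    The gain from optimal (Neyman) allocation over simple random sampling can be sketched as follows (the strata means, SDs and weights are illustrative, not the paper's values):

    ```python
    # Variance of the estimated mean under simple random sampling (SRS) vs.
    # stratified sampling with Neyman allocation (illustrative strata values).
    import numpy as np

    mu = np.array([20.0, 15.0, 10.0])  # stratum means (% soil moisture)
    sd = np.array([4.0, 2.5, 1.0])     # within-stratum SDs
    W = np.array([0.5, 0.3, 0.2])      # stratum weights (fractions of area)
    n = 30                             # total number of samples

    # Total population variance = within-stratum + between-stratum components
    mu_bar = W @ mu
    S2 = W @ (sd**2 + (mu - mu_bar)**2)
    var_srs = S2 / n                   # ignoring finite-population correction

    # Neyman allocation: n_h proportional to W_h * sd_h
    n_h = n * (W * sd) / (W * sd).sum()
    var_strat = np.sum(W**2 * sd**2 / n_h)

    print(round(var_srs, 3), round(var_strat, 3))  # stratified is smaller
    ```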

  7. Tandem array of nanoelectronic readers embedded coplanar to a fluidic nanochannel for correlated single biopolymer analysis

    PubMed Central

    Lesser-Rojas, Leonardo; Sriram, K. K.; Liao, Kuo-Tang; Lai, Shui-Chin; Kuo, Pai-Chia; Chu, Ming-Lee; Chou, Chia-Fu

    2014-01-01

    We have developed a two-step electron-beam lithography process to fabricate a tandem array of three pairs of tip-like gold nanoelectronic detectors with electrode gap size as small as 9 nm, embedded in a coplanar fashion to 60 nm deep, 100 nm wide, and up to 150 μm long nanochannels coupled to a world-micro-nanofluidic interface for easy sample introduction. Experimental tests with a sealed device using DNA-protein complexes demonstrate the coplanarity of the nanoelectrodes to the nanochannel surface. Further, this device could improve transverse current detection by correlated time-of-flight measurements of translocating samples, and serve as an autocalibrated velocimeter and nanoscale tandem Coulter counters for single molecule analysis of heterogeneous samples. PMID:24753731

  8. Sampling populations of humans across the world: ELSI issues.

    PubMed

    Knoppers, Bartha Maria; Zawati, Ma'n H; Kirby, Emily S

    2012-01-01

    There are an increasing number of population studies collecting data and samples to illuminate gene-environment contributions to disease risk and health. The rising affordability of innovative technologies capable of generating large amounts of data helps achieve statistical power and has paved the way for new international research collaborations. Most data and sample collections can be grouped into longitudinal, disease-specific, or residual tissue biobanks, with accompanying ethical, legal, and social issues (ELSI). Issues pertaining to consent, confidentiality, and oversight cannot be examined using a one-size-fits-all approach: the particularities of each biobank must be taken into account. It remains to be seen whether current governance approaches will be adequate to handle the impact of next-generation sequencing technologies on communication with participants in population biobanking studies.

  9. Novel Insights in the Fecal Egg Count Reduction Test for Monitoring Drug Efficacy against Soil-Transmitted Helminths in Large-Scale Treatment Programs

    PubMed Central

    Levecke, Bruno; Speybroeck, Niko; Dobson, Robert J.; Vercruysse, Jozef; Charlier, Johannes

    2011-01-01

    Background The fecal egg count reduction test (FECRT) is recommended to monitor drug efficacy against soil-transmitted helminths (STHs) in public health. However, the impact of factors inherent to study design (sample size and detection limit of the fecal egg count (FEC) method) and host-parasite interactions (mean baseline FEC and aggregation of FEC across host population) on the reliability of FECRT is poorly understood. Methodology/Principal Findings A simulation study was performed in which FECRT was assessed under varying conditions of the aforementioned factors. Classification trees were built to explore critical values for these factors required to obtain conclusive FECRT results. The outcome of this analysis was subsequently validated on five efficacy trials across Africa, Asia, and Latin America. Unsatisfactory (<85.0%) sensitivity and specificity results to detect reduced efficacy were found if sample sizes were small (<10) or if sample sizes were moderate (10–49) combined with highly aggregated FEC (k<0.25). FECRT remained inconclusive under any evaluated condition for drug efficacies ranging from 87.5% to 92.5% for a reduced-efficacy-threshold of 90% and from 92.5% to 97.5% for a threshold of 95%. The most discriminatory study design required 200 subjects independent of STH status (including subjects who are not excreting eggs). For this sample size, the detection limit of the FEC method and the level of aggregation of the FEC did not affect the interpretation of the FECRT. Only for a threshold of 90%, mean baseline FEC <150 eggs per gram of stool led to a reduced discriminatory power. Conclusions/Significance This study confirms that the interpretation of FECRT is affected by a complex interplay of factors inherent to both study design and host-parasite interactions. The results also highlight that revision of the current World Health Organization guidelines to monitor drug efficacy is indicated. We, therefore, propose novel guidelines to support future monitoring programs. PMID:22180801
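
    A minimal sketch of the FECRT calculation on simulated data (our own illustration; the negative binomial parameterization by mean and aggregation k follows common practice, and all values are invented):

    ```python
    # Fecal egg count reduction on simulated data: FECR = 1 - mean(post)/mean(pre).
    import numpy as np

    rng = np.random.default_rng(3)

    def nb(mean, k, size):
        """Negative binomial draws parameterised by mean and aggregation k."""
        return rng.negative_binomial(k, k / (k + mean), size=size)

    n, k = 200, 0.25                         # subjects; small k = highly aggregated
    pre = nb(mean=150, k=k, size=n)          # baseline FEC (eggs per gram)
    post = nb(mean=150 * 0.12, k=k, size=n)  # ~88% true efficacy
    fecr = 1 - post.mean() / pre.mean()
    print(f"FECR = {100 * fecr:.1f}%")       # varies widely run to run at this k
    ```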

  10. Do impression management and self-deception distort self-report measures with content of dynamic risk factors in offender samples? A meta-analytic review.

    PubMed

    Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno

    Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate, via a meta-analytic review of published studies, the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors, using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related to self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect, with the trim-and-fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. More Power to OATP1B1: An Evaluation of Sample Size in Pharmacogenetic Studies Using a Rosuvastatin PBPK Model for Intestinal, Hepatic, and Renal Transporter‐Mediated Clearances

    PubMed Central

    Burt, Howard; Abduljalil, Khaled; Neuhoff, Sibylle

    2016-01-01

    Abstract Rosuvastatin is a substrate of choice in clinical studies of organic anion‐transporting polypeptide (OATP)1B1‐ and OATP1B3‐associated drug interactions; thus, understanding the effect of OATP1B1 polymorphisms on the pharmacokinetics of rosuvastatin is crucial. Here, physiologically based pharmacokinetic (PBPK) modeling was coupled with a power calculation algorithm to evaluate the influence of sample size on the ability to detect an effect (80% power) of OATP1B1 phenotype on pharmacokinetics of rosuvastatin. Intestinal, hepatic, and renal transporters were mechanistically incorporated into a rosuvastatin PBPK model using permeability‐limited models for intestine, liver, and kidney, respectively, nested within a full PBPK model. Simulated plasma rosuvastatin concentrations in healthy volunteers were in agreement with previously reported clinical data. Power calculations were used to determine the influence of sample size on study power while accounting for OATP1B1 haplotype frequency and abundance in addition to its correlation with OATP1B3 abundance. It was determined that 10 poor‐transporter and 45 intermediate‐transporter individuals are required to achieve 80% power to discriminate the AUC0‐48h of rosuvastatin from that of the extensive‐transporter phenotype. This number was reduced to 7 poor‐transporter and 40 intermediate‐transporter individuals when the reported correlation between OATP1B1 and 1B3 abundance was taken into account. The current study represents the first example in which PBPK modeling in conjunction with power analysis has been used to investigate sample size in clinical studies of OATP1B1 polymorphisms. This approach highlights the influence of interindividual variability and correlation of transporter abundance on study power and should allow more informed decision making in pharmacogenomic study design. PMID:27385171

  12. Comparison of fluvial suspended-sediment concentrations and particle-size distributions measured with in-stream laser diffraction and in physical samples

    NASA Astrophysics Data System (ADS)

    Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.

    2015-01-01

    Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In Situ Scattering and Transmissometry (LISST) for measuring volumetric SSCs and PSDs ranging from 1.8 to 415 μm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 data sets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-2012 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38-69,264 km2). An unrealistically low computed effective density (mass SSC/volumetric SSC) of 1.24 g/mL (95% confidence interval: 1.05-1.45 g/mL) provided the best-fit value (R2 = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC for over two orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of SSC that can be measured by the LISST-SL) despite being substantially lower than the sediment particle density of 2.67 g/mL (range: 2.56-2.87 g/mL, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measurable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.
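
    As a worked example of the reported conversion (the volumetric value below is invented for illustration):

    ```python
    # Converting LISST volumetric SSC to mass SSC with the best-fit effective
    # density of 1.24 g/mL reported in the abstract.
    volumetric_ssc_ul_per_l = 250.0     # volumetric SSC in microlitres per litre
    effective_density_g_per_ml = 1.24   # best-fit effective density from the study

    # 1 uL/L of sediment at rho g/mL contributes rho mg/L of mass concentration
    mass_ssc_mg_per_l = volumetric_ssc_ul_per_l * effective_density_g_per_ml
    print(mass_ssc_mg_per_l)            # 310.0 mg/L
    ```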

  13. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
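
    The Jensen's Inequality effect can be reproduced with a toy projection matrix (our own sketch; the two-stage matrix and fecundity value are invented, not the authors' model):

    ```python
    # Bias in lambda from sampling variance of vital rates (toy 2-stage model).
    import numpy as np

    rng = np.random.default_rng(4)

    def lam(s_juv, s_adult, fecundity=1.5):
        """Dominant eigenvalue of a simple 2-stage projection matrix."""
        A = np.array([[0.0, fecundity],
                      [s_juv, s_adult]])
        return np.max(np.real(np.linalg.eigvals(A)))

    true_s = 0.5                        # low survival, where bias was largest
    true_lambda = lam(true_s, true_s)

    for n in (10, 50, 1000):            # individuals sampled per vital rate
        est = [lam(rng.binomial(n, true_s) / n, rng.binomial(n, true_s) / n)
               for _ in range(2000)]
        print(n, round(np.mean(est) - true_lambda, 4))  # bias shrinks with n
    ```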

  14. New Manufacturing Method for Paper Filler and Fiber Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doelle, Klaus

    2011-06-26

    The use of fillers in printing and writing papers has become a prerequisite for competing in a global market to reduce the cost of materials. Use of calcium carbonates (ranging from 18% to 30%) as filler is a common practice in the paper industry, but the choice of filler for each type of paper varies widely according to its use. The market for uncoated digital printing paper is one that continues to show exciting growth projections, and it is important to understand the effect that different types of calcium carbonates have on the properties of paper made of 100% eucalyptus pulp. The current study is focused on selecting the most suitable commercially available calcium carbonate for the production of uncoated eucalyptus digital printing paper, targeting a potential filler increase of 5% above the currently used filler content. We made hand sheets using 13 different varieties of widely used calcium carbonates [nine samples of PCC (two rhombic and seven scalenohedral, covering a wide particle size range from 1.2 µm to 2.9 µm), and four samples of GCC (three anionic and one cationic, with a particle size range from 0.7 µm to 1.5 µm)] available in the market, followed by a 12-inch pilot plant paper machine run. The detailed analysis of the main structural, optical and strength properties of the hand sheets found that the most suitable calcium carbonate for uncoated eucalyptus digital printing paper production is scalenohedral PCC with a particle size of 1.9 µm, for its positive effects on thickness, stiffness, brightness and opacity of paper.

  15. Dust Composition in Climate Models: Current Status and Prospects

    NASA Astrophysics Data System (ADS)

    Pérez García-Pando, C.; Miller, R. L.; Perlwitz, J. P.; Kok, J. F.; Scanza, R.; Mahowald, N. M.

    2015-12-01

    Mineral dust created by wind erosion of soil particles is the dominant aerosol by mass in the atmosphere. It exerts significant effects on radiative fluxes, clouds, ocean biogeochemistry, and human health. Models that predict the lifecycle of mineral dust aerosols generally assume a globally uniform mineral composition. However, this simplification limits our understanding of the role of dust in the Earth system, since the effects of dust strongly depend on the particles' physical and chemical properties, which vary with their mineral composition. Hence, not only is a detailed understanding of the processes determining the dust emission flux needed, but also information about its size-dependent mineral composition. Determining the mineral composition of dust aerosols is complicated. The largest uncertainty derives from the current atlases of soil mineral composition. These atlases provide global estimates of soil mineral fractions, but they are based upon massive extrapolation of a limited number of soil samples, assuming that mineral composition is related to soil type. This disregards the potentially large variability of soil properties within each defined soil type. In addition, the analysis of these soil samples is based on wet sieving, a technique that breaks the aggregates found in the undisturbed parent soil. During wind erosion, these aggregates are subject to partial fragmentation, which generates differences in the size distribution and composition between the undisturbed parent soil and the emitted dust aerosols. We review recent progress on the representation of the mineral and chemical composition of dust in climate models. We discuss extensions of brittle fragmentation theory to prescribe the emitted size-resolved dust composition, and we identify key processes and uncertainties based upon model simulations and an unprecedented compilation of observations.

  16. Semi-automatic surface sediment sampling system - A prototype to be implemented in bivalve fishing surveys

    NASA Astrophysics Data System (ADS)

    Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.

    2018-01-01

    In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, overall there is a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to be deployed on a regular basis or over large areas. In view of these aspects, we developed an easy and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys, without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the Northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples using a traditional Van Veen grab (traditional method), which showed a grain size composition similar to that of the samples collected by the new method at the same localities. We recommend that the procedure be implemented on regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential application in different fields of research.

  17. Technical assessment of processing plants as exemplified by the sorting of beverage cartons from lightweight packaging wastes.

    PubMed

    Feil, A; Thoden van Velzen, E U; Jansen, M; Vitz, P; Go, N; Pretz, T

    2016-02-01

    The recovery of beverage cartons (BC) in three lightweight packaging waste processing plants (LP) was analyzed with different input materials and input masses in the range of 21-50 Mg. The data were generated by gravimetric determination of the sorting products, sampling and sorting analysis. Since the particle size of beverage cartons is larger than 120 mm, a modified sampling plan was implemented that targeted multiple sampling (3-11 individual samplings) and a total sample size of 1200 L (ca. 60 kg) for the BC products and of about 2400 L (ca. 120 kg) for the material-heterogeneous mixed plastics (MP) and sorting residue products. The results indicate that the quantification of the beverage carton yield in the process, i.e., by including all product-containing material streams, can be specified only with considerable fluctuation ranges. Consequently, the total assessment, regarding all product streams, is qualitative rather than quantitative. Irregular operating conditions as well as unfavorable sampling conditions and capacity overloads are likely causes of the high confidence intervals. From the results of the current study, recommendations can be derived for better sampling in LP-processing plants. Despite the suboptimal statistical results, the results indicate very clearly that the plants show definite optimisation potential with regard to the yield of beverage cartons as well as the required product purity. Due to the test character of the sorting trials, the plant parameterization was not ideal for this sorting task and consequently the results should be interpreted with care. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Investigation of mineral transformations and ash deposition during staged combustion. Quarterly technical progress report, April 1, 1997--June 30, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harb, J.N.

    This report describes work performed in the fifteenth quarter of a fundamental study to examine the effect of staged combustion on ash formation and deposition. Efforts this quarter included addition of a new cyclone for improved particle sampling and modification of the existing sampling probe. Particulate samples were collected under a variety of experimental conditions for both coals under investigation. Deposits formed from the Black Thunder coal were also collected. Particle size and composition from the Pittsburgh No. 8 ash samples support previously reported results. In addition, the authors' ability to distinguish char/ash associations has been refined and applied to a variety of ash samples from this coal. The results show a clear difference between the behavior of included and excluded pyrite, and provide insight into the extent of pyrite oxidation. Ash samples from the Black Thunder coal have also been collected and analyzed. Results indicate a significant difference in the particle size of "unclassifiable" particles for ash formed during staged combustion. A difference in composition also appears to be present and is currently under investigation. Finally, deposits were collected under staged conditions for the Black Thunder coal. Specifically, two deposits were formed under similar conditions and allowed to mature under either reducing or oxidizing conditions in natural gas. Differences between the samples due to curing were noted. In addition, both deposits showed skeletal ash structures which resulted from in-situ burnout of the char after deposition.

  19. Testing the equivalence of modern human cranial covariance structure: Implications for bioarchaeological applications.

    PubMed

    von Cramon-Taubadel, Noreen; Schroeder, Lauren

    2016-10-01

    Estimation of the variance-covariance (V/CV) structure of fragmentary bioarchaeological populations requires the use of proxy extant V/CV parameters. However, it is currently unclear whether extant human populations exhibit equivalent V/CV structures. Random skewers (RS) and hierarchical analyses of common principal components (CPC) were applied to a modern human cranial dataset. Cranial V/CV similarity was assessed globally for samples of individual populations (jackknifed method) and for pairwise population sample contrasts. The results were examined in light of potential explanatory factors for covariance difference, such as geographic region, among-group distance, and sample size. RS analyses showed that population samples exhibited highly correlated multivariate responses to selection, and that differences in RS results were primarily a consequence of differences in sample size. The CPC method yielded mixed results, depending upon the statistical criterion used to evaluate the hierarchy. The hypothesis-testing (step-up) approach was deemed problematic due to sensitivity to low statistical power and elevated Type I errors. In contrast, the model-fitting (lowest AIC) approach suggested that V/CV matrices were proportional and/or shared a large number of CPCs. Pairwise population sample CPC results were correlated with cranial distance, suggesting that population history explains some of the variability in V/CV structure among groups. The results indicate that patterns of covariance in human craniometric samples are broadly similar but not identical. These findings have important implications for choosing extant covariance matrices to use as proxy V/CV parameters in evolutionary analyses of past populations. © 2016 Wiley Periodicals, Inc.
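
    A minimal sketch of the random skewers method (our own illustration, not the authors' code):

    ```python
    # Random skewers: apply many random unit-length selection vectors (beta)
    # to two covariance matrices and correlate the predicted responses G @ beta.
    import numpy as np

    rng = np.random.default_rng(5)

    def random_skewers(G1, G2, n_skewers=1000):
        p = G1.shape[0]
        corr = []
        for _ in range(n_skewers):
            beta = rng.standard_normal(p)
            beta /= np.linalg.norm(beta)          # unit-length skewer
            z1, z2 = G1 @ beta, G2 @ beta         # predicted response vectors
            corr.append(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))
        return float(np.mean(corr))               # mean vector correlation

    # Toy example: a covariance matrix versus a mildly perturbed copy
    A = np.cov(rng.standard_normal((100, 6)), rowvar=False)
    B = A + 0.05 * np.eye(6)
    print(round(random_skewers(A, B), 3))          # near 1 = similar structure
    ```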

  20. The Role of ZnO Particle Size, Shape and Concentration on Liquid Crystal Order and Current-Voltage Properties for Potential Photovoltaic Applications

    NASA Astrophysics Data System (ADS)

    Martinez-Miranda, Luz J.; Branch, Janelle; Thompson, Robert; Taylor, Jefferson W.; Salamanca-Riba, Lourdes

    2012-02-01

    We investigate the role order plays in the transfer of charges in a ZnO nanoparticle - octylcyanobiphenyl (8CB) liquid crystal system for photovoltaic applications, as well as the role the nominally 7x5x5 nm^3 or 20x5x5 nm^3 ZnO nanoparticles play in improving that order. Our results for the 5 nm nanoparticles show an improvement in the alignment of the liquid crystal with increasing weight percentage of ZnO nanoparticles^1. Our results for the 7x5x5 nm^3 sample show that the current is larger than the current obtained for the 5 nm samples. We find that order is improved for concentrations close to 35 wt% ZnO for both the 7x5x5 nm^3 and 20x5x5 nm^3 samples. We have analyzed the X-ray scans for both the 7x5x5 and the 20x5x5 nm^3 samples. The signal corresponding to the liquid crystal aligned parallel to the substrate is much smaller than the peak corresponding to the liquid crystal aligned at approximately 70° with respect to the substrate for the 7x5x5 nm^3 sample, whereas this same peak is comparable or more intense for the 20x5x5 nm^3 sample. 1. L. J. Martínez-Miranda, Kaitlin M. Traister, Iriselies Meléndez-Rodríguez, and Lourdes Salamanca-Riba, Appl. Phys. Lett. 97, 223301 (2010).

  1. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.

  2. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study’s results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576
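
    Neither record spells out the combined test statistic, but the two-phase logic of the SPCD is straightforward to simulate. Below is a minimal, illustrative Monte Carlo power calculation, assuming a binary response, 1:1 randomization in each phase, unchanged response rates in phase 2 (so the enrichment effect is ignored), and the common simplification that the two phase-specific estimates are combined as an approximately independent weighted sum; the weight, response rates, and sample size are invented for illustration, not taken from the study.

    ```python
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(42)

    def spcd_z(n, p_drug, p_placebo, w=0.6):
        """One simulated SPCD trial with binary response.

        Phase 1: n subjects split 1:1 drug/placebo. Placebo
        non-responders are re-randomized 1:1 in phase 2. The two
        phase-specific differences in response rates are combined with
        weight w and treated as approximately independent (a common
        simplification); phase 2 reuses the same response rates, so
        this is only a skeleton of the design.
        """
        n1 = n // 2
        rd1 = rng.binomial(n1, p_drug)
        rp1 = rng.binomial(n1, p_placebo)
        n2 = max((n1 - rp1) // 2, 1)   # placebo non-responders, per phase-2 arm
        rd2 = rng.binomial(n2, p_drug)
        rp2 = rng.binomial(n2, p_placebo)

        def diff_var(rd, rp, m):
            a, b = rd / m, rp / m
            return a - b, max((a * (1 - a) + b * (1 - b)) / m, 1e-12)

        d1, v1 = diff_var(rd1, rp1, n1)
        d2, v2 = diff_var(rd2, rp2, n2)
        return (w * d1 + (1 - w) * d2) / np.sqrt(w**2 * v1 + (1 - w) ** 2 * v2)

    def empirical_power(n, p_drug, p_placebo, n_sim=4000, alpha=0.05):
        crit = NormalDist().inv_cdf(1 - alpha / 2)  # one-sided test at alpha/2
        return sum(spcd_z(n, p_drug, p_placebo) > crit for _ in range(n_sim)) / n_sim

    print(empirical_power(n=400, p_drug=0.45, p_placebo=0.30))
    ```

    Re-running such a simulation with deliberately over- or underestimated response rates mimics the mismatch scenarios the adaptive variants are designed to absorb.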

  3. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.

  4. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Pashayan, Nora; Yang, Jian

    2017-01-01

The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  5. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  6. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
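
    The minimum-detectable-difference suggestion can be made concrete. The sketch below is a hedged example, assuming approximately normal population estimates with known standard errors (the values used are hypothetical): it returns the smallest change between two estimates that a two-sided test at level alpha would detect with the stated power.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.8):
        """Smallest difference between two approximately normal population
        estimates (standard errors se1, se2) that a two-sided test at
        level alpha detects with the given power."""
        z = NormalDist().inv_cdf
        return (z(1 - alpha / 2) + z(power)) * sqrt(se1**2 + se2**2)

    # Example: two annual abundance estimates, each with SE = 50 animals.
    print(round(minimum_detectable_difference(50, 50), 1))  # about 198 animals
    ```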

  7. Eddy Current Testing for Detecting Small Defects in Thin Films

    NASA Astrophysics Data System (ADS)

    Obeid, Simon; Tranjan, Farid M.; Dogaru, Teodor

    2007-03-01

Presented here is a technique that uses an eddy-current-based giant magnetoresistance (GMR) sensor to detect minute surface and sub-layer defects in thin films. For surface crack detection, a measurement was performed on a copper metallization 5-10 microns thick by scanning the GMR sensor over the surface of a wafer that had two scratches, 0.2 mm and 2.5 mm in length, respectively. In another experiment, metal coatings were deposited over layers containing five defects of known lengths such that the defects were invisible from the surface. The limit of detection (resolution), in terms of defect size, of the GMR high-resolution eddy current probe was studied using this sample. Applications of eddy current testing include detecting defects in thin-film metallic layers and quality control of metallization layers on silicon wafers for integrated circuit manufacturing.

  8. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  9. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

Knowledge of the right sample size lets us judge whether results published in medical papers come from a suitable design and whether their conclusions are proper according to the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula to use, we must define what kind of study we have, that is, whether it is a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
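
    As a worked illustration of the quantities listed above (type I and II errors, variance, effect size), here is a minimal sketch of the standard normal-approximation formula for comparing two means; the effect size and standard deviation are made-up values.

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_per_group(delta, sigma, alpha=0.05, power=0.8):
        """Per-group sample size for a two-sided two-sample comparison of
        means, n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
        (normal approximation; a t correction would add a few subjects)."""
        z = NormalDist().inv_cdf
        return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma**2 / delta**2)

    # Detect a 5-unit mean difference, SD 10, at 80% power and alpha = 0.05:
    print(n_per_group(delta=5, sigma=10))  # 63 per group
    ```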

  10. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  11. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
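
    A standard way to compare richness across unequal sample sizes, closely related to the species-richness trend reported here, is individual-based rarefaction. The following is a hedged Monte Carlo sketch with hypothetical counts, not data from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rarefied_richness(counts, n_sub, n_rep=1000):
        """Expected species richness when n_sub individuals are drawn
        without replacement from a sample with the given per-species
        counts (individual-based rarefaction, Monte Carlo version)."""
        pool = np.repeat(np.arange(len(counts)), counts)
        reps = [len(np.unique(rng.choice(pool, size=n_sub, replace=False)))
                for _ in range(n_rep)]
        return float(np.mean(reps))

    # Hypothetical testate amoeba counts from one large sample:
    counts = [60, 25, 12, 8, 5, 3, 2, 1, 1, 1]
    for n in (30, 60, 100):
        print(n, round(rarefied_richness(counts, n), 1))  # richness grows with n
    ```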

  12. One Size Fits All? Applying Theoretical Predictions about Age and Emotional Experience to People with Functional Disabilities

    PubMed Central

    Piazza, Jennifer R.; Charles, Susan T.; Luong, Gloria; Almeida, David M.

    2015-01-01

    The current study examined whether commonly observed age differences in affective experience among community samples of healthy adults would generalize to a group of adults who live with significant functional disability. Age differences in daily affect and affective reactivity to daily stressors among a sample of participants with spinal cord injury were compared to a non-injured sample. Results revealed that patterns of affective experience varied by sample. Among non-injured adults, older age was associated with lower levels of daily negative affect (NA), higher levels of daily positive affect (PA), and less negative affective reactivity in response to daily stressors. In contrast, among participants with spinal cord injury, no age differences emerged. Findings, which support the model of Strength and Vulnerability Integration (SAVI), underscore the importance of taking life context into account when predicting age differences in affective well-being. PMID:26322552

  13. Experimental light scattering by small particles: system design and calibration

    NASA Astrophysics Data System (ADS)

    Maconi, Göran; Kassamakov, Ivan; Penttilä, Antti; Gritsevich, Maria; Hæggström, Edward; Muinonen, Karri

    2017-06-01

We describe a setup for precise multi-angular measurements of light scattered by mm- to μm-sized samples. We present a calibration procedure that ensures accurate measurements. Calibration is done using a spherical sample (d = 5 mm, n = 1.517) fixed on a static holder. The ultimate goal of the project is to allow accurate multi-wavelength measurements (the full Mueller matrix) of single-particle samples which are levitated ultrasonically. The system comprises a tunable multimode argon-krypton laser with 12 wavelengths ranging from 465 to 676 nm, a linear polarizer, a reference photomultiplier tube (PMT) monitoring beam intensity, and several PMTs mounted radially towards the sample at an adjustable radius. The current 150 mm radius allows measuring all azimuthal angles except for ±4° around the backward scattering direction. The measurement angle is controlled by a motor-driven rotational stage with an accuracy of 15'.

  14. Electrochemical alloying of immiscible Ag and Co for their structural and magnetic analyses

    NASA Astrophysics Data System (ADS)

    Santhi, Kalavathy; Kumarsan, Dhanapal; Vengidusamy, Naryanan; Arumainathan, Stephen

    2017-07-01

Electrochemical alloying of immiscible Ag and Co was carried out at different current densities from electrolytes of two different concentrations, after optimizing the electrolytic bath and operating conditions. The samples obtained were characterized using X-ray diffraction to confirm the simultaneous deposition of Ag and Co and to determine their crystallographic structure. The atomic percentages of Ag and Co in the granular alloy were determined by ICP-OES analysis. XPS spectra confirmed the presence of Ag and Co in metallic form in the granular alloy samples. Micrographs obtained using scanning and transmission electron microscopes shed light on the surface morphology and the size of the particles. The magnetic nature of the samples was analyzed at room temperature using a vibrating sample magnetometer. Their magnetic phase transition on heating was also studied to provide further evidence for the magnetic behaviour and the structure of the deposits.

  15. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  16. Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests

    Treesearch

    Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...

  17. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes , capture probabilities and low capture biases. For NGS-CR, sample

  18. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  19. Asymptotic Distributions of Coalescence Times and Ancestral Lineage Numbers for Populations with Temporally Varying Size

    PubMed Central

    Chen, Hua; Chen, Kun

    2013-01-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939

  20. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    PubMed

    Chen, Hua; Chen, Kun

    2013-07-01

The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - An(t) follows a Poisson distribution, and as m → n, n(n - 1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
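
    For the constant-size case these quantities are easy to check by simulation. The sketch below, with illustrative parameters and time measured in generations, draws the inter-coalescence times Tm (exponential with rate (m + 1)m/(2N) while m + 1 lineages remain) and compares the simulated mean TMRCA with the exact value 2N(1 - 1/n).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def coalescence_times(n, N):
        """Inter-coalescence times T_m for a sample of size n in a
        constant population of size N: while k = m + 1 lineages remain,
        T_m is exponential with rate k(k - 1)/(2N), in generations."""
        return [rng.exponential(2 * N / (k * (k - 1))) for k in range(n, 1, -1)]

    n, N, reps = 50, 10_000, 2000
    tmrca = [sum(coalescence_times(n, N)) for _ in range(reps)]
    print(np.mean(tmrca), 2 * N * (1 - 1 / n))  # simulated vs exact E[TMRCA]
    ```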

  1. Effect of heterogeneity on the characterization of cell membrane compartments: I. Uniform size and permeability.

    PubMed

    Hall, Damien

    2010-03-15

Observations of the motion of individual molecules in the membrane of a number of different cell types have led to the suggestion that the outer membrane of many eukaryotic cells may be effectively partitioned into microdomains. A major cause of this suggested partitioning is believed to be the direct/indirect association of the cytosolic face of the cell membrane with the cortical cytoskeleton. Such intimate association is thought to introduce effective hydrodynamic barriers into the membrane that are capable of frustrating molecular Brownian motion over distance scales greater than the average size of the compartment. To date, the standard analytical method for deducing compartment characteristics has relied on observing the random walk behavior of a labeled lipid or protein at various temporal frequencies and different total lengths of time. Simple theoretical arguments suggest that the presence of restrictive barriers imparts a characteristic turnover to a plot of mean squared displacement versus sampling period that can be interpreted to yield the average dimensions of the compartment, expressed as the respective side lengths of a rectangle. In this series of articles, we used computer simulation methods to investigate how well the conventional analytical strategy copes with heterogeneity in the size, shape, and barrier permeability of the cell membrane compartments. We also explored questions relating to the extent of sampling required (with regard to both the recorded time of a single trajectory and the number of trajectories included in the measurement bin) for faithful representation of the actual distribution of compartment sizes found using the single-particle tracking (SPT) technique. In the current investigation, we turned our attention to the analytical characterization of diffusion through cell membrane compartments having both a uniform size and permeability. For this ideal case, we found that (i) an optimum sampling time interval existed for the analysis and (ii) the total length of time for which a trajectory was recorded was a key factor. Copyright (c) 2009 Elsevier Inc. All rights reserved.
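
    The mean squared displacement analysis on which the compartment-size estimate rests can be sketched compactly. Below is an illustrative time-averaged MSD computation for a 2-D trajectory, with a purely Brownian toy track standing in for SPT data; for motion confined by compartment barriers the curve would roll over toward a plateau instead of growing linearly.

    ```python
    import numpy as np

    def msd(track, max_lag):
        """Time-averaged mean squared displacement of a 2-D trajectory
        (array of shape (T, 2)) for lags 1..max_lag."""
        track = np.asarray(track, dtype=float)
        return np.array([
            np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
            for lag in range(1, max_lag + 1)
        ])

    # Toy example: free 2-D Brownian motion, per-axis step SD 10 nm/frame.
    rng = np.random.default_rng(7)
    walk = np.cumsum(rng.normal(0.0, 10.0, size=(5000, 2)), axis=0)
    print(msd(walk, 5))  # approximately 200, 400, 600, 800, 1000 nm^2
    ```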

  2. Satisfaction with social networks: an examination of socioemotional selectivity theory across cohorts.

    PubMed

    Lansford, J E; Sherman, A M; Antonucci, T C

    1998-12-01

    This study examines L. L. Carstensen's (1993, 1995) socioemotional selectivity theory within and across three cohorts spanning 4 decades. Socioemotional selectivity theory predicts that as individuals age, they narrow their social networks to devote more emotional resources to fewer relationships with close friends and family. Data from 3 cohorts of nationally representative samples were analyzed to determine whether respondents' satisfaction with the size of their social networks differed by age, cohort, or both. Results support socioemotional selectivity theory: More older adults than younger adults were satisfied with the current size of their social networks rather than wanting larger networks. These findings are consistent across all cohorts. Results are discussed with respect to social relationships across the life course.

  3. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
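
    The Gauss-Seidel approximated AP update itself is not given in the abstract, so the sketch below substitutes a plain normalized LMS canceller with a crude predictability-based step-size control in the spirit of the proposed scheme; it is a simplified stand-in, not the authors' algorithm, and the path, signals, and parameters are all invented.

    ```python
    import numpy as np

    def feedback_canceller(x, d, L=32, mu0=0.1, eps=1e-6):
        """Normalized-LMS stand-in for an adaptive feedback canceller.
        The step size shrinks when the regressor is strongly predictable
        (tone-like input), echoing the idea of inhibiting adaptation on
        tonal signals; this is not the Gauss-Seidel AP algorithm."""
        w = np.zeros(L)
        e = np.zeros(len(x))
        for n in range(L - 1, len(x)):
            u = x[n - L + 1:n + 1][::-1]        # regressor, newest sample first
            e[n] = d[n] - w @ u                 # residual after cancellation
            energy = u @ u + eps
            rho = abs(u[:-1] @ u[1:]) / energy  # near 1 for tones, small for noise
            w += mu0 * (1 - rho) * e[n] * u / energy
        return w, e

    # Identify a short toy feedback path driven by white noise.
    rng = np.random.default_rng(3)
    x = rng.normal(size=20000)
    h = np.array([0.0, 0.5, -0.3, 0.1])
    d = np.convolve(x, h)[:len(x)]
    w, _ = feedback_canceller(x, d)
    print(np.round(w[:4], 2))  # approaches [0.0, 0.5, -0.3, 0.1]
    ```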

  4. Interim analysis: A rational approach of decision making in clinical trial.

    PubMed

    Kumar, Amal; Chakraborty, Bhaswat S

    2016-01-01

Interim analysis, especially of sizeable trials, keeps the decision process free of conflict of interest while considering cost, resources, and the meaningfulness of the project. Whenever necessary, such interim analysis can also call for potential termination or appropriate modification of sample size, study design, and even an early declaration of success. Given the extraordinary size and complexity of trials today, this rational approach helps to analyze and predict the outcomes of a clinical trial by incorporating what is learned during the course of a study or a clinical development program. Such an approach can also fill the gap between unmet medical needs and the interventions currently being tested by directing resources toward relevant and optimized clinical trials, rather than fulfilling only business and profit goals.

  5. Transgender Population Size in the United States: a Meta-Regression of Population-Based Probability Samples

    PubMed Central

    Sevelius, Jae M.

    2017-01-01

    Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask about transgender identity may account for residual heterogeneity in our models. Public health implications. Under- or nonrepresentation of transgender individuals in population surveys is a barrier to understanding social determinants and health disparities faced by this population. We recommend using standardized questions to identify respondents with transgender and nonbinary gender identities, which will allow a more accurate population size estimate. PMID:28075632

  6. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling unit shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  7. Advancing microwave technology for dehydration processing of biologics.

    PubMed

    Cellemme, Stephanie L; Van Vorst, Matthew; Paramore, Elisha; Elliott, Gloria D

    2013-10-01

Our prior work has shown that microwave processing can be effective as a method for dehydrating cell-based suspensions in preparation for anhydrous storage, yielding homogeneous samples with predictable and reproducible drying times. In the current work an optimized microwave-based drying process was developed that expands upon this previous proof-of-concept. Utilization of a commercial microwave (CEM SAM 255, Matthews, NC) enabled continuous drying at variable low power settings. A new turntable was manufactured from ultra-high-molecular-weight polyethylene (UHMW-PE; Grainger, Lake Forest, IL) to provide for drying of up to 12 samples at a time. The new process enabled rapid and simultaneous drying of multiple samples in containment devices suitable for long-term storage and aseptic rehydration of the sample. To determine sample repeatability and consistency of drying within the microwave cavity, a concentration series of aqueous trehalose solutions was dried for specific intervals and water content assessed using Karl Fischer titration at the end of each processing period. Samples were dried on Whatman S-14 conjugate release filters (Whatman, Maidstone, UK), a glass fiber membrane currently used in clinical laboratories. The filters were cut to size for use in a 13 mm Swinnex(®) syringe filter holder (Millipore(™), Billerica, MA). Samples of 40 μL volume could be dehydrated to the equilibrium moisture content by continuous processing at 20% power, with excellent sample-to-sample repeatability. The microwave-assisted procedure enabled high-throughput, repeatable drying of multiple samples, in a manner easily adaptable for drying a wide array of biological samples. Depending on the tolerance for sample heating, the drying time can be altered by changing the power level of the microwave unit.

  8. Electrical and magnetic properties of nano-sized magnesium ferrite

    NASA Astrophysics Data System (ADS)

    T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.

    2015-02-01

Nano-sized magnesium ferrite was synthesized using the sol-gel technique. Structural characterization was done using an X-ray diffractometer and a Fourier transform infrared spectrometer. A vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity. Particle size calculation shows the average crystallite size of the sample is 19 nm. FTIR analysis confirmed the spinel structure of the prepared samples. The magnetic measurement study shows that the sample is ferromagnetic with a high degree of isotropy. Hysteresis loops were traced at temperatures of 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.

  9. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation by a bootstrap procedure is presented for comparing two parallel-design arms with continuous data, for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are representative of the population to which the proposed trial plans to extrapolate.
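
    The bootstrap procedure described here reduces to resampling the pilot arms at each candidate sample size and counting rejections under the planned test. The sketch below is illustrative: it uses a two-sample t test (a Wilcoxon test can be swapped in for clearly non-normal data, as the paper does) and invents its own pilot data.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(11)

    def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05):
        """Estimated power at a candidate per-arm sample size, obtained by
        resampling pilot data with replacement and applying the same test
        planned for the final analysis."""
        hits = 0
        for _ in range(n_boot):
            a = rng.choice(pilot_a, size=n_per_arm, replace=True)
            b = rng.choice(pilot_b, size=n_per_arm, replace=True)
            hits += ttest_ind(a, b).pvalue < alpha
        return hits / n_boot

    # Hypothetical pilot arms (n = 20 each), true shift of about 0.5 SD:
    pilot_a = rng.normal(0.0, 1.0, 20)
    pilot_b = rng.normal(0.5, 1.0, 20)
    for n in (40, 64, 100):
        print(n, bootstrap_power(pilot_a, pilot_b, n))
    ```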

  10. Evaluation of Respondent-Driven Sampling

    PubMed Central

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as the prevalence of HIV in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling, and caution is required when interpreting findings based on this sampling method. PMID:22157309

  11. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  12. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  13. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations of rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, where the relatively smaller samples showed a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, where the relatively smaller sample exhibits a lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  14. Risk Behaviors, Prevalence of HIV and Hepatitis C Virus Infection and Population Size of Current Injection Drug Users in a China-Myanmar Border City: Results from a Respondent-Driven Sampling Survey in 2012

    PubMed Central

    Li, Lei; Assanangkornchai, Sawitri; Duo, Lin; McNeil, Edward; Li, Jianhua

    2014-01-01

Background Injection drug use has been the major cause of HIV/AIDS in China in the past two decades. We measured the prevalence of HIV and hepatitis C virus (HCV) infection and the associated risk factors among current injection drug users (IDUs) in Ruili city, a border region connecting China with Myanmar that has been experiencing serious problems of drug use and HIV spread. An estimate of the number of current IDUs is also presented. Methods In 2012, Chinese IDUs who had injected within the past six months and were aged ≥18 years were recruited using a respondent-driven sampling (RDS) technique. Participants underwent interviews and serological testing for HIV, HBV, HCV and syphilis. Logistic regression identified factors associated with HIV and HCV infections. The multiplier method was used to estimate the size of the current IDU population by combining available service data with findings from our survey. Results Among 370 IDUs recruited, the prevalence of HIV and HCV was 18.3% and 41.5%, respectively. 27.1% of participants had shared a needle/syringe in their lifetime. Consistent condom use rates were low with both regular (6.8%) and non-regular (30.4%) partners. Factors independently associated with being HIV positive included HCV infection, a longer history of injection drug use and experience of needle/syringe sharing. Participants with HCV infection were more likely to be HIV positive, to have injected more types of drugs, to have shared other injection equipment and to have had unprotected sex with regular sex partners. The estimated number of current IDUs in Ruili city was 2,714 (95% CI: 1,617–5,846). Conclusions IDUs may continue to be a critical subpopulation for transmission of HIV and other infections in this region because of the increasing population and persistently high-risk injection and sexual behaviours. Developing innovative strategies that improve the accessibility of current harm reduction services and incorporate more comprehensive content is urgently needed. PMID:25203256
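
    The multiplier method used for the population size estimate reduces to a single division: the number of IDUs appearing in a service register divided by the surveyed proportion reporting use of that service. The sketch below uses invented register numbers, not the paper's data, and propagates only the binomial uncertainty of the surveyed proportion.

    ```python
    from math import sqrt

    def multiplier_estimate(service_count, p_hat, n_survey):
        """Multiplier-method population size with a rough 95% interval
        obtained by sliding the surveyed proportion by +/- 1.96 SE."""
        size = service_count / p_hat
        se = sqrt(p_hat * (1 - p_hat) / n_survey)
        return size, (service_count / (p_hat + 1.96 * se),
                      service_count / (p_hat - 1.96 * se))

    # E.g., 540 IDUs on a service register, 20% of 370 respondents report use:
    print(multiplier_estimate(540, 0.20, 370))  # about 2700 (roughly 2240-3390)
    ```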

  15. Investigation of half-quantized fluxoid states in strontium ruthenate mesoscopic superconducting rings

    NASA Astrophysics Data System (ADS)

    Jang, Joonho

Spin-triplet superconductors can support exotic objects, such as chiral edge currents and half-quantum vortices (HQVs) characterized by the nontrivial winding of the spin structure. In this dissertation, we present cantilever magnetometry measurements performed on mesoscopic samples of Sr2RuO4, a spin-triplet superconductor. Satisfying the totally antisymmetric property of the Cooper pair wave function, Sr2RuO4 is theoretically suggested to have angular momentum L = 1 and to form a domain structure with a px +/- ipy order parameter corresponding to Lz = +/-1. For micron-size samples, only a small number of domains would exist, and signatures of domain walls and edge currents are expected to be measurable with the current sensitivity. From measurements of fluctuations of the magnetic signal and of the signatures of vortex entries, we found no evidence to support broken time-reversal symmetry (TRS) in these crystals. We argue that various scenarios exist to explain this negative result while still assuming the TRS-breaking chiral order parameter. Also, micron-size annular Sr2RuO4 crystals were used to observe transitions between fluxoid states. Our observation of half-integer transitions is consistent with the existence of HQVs in a spin-triplet superconductor. The stability of the half states in an in-plane magnetic field is explained by spin polarization resulting from a differential phase winding of the up and down spin components. These spin and charge dynamics can also be revealed in the current response to phase winding across a weak-link junction. The junctions were fabricated within a ring geometry. The phase is varied by the external magnetic field and the current is obtained by measuring the magnetic moment of the ring. The current response shows second harmonics when the in-plane magnetic field is applied, and the data are successfully fitted when the Gibbs free energy is expressed with an additional spin degree of freedom. Our observations are consistent with spin-triplet pairing in Sr2RuO4, though more investigation is required to confirm the px +/- ipy order parameter in the crystal.

  16. Lab-scale ash production by abrasion and collision experiments of porous volcanic samples

    NASA Astrophysics Data System (ADS)

    Mueller, S. B.; Lane, S. J.; Kueppers, U.

    2015-09-01

In the course of explosive eruptions, magma is fragmented into smaller pieces by a plethora of processes before and during deposition. Volcanic ash, fragments smaller than 2 mm, has near-volcano effects (e.g. increasing the mobility of pyroclastic density currents (PDCs), threat to human infrastructure) but may also cause various problems over long durations and/or far away from the source (human health and aviation matters). We quantify the efficiency of ash generation during experimental fracturing of pumiceous and scoriaceous samples subjected to shear and normal stress fields. Experiments were designed to produce ash by overcoming the yield strength of samples from Tenerife (Canary Islands, Spain), Sicily and the Lipari Islands (Italy), with this study having particular interest in the < 355 μm fraction. Fracturing within volcanic conduits, plumes and PDCs was simulated through a series of abrasion (shear) and collision (normal) experiments. An understanding of these processes is crucial as they are capable of producing very fine ash (< 10 μm). These particles can remain in the atmosphere for several days and may travel large distances (~1000s of km). This poses a threat to the aviation industry and human health. From the experiments we establish that abrasion produced the finest-grained material, and up to 50% of the generated ash was smaller than 10 μm. In comparison, the collision experiments that applied mainly normal stress fields produced coarser grain sizes. Results were compared to established grain size distributions for natural fall and PDC deposits and good correlation was found. Energies involved in collision and abrasion experiments were calculated and showed an exponential correlation with ash production rate. Projecting these experimental results into the volcanic environment, the greatest amounts of ash are produced in the most energetic and turbulent regions of volcanic flows, which are proximal to the vent. The finest grain sizes are produced in PDCs and can be observed as co-ignimbrite clouds above density currents. Finally, a significant dependency was found between material density and the mass of fines produced, also observable in the total particle size distribution: higher values of open porosity promote the generation of finer-grained particles and overall greater ratios of ash. While this paper draws on numerous previous studies of particle comminution processes, it is the first to analyze and compare the results of several comminution experiments with each other in order to characterize these mechanisms.

  17. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  18. Characterization of quantum interference control of injected currents in LT-GaAs for carrier-envelope phase measurements.

    PubMed

    Roos, Peter; Quraishi, Qudsia; Cundiff, Steven; Bhat, Ravi; Sipe, J

    2003-08-25

    We use two mutually coherent, harmonically related pulse trains to experimentally characterize quantum interference control (QIC) of injected currents in low-temperature-grown gallium arsenide. We observe real-time QIC interference fringes, optimize the QIC signal fidelity, uncover critical signal dependences regarding beam spatial position on the sample, measure signal dependences on the fundamental and second harmonic average optical powers, and demonstrate signal characteristics that depend on the focused beam spot sizes. Following directly from our motivation for this study, we propose an initial experiment to measure and ultimately control the carrier-envelope phase evolution of a single octave-spanning pulse train using the QIC phenomenon.

  19. A community trial of the impact of improved sexually transmitted disease treatment on the HIV epidemic in rural Tanzania: 2. Baseline survey results.

    PubMed

    Grosskurth, H; Mosha, F; Todd, J; Senkoro, K; Newell, J; Klokke, A; Changalucha, J; West, B; Mayaud, P; Gavyole, A

    1995-08-01

    To determine baseline HIV prevalence in a trial of improved sexually transmitted disease (STD) treatment, and to investigate risk factors for HIV. To assess comparability of intervention and comparison communities with respect to HIV/STD prevalence and risk factors. To assess adequacy of sample size. Twelve communities in Mwanza Region, Tanzania: one matched pair of roadside communities, four pairs of rural communities, and one pair of island communities. One community from each pair was randomly allocated to receive the STD intervention following the baseline survey. Approximately 1000 adults aged 15-54 years were randomly sampled from each community. Subjects were interviewed, and HIV and syphilis serology performed. Men with a positive leucocyte esterase dipstick test on urine, or reporting a current STD, were tested for urethral infections. A total of 12,534 adults were enrolled. Baseline HIV prevalences were 7.7% (roadside), 3.8% (rural) and 1.8% (islands). Associations were observed with marital status, injections, education, travel, history of STD and syphilis serology. Prevalence was higher in circumcised men, but not significantly after adjusting for confounders. Intervention and comparison communities were similar in the prevalence of HIV (3.8 versus 4.4%), active syphilis (8.7 versus 8.2%), and most recorded risk factors. Within-pair variability in HIV prevalence was close to the value assumed for sample size calculations. The trial cohort was successfully established. Comparability of intervention and comparison communities at baseline was confirmed for most factors. Matching appears to have achieved a trial of adequate sample size. The apparent lack of a protective effect of male circumcision contrasts with other studies in Africa.

  20. STAR FORMATION LAWS: THE EFFECTS OF GAS CLOUD SAMPLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calzetti, D.; Liu, G.; Koda, J., E-mail: calzetti@astro.umass.edu

Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas density relation depends on the spatial scale considered. These results may indicate a fundamental role of sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, due to stochastic sampling of the molecular cloud mass function, and the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to tend to approximately unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation. The region size of 1-2 kpc corresponds to the area where the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds smaller than a given mass value do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted toward the SFR). Apparently discordant observational results are brought into agreement via this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.
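
    The stochastic-sampling argument lends itself to a compact simulation. The sketch below uses illustrative parameters, not the paper's: cloud masses are drawn from a truncated power-law mass function, each cloud forms stars as SFR proportional to M^p, and the log-log slope of the region-level SFR-gas relation is fitted; sparsely populated regions typically yield a steeper, noisier relation than fully sampled ones.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def sample_masses(k, alpha=1.8, m_min=1e4, m_max=1e6):
        """Inverse-CDF draws from a truncated power-law cloud mass
        function dN/dM ~ M^-alpha (valid for alpha != 1)."""
        a = 1.0 - alpha
        u = rng.random(k)
        return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

    def fitted_slope(lam_lo, lam_hi, n_regions=600, p=1.5):
        """Regions contain Poisson numbers of clouds, with means spanning
        lam_lo..lam_hi as a proxy for region size; returns the fitted
        log-log slope of the region-level SFR-gas relation."""
        gas, sfr = [], []
        for lam in np.exp(rng.uniform(np.log(lam_lo), np.log(lam_hi), n_regions)):
            k = rng.poisson(lam)
            if k == 0:
                continue
            m = sample_masses(k)
            gas.append(m.sum())
            sfr.append((m**p).sum())
        slope, _ = np.polyfit(np.log10(gas), np.log10(sfr), 1)
        return slope

    # Sparsely sampled (small apertures) vs fully sampled (kpc-scale) regions:
    print(round(fitted_slope(1, 10), 2), round(fitted_slope(100, 1000), 2))
    ```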
