Sample records for reduce sample sizes

  1. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging

    PubMed Central

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing sampling size is an important goal for high resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. Moreover, ionization efficiency increased with decreasing solvent flow rate. Our results indicate that a nanopipette makes a reduced sampling area compatible with efficient ionization. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated. PMID:28101441

  2. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...
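    A minimal sketch of the correction this notice refers to may help: the standard finite population correction deflates a sample size n0, computed for an effectively infinite population, when the population N is finite. The function below uses the common survey-sampling form; the example numbers are illustrative, not taken from the notice.

      import math

      def fpc_adjusted_sample_size(n0: float, N: int) -> int:
          """Reduce an infinite-population sample size n0 for a finite population N."""
          return math.ceil(n0 / (1 + (n0 - 1) / N))

      # Illustrative only: a review needing n0 = 504 draws from a universe of
      # N = 1,000 claims can be cut to roughly two-thirds of that size.
      print(fpc_adjusted_sample_size(504, 1000))  # -> 336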

  3. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between SWC and soil/terrain properties were calculated to quantify accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure estimated correlation coefficients with REs and CVs ≤10%. Compared with the other sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
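    A compact sketch of the resampling scheme described above, with synthetic site values standing in for the measured SWC (the hillslope data, strategies, and thresholds in the paper are richer than this):

      import numpy as np

      rng = np.random.default_rng(0)
      swc = rng.normal(30.0, 5.0, size=100)         # synthetic SWC (%) at 100 sites
      true_mean = swc.mean()

      for n in (12, 24, 48, 72):                    # candidate sample sizes
          means = np.array([rng.choice(swc, n, replace=False).mean()
                            for _ in range(3000)])  # 3000 replicates per size
          re = np.abs(means - true_mean).mean() / true_mean * 100   # relative error
          cv = means.std() / means.mean() * 100                     # coefficient of variation
          print(f"n={n:3d}  RE={re:4.1f}%  CV={cv:4.1f}%")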

  4. Cryogenic homogenization and sampling of heterogeneous multi-phase feedstock

    DOEpatents

    Doyle, Glenn Michael; Ideker, Virgene Linda; Siegwarth, James David

    2002-01-01

    An apparatus and process for producing a homogeneous analytical sample from a heterogeneous feedstock by: providing the mixed feedstock, reducing the temperature of the feedstock to a temperature below a critical temperature, reducing the size of the feedstock components, blending the reduced size feedstock to form a homogeneous mixture; and obtaining a representative sample of the homogeneous mixture. The size reduction and blending steps are performed at temperatures below the critical temperature in order to retain organic compounds in the form of solvents, oils, or liquids that may be adsorbed onto or absorbed into the solid components of the mixture, while also improving the efficiency of the size reduction. Preferably, the critical temperature is less than 77 K (-196 °C). Further, with the process of this invention the representative sample may be maintained below the critical temperature until being analyzed.

  5. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  6. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.

  7. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ÊS, ES = ÊS_L, and ES = ÊS_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study's ÊS values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
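    A rough sketch of the post hoc sample size logic described above, using the normal approximation to the one-sample t-test rather than the paper's exact t-based calculation; the effect sizes below are hypothetical placeholders, not the study's estimates.

      from scipy.stats import norm

      def n_for_effect_size(es: float, alpha: float = 0.05, power: float = 0.80) -> int:
          """Patients needed to reject H0: ES = 0 with the given power (normal approx.)."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return int((z / es) ** 2) + 1

      # A point estimate and CI bounds on ES map to a point estimate and CI on n.
      for es in (0.60, 0.28, 1.30):   # hypothetical ÊS, ÊS_L, ÊS_U
          print(es, n_for_effect_size(es))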

  8. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  9. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Treesearch

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  10. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  11. Rock sampling. [apparatus for controlling particle size

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    An apparatus for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The device includes grinding means for cutting grooves in the rock surface and to provide a grouping of thin, shallow, parallel ridges and cutter means to reduce these ridges to a powder specimen. Collection means is provided for the powder. The invention relates to rock grinding and particularly to the sampling of rock specimens with good size control.

  12. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
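    A simulation sketch of the bias discussed above, assuming one plausible reading of a semicurtailed rule (stop as soon as the classification is determined); the paper's corrected estimators are not reproduced here, and the design constants follow its 60/33 example.

      import numpy as np

      rng = np.random.default_rng(3)
      n_max, d = 60, 33            # sample size and decision rule
      p = 0.50                     # true coverage

      naive = []
      for _ in range(10000):
          cum = 0
          for i in range(1, n_max + 1):
              cum += rng.random() < p
              # stop once the lot must pass (cum >= d) or can no longer reach d
              if cum >= d or cum + (n_max - i) < d:
                  break
          naive.append(cum / i)
      print(np.mean(naive) - p)    # nonzero: bias of the naive estimate under curtailment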

  13. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  14. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  15. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
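    A static sketch of the variance-minimizing idea behind the randomization rate: for a difference in means, Var = s1^2/n1 + s2^2/n2 is minimized at a fixed total by allocating in proportion to the arm standard deviations (Neyman allocation). The paper updates its Bayesian rate sequentially; this only illustrates the criterion.

      def optimal_randomization_rate(s1: float, s2: float) -> float:
          """Fraction of newly recruited patients assigned to arm 1."""
          return s1 / (s1 + s2)

      print(optimal_randomization_rate(2.0, 1.0))  # -> 0.667: the noisier arm gets more patients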

  16. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  17. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  18. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  19. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  20. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  1. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that the enhanced ductility at small sample sizes is primarily due to diffuse damage accumulation, which in larger samples leads to catastrophic brittle failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  2. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.

  3. Impact of Different Visual Field Testing Paradigms on Sample Size Requirements for Glaucoma Clinical Trials.

    PubMed

    Wu, Zhichao; Medeiros, Felipe A

    2018-03-20

    Visual field testing is an important endpoint in glaucoma clinical trials, and the testing paradigm used can have a significant impact on sample size requirements. To investigate this, the study included 353 eyes of 247 glaucoma patients seen over a 3-year period to extract real-world visual field rates of change and variability estimates, which informed sample size estimates from computer simulations. The clinical trial scenario assumed that a new treatment was added to one of two groups that were both under routine clinical care, with various treatment effects examined. Three different visual field testing paradigms were evaluated: a) evenly spaced testing, b) the United Kingdom Glaucoma Treatment Study (UKGTS) follow-up scheme, which adds clustered tests at the beginning and end of follow-up in addition to evenly spaced testing, and c) a clustered testing paradigm, with clusters of tests at the beginning and end of the trial period and two intermediary visits. The sample size requirements were reduced by 17-19% and 39-40% using the UKGTS and clustered testing paradigms, respectively, when compared to the evenly spaced approach. These findings highlight how the clustered testing paradigm can substantially reduce sample size requirements and improve the feasibility of future glaucoma clinical trials.

  4. Sulfuric acid intercalated-mechanical exfoliation of reduced graphene oxide from old coconut shell

    NASA Astrophysics Data System (ADS)

    Islamiyah, Wildatun; Nashirudin, Luthfi; Baqiya, Malik A.; Cahyono, Yoyok; Darminto

    2018-04-01

    We report a facile preparation of reduced graphene oxide (rGO) from an old coconut shell by rapid reduction via heating at 400°C, chemical exfoliation using H2SO4 and HCl intercalation, and mechanical exfoliation using ultrasonication. The produced samples consist of random stacks of nanometer-sized sheets. The dispersions prepared from H2SO4 had broader size distributions and larger particle sizes than those from HCl. The average size of rGO in H2SO4 and HCl is 23.62 nm and 570.4 nm, respectively. Furthermore, the sample prepared in H2SO4 exhibited a high electrical conductivity of 1.1 × 10-3 S/m with a low energy gap of 0.11 eV.

  5. Fabrication and Characterization of Surrogate Glasses Aimed to Validate Nuclear Forensic Techniques

    DTIC Science & Technology

    2017-12-01

    sample is processed while submerged and produces fine sized particles the exposure levels and risk of contamination from the samples is also greatly...induced the partial collapses of the xerogel network strengthened the network while the sample sizes were reduced [22], [26]. As a result the wt...inhomogeneous, making it difficult to clearly determine which features were present in the sample before LDHP and which were caused by it. In this study

  6. A comparative study of the physical properties of Cu-Zn ferrites annealed under different atmospheres and temperatures: Magnetic enhancement of Cu0.5Zn0.5Fe2O4 nanoparticles by a reducing atmosphere

    NASA Astrophysics Data System (ADS)

    Gholizadeh, Ahmad

    2018-04-01

    In the present work, the influence of different sintering atmospheres and temperatures on the physical properties of Cu0.5Zn0.5Fe2O4 nanoparticles, including the redistribution of Zn2+ and Fe3+ ions, the oxidation of Fe atoms in the lattice, crystallite sizes, IR bands, saturation magnetization and magnetic core sizes, has been investigated. The fitting of XRD patterns using the FullProf program and FT-IR measurements show the formation of a cubic structure with no impurity phase present in any of the samples. The unit cell parameter of the samples sintered in air and inert atmospheres tends to decrease with sintering temperature, but increases for the samples sintered under a carbon monoxide atmosphere. The magnetization curves versus applied magnetic field indicate different behaviour for the samples sintered at 700 °C with respect to the samples sintered at 300 °C. Also, the saturation magnetization increases with sintering temperature and reaches a maximum of 61.68 emu/g in the sample sintered under a reducing atmosphere at 600 °C. The magnetic particle size distributions of the samples were calculated by fitting the M-H curves with the size-distributed Langevin function. The results obtained from the XRD and FTIR measurements suggest that the magnetic core size has the dominant effect on the variation of the saturation magnetization of the samples.

  7. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
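    A sketch of the ratio estimator named above, with unit area as the auxiliary variable (Y_hat = (sum y / sum x) * X_total); the counts are synthetic, and clumped populations like these pronghorn can defeat the proportionality the estimator assumes, as the study found.

      import numpy as np

      rng = np.random.default_rng(2)
      areas = rng.uniform(5, 15, size=60)            # km^2 per sampling unit
      counts = rng.poisson(0.5 * areas)              # synthetic counts, roughly proportional to area
      idx = rng.choice(60, size=20, replace=False)   # 33% simple random sample of units

      simple = counts[idx].mean() * 60                               # expansion estimator
      ratio = counts[idx].sum() / areas[idx].sum() * areas.sum()     # ratio estimator
      print(f"truth={counts.sum()}  simple={simple:.0f}  ratio={ratio:.0f}")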

  8. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  9. Estimation of the bottleneck size in Florida panthers

    USGS Publications Warehouse

    Culver, M.; Hedrick, P.W.; Murphy, K.; O'Brien, S.; Hornocker, M.G.

    2008-01-01

    We have estimated the extent of genetic variation in museum (1890s) and contemporary (1980s) samples of Florida panthers Puma concolor coryi for both nuclear loci and mtDNA. The microsatellite heterozygosity in the contemporary sample was only 0.325 that in the museum samples although our sample size and number of loci are limited. Support for this estimate is provided by a sample of 84 microsatellite loci in contemporary Florida panthers and Idaho pumas Puma concolor hippolestes in which the contemporary Florida panther sample had only 0.442 the heterozygosity of Idaho pumas. The estimated diversities in mtDNA in the museum and contemporary samples were 0.600 and 0.000, respectively. Using a population genetics approach, we have estimated that to reduce either the microsatellite heterozygosity or the mtDNA diversity this much (in a period of c. 80years during the 20th century when the numbers were thought to be low) that a very small bottleneck size of c. 2 for several generations and a small effective population size in other generations is necessary. Using demographic data from Yellowstone pumas, we estimated the ratio of effective to census population size to be 0.315. Using this ratio, the census population size in the Florida panthers necessary to explain the loss of microsatellite variation was c .41 for the non-bottleneck generations and 6.2 for the two bottleneck generations. These low bottleneck population sizes and the concomitant reduced effectiveness of selection are probably responsible for the high frequency of several detrimental traits in Florida panthers, namely undescended testicles and poor sperm quality. The recent intensive monitoring both before and after the introduction of Texas pumas in 1995 will make the recovery and genetic restoration of Florida panthers a classic study of an endangered species. Our estimates of the bottleneck size responsible for the loss of genetic variation in the Florida panther completes an unknown aspect of this account. ?? 2008 The Authors. Journal compilation ?? 2008 The Zoological Society of London.

  10. Synthesis And Characterization Of Reduced Size Ferrite Reinforced Polymer Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borah, Subasit; Bhattacharyya, Nidhi S.

    2008-04-24

    Small-sized Co1-xNixFe2O4 ferrite particles are synthesized by a chemical route. The precursor materials are annealed at 400, 600, and 800 °C. The crystallographic structure and phases of the samples are characterized by X-ray diffraction (XRD). The annealed ferrite samples crystallized into a cubic spinel structure. Transmission Electron Microscopy (TEM) micrographs show that the average particle size of the samples is <20 nm. Particulate magneto-polymer composite materials are fabricated by reinforcing a low density polyethylene (LDPE) matrix with the ferrite samples. The B-H loop study conducted at 10 kHz on the toroid-shaped composite samples shows a reduction in magnetic losses with decreasing size of the filler sample. Magnetic losses are detrimental for applications of ferrite at high powers. The reduction in magnetic loss shows a possible application of Co-Ni ferrites at high microwave power levels.

  11. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  12. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989
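    Minimal sketches of the two sampling schemes compared above (illustrative parameters; production k-mer indexes add canonicalization, hashing, and far larger k and windows):

      def fixed_sampling(seq: str, k: int, step: int):
          """Keep every step-th k-mer start position."""
          return [(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, step)]

      def minimizer_sampling(seq: str, k: int, w: int):
          """Keep the lexicographically smallest k-mer in each window of w consecutive k-mers."""
          picked = set()
          for start in range(len(seq) - k - w + 2):
              window = [(seq[i:i + k], i) for i in range(start, start + w)]
              picked.add(min(window))
          return sorted((i, km) for km, i in picked)

      seq = "ACGTTGCATGTCGCATGATGCATGAGAGCT"
      print(len(fixed_sampling(seq, k=5, step=3)))     # index size under fixed sampling
      print(len(minimizer_sampling(seq, k=5, w=3)))    # index size under minimizer sampling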

  13. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
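    A sketch of the multiplier calculation described above: the size estimate is N = M / P, and a delta-method interval for P (inflated by an assumed design effect for the respondent-driven sample) maps to an interval for N. All numbers are illustrative, not the Harare estimates.

      import math

      M = 5000                          # unique objects distributed (known count)
      p_hat, n, deff = 0.40, 800, 2.0   # surveyed proportion, sample size, assumed design effect
      se_p = math.sqrt(deff * p_hat * (1 - p_hat) / n)
      lo, hi = p_hat - 1.96 * se_p, p_hat + 1.96 * se_p
      print(M / p_hat, (M / hi, M / lo))   # point estimate and approximate 95% CI for N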

  14. Beamline 10.3.2 at ALS: a hard X-ray microprobe for environmental and materials sciences.

    PubMed

    Marcus, Matthew A; MacDowell, Alastair A; Celestre, Richard; Manceau, Alain; Miller, Tom; Padmore, Howard A; Sublett, Robert E

    2004-05-01

    Beamline 10.3.2 at the ALS is a bend-magnet line designed mostly for work on environmental problems involving heavy-metal speciation and location. It offers a unique combination of X-ray fluorescence mapping, X-ray microspectroscopy and micro-X-ray diffraction. The optics allow the user to trade spot size for flux in a size range of 5-17 µm in an energy range of 3-17 keV. The focusing uses a Kirkpatrick-Baez mirror pair to image a variable-size virtual source onto the sample. Thus, the user can reduce the effective size of the source, thereby reducing the spot size on the sample, at the cost of flux. This decoupling from the actual source also allows for some independence from source motion. The X-ray fluorescence mapping is performed with a continuously scanning stage which avoids the time overhead incurred by step-and-repeat mapping schemes. The special features of this beamline are described, and some scientific results shown.

  15. Predictor sort sampling and one-sided confidence bounds on quantiles

    Treesearch

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  16. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence to detect out-of-specification units would decrease, which must be compensated by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
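    The success run theorem mentioned above is commonly written n = ln(1 - C) / ln(R): the number of consecutive conforming units needed to claim reliability R with confidence C. With C = 0.95 this reproduces the paper's risk-tiered sizes.

      import math

      def success_run_n(reliability: float, confidence: float = 0.95) -> int:
          return math.ceil(math.log(1 - confidence) / math.log(reliability))

      for r in (0.99, 0.95, 0.90):     # high-, medium-, low-risk reliability levels
          print(r, success_run_n(r))   # -> 299, 59, 29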

  17. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  18. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.

  19. Sample size for post-marketing safety studies based on historical controls.

    PubMed

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. Copyright © 2010 John Wiley & Sons, Ltd.
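    A simplified sketch of an exact Poisson sample size search in the spirit of this design (the published hybrid formula also borrows historical controls; here the reference rate is treated as fixed, and the event rates are illustrative):

      from scipy.stats import poisson

      def min_n(rate0: float, rate1: float, alpha: float = 0.05, power: float = 0.80) -> int:
          """Smallest n whose exact one-sided test detects an increase from rate0 to rate1."""
          n = 1
          while True:
              c = poisson.ppf(1 - alpha, n * rate0) + 1   # reject H0 if X >= c
              if poisson.sf(c - 1, n * rate1) >= power:
                  return n
              n += 1

      print(min_n(rate0=0.001, rate1=0.003))   # illustrative rare-event rates per subject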

  20. 40 CFR 796.2750 - Sediment and soil adsorption isotherm.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... size analysis” is the determination of the various amounts of the different particle sizes in a sample... °C. (iii) Replications. Three replications of the experimental treatments shall be used. (iv) Soil...) Decrease the water content, air or oven-dry soils at or below 50 °C. (B) Reduce aggregate size before and...

  1. 40 CFR 796.2750 - Sediment and soil adsorption isotherm.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... size analysis” is the determination of the various amounts of the different particle sizes in a sample... °C. (iii) Replications. Three replications of the experimental treatments shall be used. (iv) Soil...) Decrease the water content, air or oven-dry soils at or below 50 °C. (B) Reduce aggregate size before and...

  2. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    PubMed

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2θ)^4.
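    A quick numerical view of the attenuation quoted above: information for both linkage and exclusion falls off roughly as (1-2θ)^4, so required sample sizes scale up by about the reciprocal (recombination values are illustrative).

      for theta in (0.0, 0.05, 0.10, 0.20):
          atten = (1 - 2 * theta) ** 4
          print(f"theta={theta:.2f}  relative information={atten:.2f}  "
                f"sample-size inflation={1 / atten:.1f}x")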

  3. Integrated approaches for reducing sample size for measurements of trace elemental impurities in plutonium by ICP-OES and ICP-MS

    DOE PAGES

    Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam; ...

    2017-10-07

    This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (<50 mg of material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by 10×, and in the third case study, by as much as 5000×, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.

  4. Integrated approaches for reducing sample size for measurements of trace elemental impurities in plutonium by ICP-OES and ICP-MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam

    This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (<50 mg of material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by 10×, and in the third case study, by as much as 5000×, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.

  5. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre- and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced by at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
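    A worked version of the savings quoted above, using a variance factor consistent with those figures (a reconstruction, not the paper's notation): under compound symmetry with correlation rho, repeated-measures ANCOVA on k follow-up measures multiplies the two-sample-t variance by f = (1 + (k-1)rho)/k - rho^2, and the conservative rho is the one that maximizes f.

      def conservative_factor(k: int) -> float:
          rho = (k - 1) / (2 * k)                  # the rho that maximizes f
          return (1 + (k - 1) * rho) / k - rho ** 2

      for k in (2, 3, 4):                          # number of follow-up measures
          print(k, f"{1 - conservative_factor(k):.0%} sample size reduction")
      # -> 44%, 56%, 61%, matching the figures in the abstract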

  6. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    DTIC Science & Technology

    2016-03-01

    NNLS problems that arise when the Energy Conserving Sampling and Weighting (ECSW) hyper-reduction procedure is used when constructing a reduced-order model...ScaLAPACK and performance results are presented. Keywords: nonnegative least squares, model order reduction, hyper-reduction, Energy Conserving Sampling and...optimal solution. [Table 6 of the report: reduced mesh sizes produced for each solver in the ECSW hyper-reduction step.]

  7. 40 CFR 761.353 - Second level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Second level of sample selection. 761...-Site Disposal, in Accordance With § 761.61 § 761.353 Second level of sample selection. The second level of sample selection reduces the size of the 19-liter subsample that was collected according to...

  8. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  9. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
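
    A minimal re-creation of the resampling experiment on synthetic data (the prevalence, sensitivity, and specificity below are invented, not the study's cohorts): draw repeated subsamples of each size and watch the spread of the derived statistics shrink as n grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical cohort: rule prediction vs. outcome for 8,000 people.
    N = 8000
    outcome = rng.random(N) < 0.30                   # 30% prevalence
    pred = np.where(outcome, rng.random(N) < 0.75,   # sensitivity 0.75
                             rng.random(N) < 0.30)   # 1 - specificity 0.30

    def sens_spec(idx):
        tp = np.sum(pred[idx] & outcome[idx])
        tn = np.sum(~pred[idx] & ~outcome[idx])
        return tp / outcome[idx].sum(), tn / (~outcome[idx]).sum()

    for n in (100, 200, 400, 800):
        draws = [sens_spec(rng.choice(N, n, replace=False)) for _ in range(100)]
        sens = np.array([d[0] for d in draws])
        print(f"n={n:4d}: sensitivity range {sens.min():.2f}-{sens.max():.2f}")
    # The spread of estimates narrows markedly once n reaches ~400-600.
    ```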

  10. Size measuring techniques as tool to monitor pea proteins intramolecular crosslinking by transglutaminase treatment.

    PubMed

    Djoullah, Attaf; Krechiche, Ghali; Husson, Florence; Saurel, Rémi

    2016-01-01

    In this work, techniques for monitoring intramolecular transglutaminase cross-links in pea proteins, based on protein size determination, were developed. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis profiles of transglutaminase-treated, low-concentration (0.01% w/w) pea albumin samples showed, compared to the untreated control, a higher electrophoretic migration of the major albumin fraction band (26 kDa), reflecting a decrease in protein size. This decrease in protein size was confirmed, after DEAE column purification, by dynamic light scattering (DLS), where the hydrodynamic radius of the treated samples appeared to be reduced compared to that of the control. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  12. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
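
    As a rough illustration of the role of the beta-binomial, the sketch below compares the risk of accepting a lot under simple random sampling with the same rule evaluated under a beta-binomial model whose intra-cluster correlation inflates the error. The sample size, decision rule, and correlation are hypothetical, and the paper's actual designs sum over two sampling stages.

    ```python
    from scipy.stats import betabinom, binom

    def acceptance_risk(n, d, p, icc):
        """P(observe >= d 'correct' records | true accuracy p), allowing
        within-cluster correlation icc via a beta-binomial model. Here the
        n observations are treated as one correlated batch for simplicity."""
        a = p * (1 - icc) / icc          # beta parameters chosen so the
        b = (1 - p) * (1 - icc) / icc    # mean is p and the ICC equals icc
        return 1 - betabinom.cdf(d - 1, n, a, b)

    # With clustering, passing a lot at threshold d is riskier than SRS suggests:
    print(binom.sf(14, 19, 0.7))            # simple random sampling, P(X >= 15)
    print(acceptance_risk(19, 15, 0.7, 0.2))  # beta-binomial, icc = 0.2
    ```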

  13. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  14. Motion mitigation for lung cancer patients treated with active scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg

    2015-05-15

    Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which were observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor ∼2.6 faster than gating for this scenario. For the small spot size, however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) and gating were able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.

  15. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
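
    A minimal sketch of the preferential-sampling mechanism described above: sampling times drawn from an inhomogeneous Poisson process whose intensity is proportional to effective population size, simulated by Lewis-Shedler thinning. The trajectory Ne(t) and the effort parameter beta are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def Ne(t):
        """Hypothetical effective population size trajectory (seasonal)."""
        return 50 + 40 * np.sin(2 * np.pi * t)

    beta = 2.0            # sampling effort: lambda(t) = beta * Ne(t)
    T = 5.0               # observation window in years
    lam_max = beta * 90   # upper bound on the intensity (max of Ne is 90)

    # Thinning: propose homogeneous events, keep each with probability
    # lambda(t)/lam_max, so sampling times concentrate where Ne(t) is large.
    n_prop = rng.poisson(lam_max * T)
    t_prop = rng.uniform(0, T, n_prop)
    keep = rng.uniform(0, 1, n_prop) < beta * Ne(t_prop) / lam_max
    samples = np.sort(t_prop[keep])
    print(f"{samples.size} preferentially sampled sequence times")
    ```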

  16. Ratios of total suspended solids to suspended sediment concentrations by particle size

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 μm to 0.85 for particles measuring from 32 to 63 μm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.

  17. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.

  18. Sample Size Estimation for Alzheimer's Disease Trials from Japanese ADNI Serial Magnetic Resonance Imaging.

    PubMed

    Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi

    2017-01-01

    Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
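
    The final sample sizes depend on the estimated rates of change and their variability. As a hedged illustration of the underlying arithmetic, the sketch below uses a standard two-sample normal approximation rather than the paper's linear mixed-effects calculation, and all numeric inputs are invented, not the J-ADNI estimates.

    ```python
    from scipy.stats import norm

    def n_per_arm(rate, sd, slowing=0.25, alpha=0.05, power=0.80):
        """Patients per arm to detect a given fractional slowing of the mean
        annual atrophy (or decline) rate with a two-sample comparison."""
        delta = slowing * abs(rate)
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    # Illustrative values only: a 4.0%/year hippocampal atrophy rate with
    # SD 2.5%, versus a noisier cognitive endpoint.
    print(round(n_per_arm(rate=4.0, sd=2.5)))   # atrophy endpoint
    print(round(n_per_arm(rate=3.0, sd=6.0)))   # cognitive endpoint, larger n
    ```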

  19. Cognitive Behavioral Therapy: A Meta-Analysis of Race and Substance Use Outcomes

    PubMed Central

    Windsor, Liliane Cambraia; Jemal, Alexis; Alessi, Edward

    2015-01-01

    Cognitive behavioral therapy (CBT) is an effective intervention for reducing substance use. However, because CBT trials have included predominantly White samples, caution must be used when generalizing these effects to Blacks and Hispanics. This meta-analysis compared the impact of CBT in reducing substance use between studies with a predominantly non-Hispanic White sample (hereafter NHW studies) and studies with a predominantly Black and/or Hispanic sample (hereafter BH studies). From 322 manuscripts identified in the literature, 17 met criteria for inclusion. Effect sizes comparing CBT with comparison groups at posttest were similar for substance abuse across NHW and BH studies. However, when comparing pre-posttest effect sizes from groups receiving CBT between NHW and BH studies, CBT's impact was significantly stronger in NHW studies. T-test comparisons indicated reduced retention/engagement in BH studies, albeit failing to reach statistical significance. Results highlight the need for further research testing CBT's impact on substance use among Blacks and Hispanics. PMID:25285527

  20. Effect of bait and gear type on channel catfish catch and turtle bycatch in a reservoir

    USGS Publications Warehouse

    Cartabiano, Evan C.; Stewart, David R.; Long, James M.

    2014-01-01

    Hoop nets have become the preferred gear choice to sample channel catfish Ictalurus punctatus, but the degree of bycatch can be high, especially due to the incidental capture of aquatic turtles. While exclusion and escapement devices have been developed and evaluated, few studies have examined bait choice as a method to reduce turtle bycatch. The use of Zote™ soap has shown considerable promise for reducing bycatch of aquatic turtles when used with trotlines, but its effectiveness in hoop nets has not been evaluated. We sought to determine the effectiveness of hoop nets baited with cheese bait or Zote™ soap and trotlines baited with shad or Zote™ soap as a way to sample channel catfish and prevent capture of aquatic turtles. We used a repeated-measures experimental design, and treatment combinations were randomly assigned using a Latin-square arrangement. Eight sampling locations were systematically selected and then sampled with either hoop nets or trotlines using Zote™ soap (both gears), waste cheese (hoop nets), or cut shad (trotlines). Catch rates did not statistically differ among the gear–bait combinations. Size bias was evident, with trotlines consistently capturing larger channel catfish than hoop nets. Results from a Monte Carlo bootstrapping procedure estimated the number of samples needed to reach predetermined levels of sampling precision to be lowest for trotlines baited with soap. Moreover, trotlines baited with soap caught no aquatic turtles, while hoop nets captured many turtles and had high mortality rates. We suggest that Zote™ soap used in combination with multiple hook sizes on trotlines may be a viable alternative for sampling channel catfish while reducing bycatch of aquatic turtles.

  1. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
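
    A sketch of the flavor of these simulations, using a two-stage version of the 'basic' scheme: classify on half the sample when the first-stage estimate is decisive, otherwise complete the full sample. The threshold, decision margin, and sample size are assumptions for illustration; the Welfare Quality sample sizes and the paper's exact stopping rules differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def basic_scheme(true_prev, n_full, threshold=0.20, margin=0.08):
        """Two-stage sketch: stop after half the sample if the stage-1
        lameness estimate is at least `margin` away from the threshold,
        otherwise score the second half and classify on the pooled estimate."""
        n1 = n_full // 2
        lame1 = rng.binomial(n1, true_prev)
        p1 = lame1 / n1
        if abs(p1 - threshold) >= margin:
            return p1 > threshold, n1
        lame2 = rng.binomial(n_full - n1, true_prev)
        return (lame1 + lame2) / n_full > threshold, n_full

    runs = [basic_scheme(0.25, n_full=60) for _ in range(100_000)]
    bad = np.mean([c for c, _ in runs])
    avg_n = np.mean([n for _, n in runs])
    print(f"classified 'bad' in {bad:.1%} of runs, average sample size {avg_n:.1f}")
    ```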

  2. The size-reduced Eudragit® RS microparticles prepared by solvent evaporation method - monitoring the effect of selected variables on tested parameters.

    PubMed

    Vasileiou, Kalliopi; Vysloužil, Jakub; Pavelková, Miroslava; Vysloužil, Jan; Kubová, Kateřina

    2018-01-01

    Size-reduced microparticles were successfully obtained by the solvent evaporation method. Different parameters were applied to each sample and their influence on the microparticles was evaluated. Water-insoluble ibuprofen was selected as the model drug for encapsulation with Eudragit® RS. The obtained microparticles were inspected by optical microscopy and scanning electron microscopy. The effects of aqueous phase volume (600, 400, 200 ml) and of polyvinyl alcohol (PVA) concentration (1.0% and 0.1%) were studied. The study evaluated how these variables, together with particle size, affect microparticle characteristics such as encapsulation efficiency, drug loading, burst effect, and morphology. The sample prepared with 600 ml aqueous phase and 1.0% polyvinyl alcohol gave the most favorable results. Key words: microparticles, solvent evaporation, sustained drug release, Eudragit® RS.

  3. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.

  4. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short‐Acting β‐Agonist Formulations

    PubMed Central

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai

    2017-01-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. PMID:29281130
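
    The simulation logic generalizes beyond this particular trial: simulate the study at a candidate sample size, estimate power as the fraction of simulated studies that detect the assumed effect, and search for the smallest adequate n. The sketch below uses a generic two-arm t-test as a stand-in for the methacholine bioequivalence analysis; every input is an assumption.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(4)

    def simulated_power(n, effect, sd, n_sim=2000, alpha=0.05):
        """Fraction of simulated studies in which the design detects the
        assumed difference (stand-in for the bioequivalence criterion)."""
        hits = 0
        for _ in range(n_sim):
            a = rng.normal(0.0, sd, n)
            b = rng.normal(effect, sd, n)
            hits += ttest_ind(a, b).pvalue < alpha
        return hits / n_sim

    # Smallest n per arm reaching 90% power under the assumed inputs.
    for n in range(20, 201, 10):
        if simulated_power(n, effect=0.5, sd=1.0) >= 0.90:
            print(f"estimated sample size: {n} per arm")
            break
    ```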

  5. Particle morphology characterization and manipulation in biomass slurries and the effect on rheological properties and enzymatic conversion.

    PubMed

    Dibble, Clare J; Shatova, Tatyana A; Jorgenson, Jennie L; Stickel, Jonathan J

    2011-01-01

    An improved understanding of how particle size distribution relates to enzymatic hydrolysis performance and rheological properties could enable enhanced biochemical conversion of lignocellulosic feedstocks. Particle size distribution can change as a result of either physical or chemical manipulation of a biomass sample. In this study, we employed image processing techniques to measure slurry particle size distribution and validated the results by showing that they are comparable to those from laser diffraction and sieving. Particle size and chemical changes of biomass slurries were manipulated independently and the resulting yield stress and enzymatic digestibility of slurries with different size distributions were measured. Interestingly, reducing particle size by mechanical means from about 1 mm to 100 μm did not reduce the yield stress of the slurries over a broad range of concentrations or increase the digestibility of the biomass over the range of size reduction studied here. This is in stark contrast to the increase in digestibility and decrease in yield stress when particle size is reduced by dilute-acid pretreatment over similar size ranges. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitriana, Karina Nur, E-mail: nurfitriana.karina@gmail.com; Hafizah, Mas Ayu Elita, E-mail: kemasayu@yahoo.com; Manaf, Azwar, E-mail: azwar@ui.ac.id

    Synthesis of single-phased SrO·6Fe₂₋ₓMnₓ/₂Tiₓ/₂O₃ (x = 0.0, 0.5, and 1.0) nanoparticles was carried out by mechanical alloying assisted by an ultrasonic destruction process. Monocrystalline particles were obtained for x = 0 when treated with ultrasonic destruction at a transducer amplitude of 55 μm. For x = 0, the average particle size and crystallite size were reduced significantly, from 723 nm to ∼87 nm. The particle size was not significantly reduced for x = 0.5 and x = 1.0. On the other hand, substitution of Ti for some of the Fe evidently had a major effect on particle size reduction, as shown by the larger particle and crystallite sizes at x = 1.0 compared with x = 0.5, at a ratio of approximately 2:1 (in nm). In addition, a higher transducer power was required for modifying Strontium Hexaferrite (SHF) with more Ti and a larger pre-destruction sample size. It is concluded that the transducer amplitude in the ultrasonic destruction process and the element of ionic substitution affect both the average particle size and the crystallite size of SHF.

  7. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
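
    A toy Monte Carlo version of this accuracy-precision trade-off: a single release must divide out an assumed background survival-and-detection term (biased when that assumption is wrong), while a paired release estimates it from the control group at the cost of extra variance. All rates below are invented, and the binomial model is far simpler than the mark-recapture models the paper simulates.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    S_dam = 0.90                 # true dam passage survival
    S_bg, p_det = 0.80, 0.85     # true background survival and detection
    assumed_bg = 0.72            # analyst's assumed S_bg * p_det (truth: 0.68)

    def trial(n_rel):
        # Treatment fish pass the dam, then face background loss and detection.
        det_t = rng.binomial(n_rel, S_dam * S_bg * p_det)
        # Control fish are released below the dam.
        det_c = rng.binomial(n_rel, S_bg * p_det)
        single = (det_t / n_rel) / assumed_bg               # needs assumed_bg
        paired = (det_t / n_rel) / max(det_c / n_rel, 1e-9)  # estimates it
        return single, paired

    for n in (100, 500, 5000):
        est = np.array([trial(n) for _ in range(20_000)])
        rmse = np.sqrt(((est - S_dam) ** 2).mean(axis=0))
        print(f"n={n}: RMSE single={rmse[0]:.3f}, paired={rmse[1]:.3f}")
    # Paired release wins at large n; at small n its extra variance can
    # outweigh the single release's bias.
    ```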

  8. A LDR-PCR approach for multiplex polymorphisms genotyping of severely degraded DNA with fragment sizes <100 bp.

    PubMed

    Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng

    2009-11-01

    Reducing amplicon sizes has become a major strategy for analyzing the degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat-polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. Here, we present a multiplex typing method that couples the ligase detection reaction with PCR and can be used to identify single nucleotide polymorphisms and small-scale insertions/deletions in severely fragmented DNA. The method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. Four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded DNA sample with fragment sizes <100 bp. Our results showed clear allelic discrimination of single and multiple loci, suggesting that this method may aid in the analysis of extremely degraded samples in which allelic drop-out of larger fragments is observed.

  9. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).
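
    A toy version of the text-analytic idea, assuming simple regular expressions; the authors' tool is far more sophisticated, and patterns this crude would miss many phrasings (which is exactly why their randomization accuracy lagged behind blinding).

    ```python
    import re

    # Hypothetical minimal patterns, for illustration only.
    PATTERNS = {
        "randomization": re.compile(r"\brandom(ly|i[sz]ed|i[sz]ation)\b", re.I),
        "blinding": re.compile(r"\bblind(ed|ing)?\b|\bmasked\b", re.I),
        "sample size calculation": re.compile(
            r"sample size (was )?(calculat|estimat|determin)", re.I),
    }

    def risk_of_bias_report(full_text):
        """Flag which risk-of-bias measures a paper appears to report."""
        return {item: bool(rx.search(full_text)) for item, rx in PATTERNS.items()}

    methods = ("Animals were randomly allocated to MCAO or sham surgery. "
               "Outcome assessment was performed by a blinded investigator.")
    print(risk_of_bias_report(methods))
    # {'randomization': True, 'blinding': True, 'sample size calculation': False}
    ```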

  10. An elutriation apparatus for assessing settleability of combined sewer overflows (CSOs).

    PubMed

    Marsalek, J; Krishnappan, B G; Exall, K; Rochfort, Q; Stephens, R P

    2006-01-01

    An elutriation apparatus was proposed for testing the settleability of combined sewer overflows (CSOs) and applied to 12 CSO samples. In this apparatus, solids settling is measured under dynamic conditions created by flow through a series of settling chambers of varying diameters and upward flow velocities. Such a procedure reproduces turbulent settling in CSO tanks better than conventional settling columns do, and it facilitates testing coagulant additions under dynamic conditions. Limitations include the relatively large size of the apparatus and of the required samples (60 L), and inadequate handling of floatables. Settleability results obtained with the elutriation apparatus and a conventional settling column indicate large inter-event variation in CSO settleability. Under such circumstances, settling tanks need to be designed for "average" conditions and, within some limits, the differences in test results produced by various settleability testing apparatuses and procedures may be acceptable. Further development of the elutriation apparatus is under way, focusing on reducing flow velocities in the tubing connecting the settling chambers and on reducing the number of settling chambers employed. The first measure would reduce the risk of floc breakage in the connecting tubing and the second would reduce the required sample size.

  11. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913

  12. Sample Size Requirements and Study Duration for Testing Main Effects and Interactions in Completely Randomized Factorial Designs When Time to Event is the Outcome

    PubMed Central

    Moser, Barry Kurt; Halabi, Susan

    2013-01-01

    In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
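
    For a single (main) effect, the standard Schoenfeld approximation conveys the relationships described above between effect size, power, and the required number of events; the sketch below assumes that formula as a generic stand-in, not the paper's matrix formulation for arbitrary factorial arrangements.

    ```python
    import math
    from scipy.stats import norm

    def required_events(hr, alpha=0.05, power=0.9, alloc=0.5):
        """Schoenfeld approximation: events needed to detect hazard ratio hr
        for a two-level effect with the given allocation fraction."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

    def required_n(hr, event_prob, **kw):
        """Convert events to patients given P(event during follow-up); under
        an exponential model this is 1 - exp(-rate * follow_up)."""
        return required_events(hr, **kw) / event_prob

    print(round(required_events(0.75)))             # about 508 events
    print(round(required_n(0.75, event_prob=0.6)))  # about 846 patients
    ```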

  13. Estimating population size for Capercaillie (Tetrao urogallus L.) with spatial capture-recapture models based on genotypes from one field sample

    USGS Publications Warehouse

    Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy

    2015-01-01

    We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total population size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.

  14. Alternative Models for Small Samples in Psychological Research: Applying Linear Mixed Effects Models and Generalized Estimating Equations to Repeated Measures Data

    ERIC Educational Resources Information Center

    Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio

    2016-01-01

    Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…

  15. The quantitative impact of the mesopore size on the mass transfer mechanism of the new 1.9μm fully porous Titan-C18 particles. I: analysis of small molecules.

    PubMed

    Gritti, Fabrice; Guiochon, Georges

    2015-03-06

    Previous data have shown that columns packed with the new 1.9 μm fully porous Titan-C18 particles could deliver a minimum reduced plate height as small as 1.7. Additionally, the reduction of the mesopore size after C18 derivatization and the subsequent restriction of sample diffusivity across the Titan-C18 particles were found responsible for the unusually small value of the experimental optimum reduced velocity (5 versus 10 for conventional particles) and for the large values of the average reduced solid-liquid mass transfer resistance coefficients (0.032 versus 0.016) measured for a series of seven n-alkanophenones. The improvements in column efficiency made by increasing the average mesopore size of the Titan silica from 80 to 120 Å are investigated from a quantitative viewpoint, based on accurate measurements of the reduced coefficients (longitudinal diffusion, trans-particle mass transfer resistance, and eddy diffusion) and of the intra-particle diffusivity and the pore and surface diffusion for the same series of n-alkanophenone compounds. The experimental results reveal an increase (from 0% to 30%) of the longitudinal diffusion coefficients for the same sample concentration distribution (from 0.25 to 4) between the particle volume and the external volume of the column, a 40% increase of the intra-particle diffusivity for the same sample distribution (from 1 to 7) between the particle skeleton volume and the bulk phase, and a 15-30% decrease of the solid-liquid mass transfer coefficient for the n-alkanophenone compounds. Pore and surface diffusion are increased by 60% and 20%, respectively. The eddy dispersion term and the maximum column efficiency (295,000 plates/m) remain virtually unchanged. The rate of increase of the total plate height with increasing chromatographic speed is reduced by 20% and is mostly controlled (75% and 70% for the 80 and 120 Å pore sizes, respectively) by the flow-rate dependence of the eddy dispersion term. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Bone Marrow Stem Cells and Ear Framework Reconstruction.

    PubMed

    Karimi, Hamid; Emami, Seyed-Abolhassan; Olad-Gubad, Mohammad-Kazem

    2016-11-01

    Repair of total human ear loss or congenital absence of the ear is one of the challenging issues in plastic and reconstructive surgery. The aim of the present study was 3D reconstruction of the human ear using cadaveric ear cartilage seeded with human mesenchymal stem cells. We used cadaveric ear cartilage with preserved perichondrium. The samples were divided into 2 groups: group A (cartilage alone) and group B (cartilage seeded with a mixture of fibrin powder and mesenchymal stem cells [1,000,000 cells/cm]), implanted in the backs of 10 athymic rats. After 12 weeks, the cartilages were removed and shape, size, weight, flexibility, and chondrocyte viability were evaluated. P < 0.05 was considered significant. In group A, the size and weight of the cartilages were clearly reduced (P < 0.05), and shape and flexibility (torsion of the cartilages in clockwise and counterclockwise directions) were also significantly reduced. After staining with hematoxylin and eosin and microscopic examination, very few live chondrocytes were found in group A. In group B, the size and weight of the samples were unchanged, the shape and flexibility of the samples were well maintained, and microscopic examination of the cartilage samples revealed many live chondrocytes (15-20 chondrocytes in each microscopic field). In the samples with human stem cells, all variables (size, shape, weight, and flexibility) were significantly maintained and abundant live chondrocytes were found on microscopic examination. This method may be used for reconstruction of full auricular defects in humans.

  17. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI affects the number of patients needed to reach statistical power, relative to MI size alone and to levels of biochemical markers, in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated to calculate the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI, compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak), in order to reach a power of 90%. If the average CMR scan day differed by 1 day between treatment and control arms, the sample size would need to be increased by 54% (77 vs 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  18. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    PubMed

    Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello

    2012-01-01

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by an average of up to 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
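
    The operating characteristics of a three-class rule follow directly from binomial tail probabilities. A minimal sketch with hypothetical decision rules is shown below; the paper derives its rules formally and adds semi-curtailed and curtailed sampling on top.

    ```python
    from scipy.stats import binom

    def mc_lqas_probs(n, d_low, d_high, p):
        """Classification probabilities for a three-class LQAS rule:
        <= d_low positives -> low, > d_high -> high, otherwise moderate.
        The decision rules d_low and d_high here are illustrative only."""
        p_low = binom.cdf(d_low, n, p)
        p_high = binom.sf(d_high, n, p)
        return p_low, 1 - p_low - p_high, p_high

    n, d_low, d_high = 15, 2, 7
    for p in (0.05, 0.10, 0.30, 0.50, 0.70):
        lo, mid, hi = mc_lqas_probs(n, d_low, d_high, p)
        print(f"true prevalence {p:.2f}: P(low)={lo:.2f} "
              f"P(moderate)={mid:.2f} P(high)={hi:.2f}")
    ```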

  19. Polystyrene-Divinylbenzene-Based Adsorbents Reduce Endothelial Activation and Monocyte Adhesion Under Septic Conditions in a Pore Size-Dependent Manner.

    PubMed

    Eichhorn, Tanja; Rauscher, Sabine; Hammer, Caroline; Gröger, Marion; Fischer, Michael B; Weber, Viktoria

    2016-10-01

    Endothelial activation with excessive recruitment and adhesion of immune cells plays a central role in the progression of sepsis. We established a microfluidic system to study the activation of human umbilical vein endothelial cells by conditioned medium containing plasma from lipopolysaccharide-stimulated whole blood or from septic blood and to investigate the effect of adsorption of inflammatory mediators on endothelial activation. Treatment of stimulated whole blood with polystyrene-divinylbenzene-based cytokine adsorbents (average pore sizes 15 or 30 nm) prior to passage over the endothelial layer resulted in significantly reduced endothelial cytokine and chemokine release, plasminogen activator inhibitor-1 secretion, adhesion molecule expression, and in diminished monocyte adhesion. Plasma samples from sepsis patients differed substantially in their potential to induce endothelial activation and monocyte adhesion despite their almost identical interleukin-6 and tumor necrosis factor-alpha levels. Pre-incubation of the plasma samples with a polystyrene-divinylbenzene-based adsorbent (30 nm average pore size) reduced endothelial intercellular adhesion molecule-1 expression to baseline levels, resulting in significantly diminished monocyte adhesion. Our data support the potential of porous polystyrene-divinylbenzene-based adsorbents to reduce endothelial activation under septic conditions by depletion of a broad range of inflammatory mediators.

  20. Increased accuracy of batch fecundity estimates using oocyte stage ratios in Plectropomus leopardus.

    PubMed

    Carter, A B; Williams, A J; Russ, G R

    2009-08-01

    Using the ratio of the number of migratory nuclei to hydrated oocytes to estimate batch fecundity of common coral trout Plectropomus leopardus increases the time over which samples can be collected and, therefore, increases the sample size available and reduces biases in batch fecundity estimates.

  1. Performance of Identifiler Direct and PowerPlex 16 HS on the Applied Biosystems 3730 DNA Analyzer for processing biological samples archived on FTA cards.

    PubMed

    Laurin, Nancy; DeMoors, Anick; Frégeau, Chantal

    2012-09-01

    Direct amplification of STR loci from biological samples collected on FTA cards without prior DNA purification was evaluated using Identifiler Direct and PowerPlex 16 HS in conjunction with the use of a high throughput Applied Biosystems 3730 DNA Analyzer. In order to reduce the overall sample processing cost, reduced PCR volumes combined with various FTA disk sizes were tested. Optimized STR profiles were obtained using a 0.53 mm disk size in 10 μL PCR volume for both STR systems. These protocols proved effective in generating high quality profiles on the 3730 DNA Analyzer from both blood and buccal FTA samples. Reproducibility, concordance, robustness, sample stability and profile quality were assessed using a collection of blood and buccal samples on FTA cards from volunteer donors as well as from convicted offenders. The new developed protocols offer enhanced throughput capability and cost effectiveness without compromising the robustness and quality of the STR profiles obtained. These results support the use of these protocols for processing convicted offender samples submitted to the National DNA Data Bank of Canada. Similar protocols could be applied to the processing of casework reference samples or in paternity or family relationship testing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. In vitro inflammatory and cytotoxic effects of size-segregated particulate samples collected during long-range transport of wildfire smoke to Helsinki.

    PubMed

    Jalava, Pasi I; Salonen, Raimo O; Hälinen, Arja I; Penttinen, Piia; Pennanen, Arto S; Sillanpää, Markus; Sandell, Erik; Hillamo, Risto; Hirvonen, Maija-Riitta

    2006-09-15

    The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized to identify different emission sources. The particulate mass concentration in the accumulation size range (PM1-0.2) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in the collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM10-2.5), intermodal size range (PM2.5-1), PM1-0.2, and ultrafine (PM0.2) particles to induce cytokine production (TNFα, IL-6, MIP-2) decreased with decreasing particle size, but size range had a much smaller impact on induced nitric oxide (NO) production, cytotoxicity, and apoptosis. The aerosol particles collected during LRT episodes had substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air, due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.

  3. Particle size and chemical control of heavy metals in bed sediment from the Rouge River, southeast Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, K.S.; Cauvet, D.; Lybeer, M.

    1999-04-01

    Anthropogenic activities related to 100 years of industrialization in the metropolitan Detroit area have significantly enriched the bed sediment of the lower reaches of the Rouge River in Cr, Cu, Fe, Ni, Pb, and Zn. These enriched elements, which may represent a threat to biota, are predominantly present in sequentially extracted reducible and oxidizable chemical phases, with small contributions from residual phases. In size-fractionated samples, trace metal concentrations generally increase with decreasing particle size, with the greatest contribution to this increase from the oxidizable phase. Experimental results obtained on replicate samples of river sediment demonstrate that the accuracy of the sequential extraction procedure, evaluated by comparing the sums of the three individual fractions, is generally better than 10%. Oxidizable and reducible phases therefore constitute important sources of potentially available heavy metals that need to be explicitly considered when evaluating sediment and water quality impacts on biota.

  4. Reduced amygdalar and hippocampal size in adults with generalized social phobia.

    PubMed

    Irle, Eva; Ruhleder, Mirjana; Lange, Claudia; Seidler-Brandler, Ulrich; Salzer, Simone; Dechent, Peter; Weniger, Godehard; Leibing, Eric; Leichsenring, Falk

    2010-03-01

    Structural and functional brain imaging studies suggest abnormalities of the amygdala and hippocampus in posttraumatic stress disorder and major depressive disorder. However, structural brain imaging studies in social phobia are lacking. In total, 24 patients with generalized social phobia (GSP) and 24 healthy controls underwent 3-dimensional structural magnetic resonance imaging of the amygdala and hippocampus and a clinical investigation. Compared with controls, GSP patients had significantly reduced amygdalar (13%) and hippocampal (8%) size. The reduction in the size of the amygdala was statistically significant for men but not women. Smaller right-sided hippocampal volumes of GSP patients were significantly related to stronger disorder severity. Our sample included only patients with the generalized subtype of social phobia. Because we excluded patients with comorbid depression, our sample may not be representative. We report for the first time volumetric results in patients with GSP. Future assessment of these patients will clarify whether these changes are reversed after successful treatment and whether they predict treatment response.

  5. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For Fourier-transform CGHs with dependent sampling points, the memory required for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK x NK sample points is synthesized from K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  6. Synthesis of mesoscale, crumpled, reduced graphene oxide roses by water-in-oil emulsion approach

    NASA Astrophysics Data System (ADS)

    Sharma, Shruti; Pham, Viet H.; Boscoboinik, Jorge A.; Camino, Fernando; Dickerson, James H.; Tannenbaum, Rina

    2018-05-01

    Mesoscale crumpled graphene oxide roses (GO roses) were synthesized by using colloidal graphene oxide (GO) variants as precursors in a hybrid emulsification-rapid evaporation approach. This process produced rose-like, spherical, reduced mesostructures of colloidal GO sheets, with corrugated surfaces and particle sizes tunable in the range of ∼800 nm to 15 μm. Excellent reproducibility of the particle size distribution is shown for each selected homogenizer rotor speed among different sample batches. The morphology and chemical structure of the produced GO roses were investigated using electron microscopy and spectroscopy techniques. The proposed synthesis route provides control over particle size, morphology and chemical properties of the synthesized GO roses.

  7. USE OF EXPERT RATINGS AS SAMPLING STRATA FOR A MORE COST-EFFECTIVE PROBABILITY SAMPLE OF A RARE POPULATION

    PubMed Central

    McCaffrey, Daniel; Perlman, Judith; Marshall, Grant N.; Hambarsoomians, Katrin

    2010-01-01

    We consider situations in which externally observable characteristics allow experts to quickly categorize individual households as likely or unlikely to contain a member of a rare target population. This classification can form the basis of disproportionate stratified sampling such that households classified as “unlikely” are sampled at a lower rate than those classified as “likely,” thereby reducing screening costs. Design weights account for this approach and allow unbiased estimates for the target population. We demonstrate that with sensitivity and specificity of expert classification at least 70%, and ideally at least 80%, our approach can economically increase effective sample size for a rare population. We develop heuristics for implementing this approach and demonstrate that sensitivity drives design effects and screening costs whereas specificity only drives the latter. We demonstrate that the potential gains from this approach increase as the target population becomes rarer. We further show that for most applications, unlikely strata should be sampled at 1/6 to 1/2 the rate of likely strata. This approach was applied to a survey of Cambodian immigrants in which the 82% of households rated “unlikely” were sampled at 1/4 the rate of “likely” households, reducing screening from 9.4 to 4.0 approaches per complete. Sensitivity and specificity were 86% and 91%, respectively. Weighted estimation had a design effect of 1.26, so screening costs per effective sample size were reduced by 47%. We also note that in this instance, expert classification appeared to be uncorrelated with survey outcomes of interest among eligibles. PMID:20936050
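
    A minimal sketch of the screening-cost arithmetic behind such a design, assuming a two-stratum population, expert classification with given sensitivity and specificity, and the Kish approximation for the design effect from unequal weights; the function name and all numeric inputs are illustrative, not taken from the paper.

```python
# Sketch: screening approaches per completed interview and design effect
# for two-stratum disproportionate sampling. All inputs are illustrative.
def strata_design(prevalence, sensitivity, specificity, rate_ratio):
    """rate_ratio: sampling rate of 'unlikely' households relative to 'likely'."""
    # share of households rated 'likely' / 'unlikely' by the experts
    p_likely = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    p_unlikely = 1 - p_likely
    # eligibility rate within each stratum (Bayes' rule)
    elig_likely = prevalence * sensitivity / p_likely
    elig_unlikely = prevalence * (1 - sensitivity) / p_unlikely
    # expected screens per unit of sampling effort in each stratum
    n_likely, n_unlikely = p_likely, p_unlikely * rate_ratio
    completes = n_likely * elig_likely + n_unlikely * elig_unlikely
    screens_per_complete = (n_likely + n_unlikely) / completes
    # Kish design effect from unequal weights: E[w^2] / E[w]^2 over completes
    counts = [n_likely * elig_likely, n_unlikely * elig_unlikely]
    weights = [1.0, 1.0 / rate_ratio]
    wbar = sum(c * w for c, w in zip(counts, weights)) / sum(counts)
    w2bar = sum(c * w * w for c, w in zip(counts, weights)) / sum(counts)
    return screens_per_complete, w2bar / (wbar * wbar)

print(strata_design(prevalence=0.10, sensitivity=0.86,
                    specificity=0.91, rate_ratio=0.25))
```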

  8. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
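
    The two-stage selection procedure can be sketched compactly: expand each covariate in a B-spline basis, fit a group Lasso by proximal gradient descent, then refit with adaptive group weights taken from the initial estimate. This is a hedged illustration under assumed tuning values (knots, penalty levels) and simulated data, using scikit-learn's SplineTransformer for the basis; it is not the authors' implementation.

```python
# Adaptive group Lasso on B-spline expansions (illustrative sketch).
import numpy as np
from sklearn.preprocessing import SplineTransformer

def group_lasso(Z, y, groups, lam, weights=None, n_iter=2000):
    n, p = Z.shape
    weights = np.ones(len(groups)) if weights is None else weights
    lr = n / (np.linalg.norm(Z, 2) ** 2)      # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        beta -= lr * (Z.T @ (Z @ beta - y) / n)   # gradient step
        for g, idx in enumerate(groups):          # group soft-thresholding
            nrm = np.linalg.norm(beta[idx])
            beta[idx] *= max(0.0, 1 - lr * lam * weights[g] / (nrm + 1e-12))
    return beta

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.uniform(size=(n, d))                  # only components 0 and 1 matter
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

spl = SplineTransformer(n_knots=6, degree=3)
Z = spl.fit_transform(X)                      # d groups of basis columns
k = Z.shape[1] // d
groups = [np.arange(j * k, (j + 1) * k) for j in range(d)]
y_c, Z_c = y - y.mean(), Z - Z.mean(axis=0)

beta0 = group_lasso(Z_c, y_c, groups, lam=0.05)          # stage 1: group Lasso
norms = np.array([np.linalg.norm(beta0[idx]) for idx in groups])
beta1 = group_lasso(Z_c, y_c, groups, lam=0.02,          # stage 2: adaptive
                    weights=1.0 / (norms + 1e-8))
selected = [j for j, idx in enumerate(groups)
            if np.linalg.norm(beta1[idx]) > 1e-6]
print("selected components:", selected)
```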

  9. Hydroxyapatite coatings containing Zn and Si on Ti-6Al-4V alloy by plasma electrolytic oxidation

    NASA Astrophysics Data System (ADS)

    Hwang, In-Jo; Choe, Han-Cheol

    2018-02-01

    In this study, hydroxyapatite coatings containing Zn and Si were formed on Ti-6Al-4V alloy by plasma electrolytic oxidation and characterized using various experimental instruments. The pore size depended on the electrolyte concentration, and the particle size and the number of pores increased in both the surface and pore regions. The pore size of the Zn/Si samples was larger than that of the Zn-only samples. The maximum pore size decreased and the minimum pore size increased up to the 10Zn/Si composition, indicating that Zn and Si affect the formation of pore shapes. As the Zn ion concentration increased, the particle size tended to increase and the number of particles on the surface decreased, whereas both the size and the number of particles in the pores increased. Zn was detected mainly in the pores, and Si mainly on the surface. The crystallite size of anatase increased with Zn ion concentration, whereas adding Si decreased it.

  10. Sampling strategies for radio-tracking coyotes

    USGS Publications Warehouse

    Smith, G.J.; Cary, J.R.; Rongstad, O.J.

    1981-01-01

    Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.
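
    A toy illustration of such an asymptote, computing minimum convex polygon (MCP) area against the number of locations for synthetic bivariate-normal fixes (not coyote data):

```python
# MCP home-range area as a function of the number of location fixes.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)
locs = rng.normal(scale=1.5, size=(120, 2))   # synthetic location fixes

for m in (5, 10, 20, 40, 80, 120):
    area = ConvexHull(locs[:m]).volume        # in 2-D, .volume is the area
    print(f"{m:3d} locations -> MCP area {area:.2f}")
```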

  11. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943

  12. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.

  13. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short-Acting β-Agonist Formulations.

    PubMed

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C

    2018-04-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
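
    The Monte Carlo logic can be sketched compactly for a generic equivalence analysis: simulate many trials at a candidate sample size and find the smallest size whose empirical power reaches the target. The paper's dose-scale pharmacodynamic model is more elaborate; the effect, variability and equivalence bounds below are illustrative assumptions.

```python
# Monte Carlo power of a two-one-sided-tests (TOST) equivalence analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n, true_log_ratio=0.0, sd=0.35,
          bounds=(np.log(0.8), np.log(1.25)), n_sim=4000, alpha=0.05):
    t = stats.t.ppf(1 - alpha, df=n - 1)
    est = rng.normal(true_log_ratio, sd / np.sqrt(n), size=n_sim)
    s = sd * np.sqrt(rng.chisquare(n - 1, size=n_sim) / (n - 1))
    half = t * s / np.sqrt(n)                 # one-sided 95% CI half-width
    return np.mean((est - half > bounds[0]) & (est + half < bounds[1]))

n = 10
while power(n) < 0.90:                        # smallest n with 90% power
    n += 2
print("estimated sample size per arm:", n)
```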

  14. Effects of rare earth ionic doping on microstructures and electrical properties of CaCu3Ti4O12 ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Renzhong; Department of Technology and Physics, Zhengzhou University of Light Industry, Zhengzhou 450002; Chen, Zhenping, E-mail: xrzbotao@163.com

    2015-06-15

    Graphical abstract: The dielectric constant decreases monotonically with reduced RE doping ion radius and is more frequency independent compared with that of the pure CCTO sample. - Highlights: • The mean grain sizes decrease monotonically with reduced RE doping ionic radius. • Doping gives rise to the monotonic decrease of ϵr with reduced RE ionic radius. • The nonlinear coefficient and breakdown field increase with RE ionic doping. • α of all the samples is associated with the potential barrier width rather than Φb. - Abstract: Ca(1-x)RxCu3Ti4O12 (R = La, Nd, Eu, Gd, Er; x = 0 and 0.005) ceramics were prepared by the conventional solid-state method. The influences of rare earth (RE) ion doping on the microstructure, dielectric and electrical properties of CaCu3Ti4O12 (CCTO) ceramics were investigated systematically. Single-phase formation is confirmed by XRD analyses. The mean grain size decreases monotonically with reduced RE ion radius. The EDS results reveal that RE ionic doping reduces Cu-rich phase segregation at the grain boundaries (GBs). Doping gives rise to the monotonic decrease of dielectric constant with reduced RE ionic radius but significantly improves stability with frequency. The lower dielectric loss of doped samples is obtained due to the increase of GB resistance. In addition, the nonlinear coefficient and breakdown field increase with RE ionic doping. Both the fine grains and the enhancement of the potential barrier at GBs are responsible for the improvement of the nonlinear current–voltage properties in doped CCTO samples.

  15. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors

    PubMed Central

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  16. Novel Insights in the Fecal Egg Count Reduction Test for Monitoring Drug Efficacy against Soil-Transmitted Helminths in Large-Scale Treatment Programs

    PubMed Central

    Levecke, Bruno; Speybroeck, Niko; Dobson, Robert J.; Vercruysse, Jozef; Charlier, Johannes

    2011-01-01

    Background The fecal egg count reduction test (FECRT) is recommended to monitor drug efficacy against soil-transmitted helminths (STHs) in public health. However, the impact of factors inherent to study design (sample size and detection limit of the fecal egg count (FEC) method) and host-parasite interactions (mean baseline FEC and aggregation of FEC across host population) on the reliability of FECRT is poorly understood. Methodology/Principal Findings A simulation study was performed in which FECRT was assessed under varying conditions of the aforementioned factors. Classification trees were built to explore critical values for these factors required to obtain conclusive FECRT results. The outcome of this analysis was subsequently validated on five efficacy trials across Africa, Asia, and Latin America. Unsatisfactory (<85.0%) sensitivity and specificity results to detect reduced efficacy were found if sample sizes were small (<10) or if sample sizes were moderate (10–49) combined with highly aggregated FEC (k<0.25). FECRT remained inconclusive under any evaluated condition for drug efficacies ranging from 87.5% to 92.5% for a reduced-efficacy-threshold of 90% and from 92.5% to 97.5% for a threshold of 95%. The most discriminatory study design required 200 subjects independent of STH status (including subjects who are not excreting eggs). For this sample size, the detection limit of the FEC method and the level of aggregation of the FEC did not affect the interpretation of the FECRT. Only for a threshold of 90%, mean baseline FEC <150 eggs per gram of stool led to a reduced discriminatory power. Conclusions/Significance This study confirms that the interpretation of FECRT is affected by a complex interplay of factors inherent to both study design and host-parasite interactions. The results also highlight that revision of the current World Health Organization guidelines to monitor drug efficacy is indicated. We, therefore, propose novel guidelines to support future monitoring programs. PMID:22180801
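
    The simulation machinery can be sketched as follows, assuming negative binomial baseline FECs with aggregation parameter k (smaller k means stronger aggregation), binomial thinning for drug action, and a McMaster-style detection limit; all parameter values are illustrative assumptions.

```python
# FECRT simulation under aggregated egg counts (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)

def fecrt_sim(n, mean_fec=300.0, k=0.25, efficacy=0.95,
              detection_limit=24, n_sim=2000):
    """Observed FECR (%) across simulated trials of n subjects."""
    p = k / (k + mean_fec)                    # NB parametrized by mean and k
    pre = rng.negative_binomial(k, p, size=(n_sim, n))
    post = rng.binomial(pre, 1.0 - efficacy)  # each egg survives w.p. 1-eff
    dl = detection_limit                      # counts in multiples of dl
    pre_obs = (pre // dl) * dl
    post_obs = (post // dl) * dl
    ok = pre_obs.mean(axis=1) > 0
    return 100 * (1 - post_obs[ok].mean(axis=1) / pre_obs[ok].mean(axis=1))

# true efficacy is 95%, so trials falling below a 90% threshold are errors
for n in (10, 50, 200):
    fecr = fecrt_sim(n)
    print(f"n={n:3d}: {np.mean(fecr < 90) * 100:.1f}% of trials below 90%")
```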

  17. Kinetic studies on the reduction of iron ore nuggets by devolatilization of lean-grade coal

    NASA Astrophysics Data System (ADS)

    Biswas, Chanchal; Gupta, Prithviraj; De, Arnab; Chaudhuri, Mahua Ghosh; Dey, Rajib

    2016-12-01

    An isothermal kinetic study of a novel technique for reducing agglomerated iron ore by volatiles released by pyrolysis of lean-grade non-coking coal was carried out at temperatures from 1050 to 1200°C for 10-120 min. The reduced samples were characterized by scanning electron microscopy, energy-dispersive X-ray spectroscopy, and chemical analysis. A good degree of metallization and reduction was achieved. Gas diffusion through the solid was identified as the reaction-rate-controlling resistance; however, during the initial period, particularly at lower temperatures, resistance to interfacial chemical reaction was also significant, though not dominant. The apparent rate constant was observed to increase marginally with decreasing size of the particles constituting the nuggets. The apparent activation energy of reduction was estimated to be in the range from 49.640 to 51.220 kJ/mol and was not observed to be affected by the particle size. The sulfur and carbon contents in the reduced samples were also determined.
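
    For context, an apparent activation energy like this is extracted from an Arrhenius plot, a linear fit of ln(k) against 1/T; the rate constants below are placeholders chosen only so the fitted slope lands near the reported range, not the paper's data.

```python
# Arrhenius fit: Ea from the slope of ln(k) vs 1/T.
import numpy as np

R = 8.314                                            # J/(mol*K)
T = np.array([1050, 1100, 1150, 1200]) + 273.15      # K
k_app = np.array([2.1e-3, 2.5e-3, 2.9e-3, 3.3e-3])   # illustrative placeholders

slope, intercept = np.polyfit(1 / T, np.log(k_app), 1)
Ea = -slope * R / 1000                               # kJ/mol
print(f"apparent activation energy ~ {Ea:.1f} kJ/mol")
```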

  18. Preparation and characterization of Pt loaded WO3 films suitable for gas sensing applications

    NASA Astrophysics Data System (ADS)

    Jolly Bose, R.; Illyasukutty, Navas; Tan, K. S.; Rawat, R. S.; Vadakke Matham, Murukesan; Kohler, Heinz; Mahadevan Pillai, V. P.

    2018-05-01

    This paper presents the preparation of nanostructured platinum (Pt) loaded tungsten oxide (WO3) thin films by the radio frequency (RF) magnetron sputtering technique. Although Pt loading does not produce any phase change in the WO3 lattice, it degrades the crystalline quality and induces defects in the WO3 films. Pt loading has a profound impact on the structural and optical properties of the films, reducing the particle size, lattice strain and optical band gap energy. A nanoporous film with reduced particle size, which is crucial for gas sensors, is obtained for the 5 wt% Pt loaded WO3 sample. Hence the sensing response of the 5 wt% Pt loaded sample was tested towards carbon monoxide (CO) gas alongside a pure WO3 sample. The sensing response of the Pt loaded sample is nearly 15 times higher than that of the pure WO3 sample in a non-humid ambience at an operating temperature of 200 °C. This indicates the suitability of the prepared films for gas sensors. The sensing response of the pure WO3 film depends on humidity, while the Pt loaded WO3 film shows a stable response in both humid and non-humid ambiences.

  19. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    DOE PAGES

    Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...

    2017-04-07

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10^12 photons per µm^2 per pulse. The full-width at half-maximum of the focus was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.

  20. Capillary Absorption Spectrometer for 13C Isotopic Composition of Pico to Subpico Molar Sample Quantities

    NASA Astrophysics Data System (ADS)

    Moran, J.; Kelly, J.; Sams, R.; Newburn, M.; Kreuzer, H.; Alexander, M.

    2011-12-01

    Quick incorporation of IR spectroscopy based isotope measurements into cutting-edge research in biogeochemical cycling attests to the advantages of a spectroscopic versus mass spectrometric method for making some 13C measurements. The simple principles of optical spectroscopy allow field portability and provide a more robust general platform for isotope measurements. We present results with a new capillary absorption spectrometer (CAS) with the capability of reducing the sample size required for high precision isotopic measurements to the picomolar level and potentially the sub-picomolar level. This work was motivated by the minute sample size requirements for laser ablation isotopic studies of carbon cycling in microbial communities but has potential to be a valuable tool in other areas of biological and geological research. The CAS instrument utilizes a capillary waveguide as a sample chamber for interrogating CO2 via near-IR laser absorption spectroscopy. The capillary's small volume (~0.5 mL), combined with propagation and interaction of the laser mode with the entire sample, reduces sample size requirements to a fraction of that accessible with commercially available IR absorption instruments, including those with multi-pass or ring-down cavity systems. Using a continuous quantum cascade laser system to probe nearly adjacent rovibrational transitions of different isotopologues of CO2 near 2307 cm-1 permits sample measurement at low analyte pressures (as low as 2 Torr) for further sensitivity improvement. A novel method to reduce cw-fringing noise in the hollow waveguide is presented, which allows weak absorbance features to be studied at the few-ppm level after averaging 1,000 scans in 10 seconds. Detection limits down to 20 picomoles have been observed, a concentration of approximately 400 ppm at 2 Torr in the waveguide, with precision and accuracy at or better than 1%. Improvements in detection and signal averaging electronics and in laser power and mode quality are anticipated to reduce the required sample size to 100-200 femtomoles of carbon. We report the application of the CAS system to a Laser Ablation-Catalytic-Combustion (LA-CC) micro-sampler system for selectively harvesting detailed sections of a solid surface for 13C analysis. This technique results in a three-order-of-magnitude sensitivity improvement for our isotope measurement system compared to typical IRMS, providing new opportunities for making detailed investigations into wide ranges of microbial, physical, and chemical systems. The CAS is interfaced directly to the LA-CC system currently operating at a 50 μm spatial resolution. We demonstrate that particulates produced by a Nd:YAG laser (λ=266 nm) are isotopically homogeneous with the parent material as measured by both IRMS and the CAS system. An improved laser ablation system operating at 193 nm with a spatial resolution of 2 microns or better is under development, which will demonstrate the utility of the CAS system for sample sizes too low for IRMS. The improved sensitivities and optimized spatial targeting of such a system could interrogate targets as detailed as small cell clusters or intergrain organic deposits and could enhance the ability to track biogeochemical carbon cycling.

  1. A strategy for characterizing aerosol-sampling transport efficiency.

    NASA Astrophysics Data System (ADS)

    Schwarz, J. P.

    2017-12-01

    A fundamental concern when sampling aerosol in the laboratory or in situ, on the ground or (especially) from aircraft, is characterizing transport losses due to particles contacting the walls of tubing used for transport. Depending on the size range of the aerosol, different mechanisms dominate these losses: diffusion for the ultra-fine, and inertial and gravitational settling losses for the coarse mode. In the coarse mode, losses become intractable very quickly with increasing particle size above 5 µm diameter. Here we present these issues, along with a concept approach to reducing aerosol losses via strategic dilution with porous tubing, including results from laboratory testing of a prototype. We infer the potential value of this approach to atmospheric aerosol sampling.

  2. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.
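
    A toy version of this kind of empirical-power simulation for the basic SPCD with a binary response is sketched below. It treats the two phase estimates as approximately independent and pools them with a fixed weight w; the response rates, placebo-heavy allocation, weight, and sample size are illustrative assumptions, not the study's parameters.

```python
# Toy SPCD simulation: phase-1 placebo non-responders are re-randomized in
# phase 2; the two response-rate differences are pooled with weight w.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def spcd_z(n, p_drug1=0.45, p_pbo1=0.35, p_drug2=0.35, p_pbo2=0.20, w=0.6):
    n_pbo = 2 * n // 3                 # placebo-heavy phase-1 allocation
    n_drug = n - n_pbo
    resp_d1 = rng.binomial(n_drug, p_drug1)
    pbo1 = rng.random(n_pbo) < p_pbo1  # phase-1 placebo responders
    m = int((~pbo1).sum())             # non-responders enter phase 2
    m_d, m_p = m // 2, m - m // 2
    resp_d2 = rng.binomial(m_d, p_drug2)
    resp_p2 = rng.binomial(m_p, p_pbo2)

    def diff_and_var(r1, n1, r0, n0):
        p1, p0 = r1 / n1, r0 / n0
        return p1 - p0, p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0

    d1, v1 = diff_and_var(resp_d1, n_drug, pbo1.sum(), n_pbo)
    d2, v2 = diff_and_var(resp_d2, m_d, resp_p2, m_p)
    # phases treated as independent -- a simplification
    return (w * d1 + (1 - w) * d2) / np.sqrt(w**2 * v1 + (1 - w)**2 * v2)

z = np.array([spcd_z(270) for _ in range(2000)])
print("empirical power:", (z > stats.norm.ppf(0.975)).mean())
```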

  3. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

    Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and if we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study’s results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576

  4. Effect of particle size distribution of maize and soybean meal on the precaecal amino acid digestibility in broiler chickens.

    PubMed

    Siegert, W; Ganzer, C; Kluth, H; Rodehutscord, M

    2018-02-01

    1. Herein, it was investigated whether different particle size distributions of feed ingredients achieved by grinding through a 2- or 3-mm grid would have an effect on precaecal (pc) amino acid (AA) digestibility. Maize and soybean meal were used as the test ingredients. 2. Maize and soybean meal were ground with grid sizes of 2 or 3 mm. Nine diets were prepared. The basal diet contained 500 g/kg of maize starch. The other experimental diets contained maize or soybean meal samples at concentrations of 250 and 500, and 150 and 300 g/kg, respectively, instead of maize starch. Each diet was tested using 6 replicate groups of 10 birds each. The regression approach was applied to calculate the pc AA digestibility of the test ingredients. 3. The reduction of the grid size from 3 to 2 mm reduced the average particle size of both maize and soybean meal, mainly by reducing the proportion of coarse particles. Reducing the grid size significantly (P < 0.050) increased the pc digestibility of all AA in the soybean meal. In maize, reducing the grid size decreased the pc digestibility of all AA numerically, but not significantly (P > 0.050). The mean numerical differences in pc AA digestibility between the grid sizes were 0.045 and 0.055 in maize and soybean meal, respectively. 4. Future studies investigating the pc AA digestibility should specify the particle size distribution and should investigate the test ingredients ground similarly for practical applications.
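
    The regression approach mentioned above reduces to fitting digested amino acid intake against amino acid intake across inclusion levels, with the slope estimating precaecal digestibility; a minimal sketch with made-up numbers, not the study's data:

```python
# Regression estimate of precaecal digestibility (slope of digested vs intake).
import numpy as np

intake = np.array([0.0, 2.1, 4.2, 6.3])        # g AA intake per bird (made up)
digested = np.array([0.05, 1.85, 3.70, 5.50])  # g AA disappearing precaecally

slope, intercept = np.polyfit(intake, digested, 1)
print(f"pc digestibility ~ {slope:.3f}")       # slope is the digestibility
```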

  5. In vitro inflammatory and cytotoxic effects of size-segregated particulate samples collected during long-range transport of wildfire smoke to Helsinki

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalava, Pasi I.; Salonen, Raimo O.; Haelinen, Arja I.

    2006-09-15

    The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized for identification of different emission sources. The particulate mass concentration in the accumulation size range (PM(1-0.2)) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM(10-2.5)), intermodal size range (PM(2.5-1)), PM(1-0.2) and ultrafine (PM(0.2)) particles to cause cytokine production (TNFalpha, IL-6, MIP-2) decreased with decreasing particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.

  6. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
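
    A sketch of the kind of simulation described here: cluster-level prevalences drawn from a beta distribution parameterized so the intracluster correlation equals rho, binary GAM outcomes within clusters, and a simple decision rule (classify "high" if total cases reach d). The threshold d and rho are assumed for illustration.

```python
# LQAS cluster-design simulation with beta-binomial intracluster correlation.
import numpy as np

rng = np.random.default_rng(4)

def p_classify_high(n_clusters, cluster_size, prevalence, rho, d, n_sim=5000):
    """P(declare 'high') under rule: classify high if total cases >= d."""
    # Beta(a, b) with a + b = (1 - rho) / rho gives ICC exactly rho
    a = prevalence * (1 - rho) / rho
    b = (1 - prevalence) * (1 - rho) / rho
    p = rng.beta(a, b, size=(n_sim, n_clusters))   # cluster-level prevalences
    cases = rng.binomial(cluster_size, p).sum(axis=1)
    return np.mean(cases >= d)

# 67x3 design (201 observations), illustrative decision threshold d = 25
for prev in (0.05, 0.10, 0.15, 0.20):
    print(f"prevalence {prev:.2f}: "
          f"P(high) = {p_classify_high(67, 3, prev, rho=0.05, d=25):.3f}")
```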

  7. Solution of reduced graphene oxide synthesized from coconut shells and its optical properties

    NASA Astrophysics Data System (ADS)

    Mas'udah, Kusuma Wardhani; Nugraha, I. Made Ananta; Abidin, Saiful; Mufid, Ali; Astuti, Fahmi; Darminto

    2016-04-01

    Reduced graphene oxide (rGO) powder has been prepared from coconut shells by a carbonization process at 400°C for 3, 4 and 5 hours. The resulting sample mass was reduced to about 60% of the starting material. Longer heating durations also led to rGO with reduced crystallinity, according to X-ray diffractometry data and TEM. The rGO solution was prepared by adding powders of 5, 10 and 15 grams into 50 ml distilled water and then centrifuging at 6000 rpm for 30 minutes. The resulting solutions varied from clear transparent, through light and dark yellow, to black. Measurement using a particle size analyser shows that the individual rGO particles tend to agglomerate with each other to form larger clusters, manifested by the larger particle sizes observed for increasing amounts of dissolved rGO powder in water. The varying UV-visible spectra of these rGO solutions, together with their optical band gaps, are also discussed in this study.

  8. Ultrafast image-based dynamic light scattering for nanoparticle sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Wu; Zhang, Jie; Liu, Lili

    An ultrafast sizing method for nanoparticles is proposed, called UIDLS (Ultrafast Image-based Dynamic Light Scattering). This method makes use of the intensity fluctuation of scattered light from nanoparticles in Brownian motion, which is similar to the conventional DLS method. The difference in the experimental system is that the light scattered by nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured for validation of the proposed method. Measurement accuracy higher than 90% was found, with standard deviations less than 3%. A sample of nanosilver particles with nominal size of 20 ± 2 nm and a sample of polymethyl methacrylate emulsion with unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time for acquisition of two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
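
    The core data-processing step, the two-dimensional correlation coefficient between two frames, reduces to a Pearson correlation over pixels. A minimal sketch with synthetic frames; for Brownian particles the coefficient at a fixed lag decays faster for smaller, faster-diffusing particles.

```python
# Pearson correlation between two speckle images (2-D image correlation).
import numpy as np

def image_correlation(img1, img2):
    a = img1.astype(float).ravel(); a -= a.mean()
    b = img2.astype(float).ravel(); b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# usage with two synthetic, partially decorrelated frames
rng = np.random.default_rng(5)
frame1 = rng.random((128, 128))
frame2 = 0.8 * frame1 + 0.2 * rng.random((128, 128))
print(image_correlation(frame1, frame2))
```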

  9. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
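
    The study applies the Firth penalty to Cox regression; the same idea is easiest to sketch in the logistic setting, where the penalized (Jeffreys-prior) score has the closed form of Firth (1993) and can be solved by Fisher scoring. A minimal sketch, with an illustrative quasi-separated data set where ordinary maximum likelihood would diverge:

```python
# Firth-penalized logistic regression by Fisher scoring (illustrative).
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-(X @ beta)))
        W = mu * (1 - mu)
        info = (X.T * W) @ X                        # Fisher information
        # hat-matrix diagonal: h_i = w_i * x_i' I^{-1} x_i
        h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(info), X) * W
        score = X.T @ (y - mu + h * (0.5 - mu))     # Firth-adjusted score
        step = np.linalg.solve(info, score)
        beta += step
        if np.abs(step).max() < tol:
            break
    return beta

# quasi-separated toy data: ordinary ML slope diverges, Firth stays finite
X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1]])
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(firth_logistic(X, y))
```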

  10. Formation of a xerogel in reduced gravity using the acid catalysed silica sol-gel reaction

    NASA Astrophysics Data System (ADS)

    Pienaar, Christine L.; Steinberg, Theodore A.

    2006-01-01

    An acid catalysed silica sol-gel reaction was used to create a xerogel in reduced gravity. Samples were formed in a special apparatus which utilised vacuum and heating to speed up the gelation process. Testing was conducted aboard NASA's KC-135 aircraft, which flies a parabolic trajectory producing a series of 25 second reduced gravity periods. The samples formed in reduced gravity were compared against a control sample formed in normal gravity. 29Si NMR and nitrogen adsorption/desorption techniques yielded information on the molecular and physical structure of the xerogels. The microstructure of the reduced gravity samples contained more Q4 groups and fewer Q3 and Q2 groups than the control sample. The pore size of the reduced gravity samples was also larger than that of the control sample. This indicated that in a reduced gravity environment, where convection is lessened due to the removal of buoyancy forces, the microstructure formed through cyclisation reactions rather than bimolecularisation reactions. The latter requires the movement of molecules for reactions to occur, whereas cyclisation only requires a favourable configuration. Q4 groups are stabilised when contained in a ring structure and are unlikely to undergo repolymerisation. Thus reduced gravity favoured the formation of a xerogel through cyclisation, producing a structure with more highly coordinated Q groups. The xerogel formed in normal gravity contained both chain and ring structures as bimolecularisation reactions were able to effectively compete with cyclisation.

  11. Transport properties of bismuth telluride compound prepared by mechanical alloying

    NASA Astrophysics Data System (ADS)

    Khade, Poonam; Bagwaiya, Toshi; Bhattacharya, Shovit; Rayaprol, Sudhindra; Sahu, Ashok K.; Shelke, Vilas

    2017-05-01

    We have synthesized a bismuth telluride compound using mechanical alloying and the hot press sintering method. Phase formation and crystal structure were evaluated by X-ray diffraction and Raman spectroscopy. Scanning electron microscopy images indicated sub-micron sized grains. We observed a low thermal conductivity of 0.39 W/mK at room temperature as a result of grain size reduction through increased deformation. The performance of the samples can be improved by reducing the grain size, which increases grain boundary scattering.

  12. Focusing polycapillary to reduce parasitic scattering for inelastic x-ray measurements at high pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, P., E-mail: pchow@carnegiescience.edu; Xiao, Y. M.; Rod, E.

    2015-07-15

    The double-differential scattering cross-section for the inelastic scattering of x-ray photons from electrons is typically orders of magnitude smaller than that of elastic scattering. With samples of 10-100 μm size in a diamond anvil cell at high pressure, the inelastic x-ray scattering signals from samples are obscured by scattering from the cell gasket and diamonds. One major experimental challenge is to measure a clean inelastic signal from the sample in a diamond anvil cell. Among the many strategies for doing this, we have used a focusing polycapillary as a post-sample optic, which allows essentially only scattered photons within its input field of view to be refocused and transmitted to the backscattering energy analyzer of the spectrometer. We describe the modified inelastic x-ray spectrometer and its alignment. With a focused incident beam which matches the sample size and the field of view of the polycapillary, at relatively large scattering angles, the polycapillary effectively reduces parasitic scattering from the diamond anvil cell gasket and diamonds. Raw data collected from the helium exciton measured by x-ray inelastic scattering at high pressure using the polycapillary method are compared with those using conventional post-sample slit collimation.

  13. Emerging Answers: Research Findings on Programs To Reduce Teen Pregnancy.

    ERIC Educational Resources Information Center

    Kirby, Douglas

    This report summarizes three bodies of research on teenage pregnancy and programs to reduce the risk of teenage pregnancy. Studies included in this report were completed in 1980 or later, conducted in the United States or Canada, targeted adolescents, employed an experimental or quasi-experimental design, had a sample size of at least 100 in the…

  14. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results to show that the performance degradation follows a logistic function.
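
    An illustrative metric-driven sampler of the sort compared here, scoring nodes by PageRank or triangle count and keeping the top fraction; networkx is an assumed dependency, and the paper's "Diversity" metric is not reproduced.

```python
# Metric-driven node sampling of a graph (illustrative sketch).
import networkx as nx

def sample_by_metric(G, fraction=0.2, metric="pagerank"):
    if metric == "pagerank":
        scores = nx.pagerank(G)
    elif metric == "triangles":
        scores = nx.triangles(G)
    else:
        raise ValueError(metric)
    k = max(1, int(fraction * G.number_of_nodes()))
    keep = sorted(scores, key=scores.get, reverse=True)[:k]
    return G.subgraph(keep).copy()

G = nx.karate_club_graph()
S = sample_by_metric(G, fraction=0.3, metric="triangles")
print(S.number_of_nodes(), "nodes,", S.number_of_edges(), "edges retained")
```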

  15. SW-846 Test Method 3511: Organic Compounds in Water by Microextraction

    EPA Pesticide Factsheets

    A procedure for extracting selected volatile and semivolatile organic compounds from water. The microscale approach minimizes sample size and solvent usage, thereby reducing the supply costs, health and safety risks, and waste generated.

  16. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice [Precision size separation of biological particles in small-volume samples by an acoustic microfluidic system

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  17. [Ultra-Fine Pressed Powder Pellet Sample Preparation XRF Determination of Multi-Elements and Carbon Dioxide in Carbonate].

    PubMed

    Li, Xiao-li; An, Shu-qing; Xu, Tie-min; Liu, Yi-bo; Zhang, Li-juan; Zeng, Jiang-ping; Wang, Na

    2015-06-01

    The main analysis errors for pressed powder pellets of carbonate come from the particle-size effect and the mineral effect. In this article, in order to eliminate the particle-size effect, ultra-fine pressed powder pellet sample preparation is used for the determination of multi-elements and carbon dioxide in carbonate. To prepare the ultrafine powder, a FRITSCH planetary Micro Mill with tungsten carbide media is utilized; to overcome agglomeration during grinding, wet grinding is preferred. The surface morphology of the pellet becomes smoother and neater, and the Compton scatter effect is reduced, as the particle size decreases. The intensity of the spectral line varies with particle size; generally, it increases as the particle size decreases. However, when the particle size of more than one component of the material is decreased, the intensity of the spectral line may increase (for S, Si, Mg) or decrease (for Ca, Al, Ti, K), depending on the respective mass absorption coefficients. The change of phase composition with milling is also researched, and the incident depth of each element is given from theoretical calculation. When the sample is ground to a particle size less than the penetration depth of all the analytes, the effect of particle size on the intensity of the spectral line is much reduced. In the experiment, when the sample was ground to less than 8 μm (d95), the particle-size effect was largely eliminated; with the correction method of theoretical α coefficients and empirical coefficients, 14 major, minor and trace elements in the carbonate can be determined accurately. The precision of the method is much improved, with RSD < 2% except for Na2O. Carbon is an ultra-light element: its fluorescence yield is low and the interference is serious. With a multilayer crystal (PX4), a coarse collimator and empirical correction, the X-ray spectrometer can be used to determine the carbon dioxide in the carbonate quantitatively. The measured carbon intensity increases with repeated measurement and with time delay, even when the pellet is stored in a desiccator, so employing a freshly pressed powder pellet is suggested.

  18. Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis ---Application to Clinical Trials on Early Stage Alzheimer Disease

    PubMed Central

    Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall

    2018-01-01

    Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model for repeated measures to model disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes needed to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy of the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that, compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
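
    The optimization at the heart of this approach has a closed form under a multivariate-normal working model: for an effect vector delta and outcome covariance Sigma, the standardized effect of a combination w is w'delta / sqrt(w'Sigma w), which is maximized at w proportional to Sigma^(-1) delta. The sketch below uses made-up effects and correlations, not DIAN estimates.

```python
# Optimal weights for combining outcomes to minimize trial sample size.
import numpy as np
from scipy import stats

delta = np.array([0.20, 0.15, 0.10])         # assumed effects on 3 tests
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])          # assumed outcome correlations

w_opt = np.linalg.solve(Sigma, delta)        # w proportional to Sigma^-1 delta
w_opt /= w_opt.sum()                         # scale is arbitrary

def n_per_arm(w, alpha=0.05, power=0.8):
    eff = (w @ delta) / np.sqrt(w @ Sigma @ w)   # standardized effect size
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * z**2 / eff**2))

for name, w in [("test 1 alone", np.array([1.0, 0.0, 0.0])),
                ("equal weights", np.ones(3) / 3),
                ("optimal weights", w_opt)]:
    print(f"{name}: n per arm = {n_per_arm(w)}")
```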

  19. Multiple Category-Lot Quality Assurance Sampling: A New Classification System with Application to Schistosomiasis Control

    PubMed Central

    Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello

    2012-01-01

    Background Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as much as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
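
    Operating characteristics for a two-threshold (three-class) rule follow directly from binomial tail probabilities; a sketch with an assumed maximum sample size and decision thresholds, not the paper's design values:

```python
# Operating characteristic curve for a three-class LQAS rule:
# classify low if cases <= d1, high if cases >= d2, else moderate.
import numpy as np
from scipy import stats

n, d1, d2 = 25, 3, 12        # illustrative design
for p in np.arange(0.05, 0.75, 0.10):
    lo = stats.binom.cdf(d1, n, p)
    hi = 1 - stats.binom.cdf(d2 - 1, n, p)
    print(f"p={p:.2f}  P(low)={lo:.2f}  P(mid)={1 - lo - hi:.2f}  P(high)={hi:.2f}")
```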

  20. Comparative study of He bubble formation in nanostructured reduced activation steel and its coarse-grained counterpart

    NASA Astrophysics Data System (ADS)

    Liu, W. B.; Zhang, J. H.; Ji, Y. Z.; Xia, L. D.; Liu, H. P.; Yun, D.; He, C. H.; Zhang, C.; Yang, Z. G.

    2018-03-01

    High-temperature (550 °C) He ion irradiation was performed on nanostructured (NS) and coarse-grained (CG) reduced activation steel to investigate the effects of grain boundaries (GBs)/interfaces on bubble formation during irradiation. Experimental results showed that He bubbles were preferentially trapped at dislocations and/or GBs in both samples. Void denuded zones (VDZs) were observed in the CG samples, while VDZs near GBs were not obvious in the NS sample. However, both the average bubble size and the bubble density in the peak damage region of the CG sample were significantly larger than those observed in the NS sample, which indicates that GBs play an important role during irradiation and that the NS steel has better irradiation resistance than its CG counterpart.

  1. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
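
    The paper's central contrast between binomial and finite-population ('hypergeometric') inference can be illustrated with the frequentist finite population correction, which shrinks the standard error of a prevalence estimate as the sampled fraction grows. A sketch of that effect only, not the authors' Bayesian machinery:

    ```python
    import math

    def se_prevalence(x, n, N=None):
        """SE of the sample prevalence x/n; the finite population correction
        applies when the population size N is known (hypergeometric sampling)."""
        p_hat = x / n
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        if N is not None:
            se *= math.sqrt((N - n) / (N - 1))   # FPC: shrinks toward 0 as n -> N
        return se

    x, n = 12, 60   # e.g. 12 test-positives in a herd sample of 60 (invented)
    print("binomial SE:          ", round(se_prevalence(x, n), 4))
    print("finite pop SE (N=80): ", round(se_prevalence(x, n, N=80), 4))
    ```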

  2. Harvest and group effects on pup survival in a cooperative breeder

    USGS Publications Warehouse

    Ausband, David E.; Mitchell, Michael S.; Stansbury, Carisa R.; Stenglein, Jennifer L.; Waits, Lisette P.

    2017-01-01

    Recruitment in cooperative breeders can be negatively affected by changes in group size and composition. The majority of cooperative breeding studies have not evaluated human harvest; therefore, the effects of recurring annual harvest and group characteristics on survival of young are poorly understood. We evaluated how harvest and groups affect pup survival using genetic sampling and pedigrees for grey wolves in North America. We hypothesized that harvest reduces pup survival because of (i) reduced group size, (ii) increased breeder turnover and/or (iii) reduced number of female helpers. Alternatively, harvest may increase pup survival possibly due to increased per capita food availability or it could be compensatory with other forms of mortality. Harvest appeared to be additive because it reduced both pup survival and group size. In addition to harvest, turnover of breeding males and the presence of older, non-breeding males also reduced pup survival. Large groups and breeder stability increased pup survival when there was harvest, however. Inferences about the effect of harvest on recruitment require knowledge of harvest rate of young as well as the indirect effects associated with changes in group size and composition, as we show. The number of young harvested is a poor measure of the effect of harvest on recruitment in cooperative breeders.

  3. Quantifying the size-resolved dynamics of indoor bioaerosol transport and control.

    PubMed

    Kunkel, S A; Azimi, P; Zhao, H; Stark, B C; Stephens, B

    2017-09-01

    Understanding the bioaerosol dynamics of droplets and droplet nuclei emitted during respiratory activities is important for understanding how infectious diseases are transmitted and potentially controlled. To this end, we conducted experiments to quantify the size-resolved dynamics of indoor bioaerosol transport and control in an unoccupied apartment unit operating under four different HVAC particle filtration conditions. Two model organisms (Escherichia coli K12 and bacteriophage T4) were aerosolized under alternating low and high flow rates to roughly represent constant breathing and periodic coughing. Size-resolved aerosol sampling and settle plate swabbing were conducted in multiple locations. Samples were analyzed by DNA extraction and quantitative polymerase chain reaction (qPCR). DNA from both organisms was detected during all test conditions in all air samples up to 7 m away from the source, but decreased in magnitude with the distance from the source. A greater fraction of T4 DNA was recovered from the aerosol size fractions smaller than 1 μm than E. coli K12 at all air sampling locations. Higher efficiency HVAC filtration also reduced the amount of DNA recovered in air samples and on settle plates located 3-7 m from the source. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel-cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  5. Graphite Black shale of Vendas de Ceira, Coimbra, Portugal

    NASA Astrophysics Data System (ADS)

    Quinta-Ferreira, Mário; Silva, Daniela; Coelho, Nuno; Gomes, Ruben; Santos, Ana; Piedade, Aldina

    2017-04-01

    The graphite black shale of Vendas de Ceira, located south of Coimbra (Portugal), caused serious instability problems in recent road excavation slopes. The problems increased with rain, transforming the shales into a dark mud that acquires a metallic hue when dried. The black shales are attributed to the Devonian or, possibly, the Silurian. Graphite black shale is observed at the base of the slope and brown schist at the top. Samples were collected during the slope excavation works, selecting undisturbed and less altered materials. Sampling was made difficult because the graphite shale was covered by a thick layer of reinforced concrete, which was used to stabilize the excavated surfaces. The mineralogy consists mainly of quartz, muscovite, illite, ilmenite and feldspar, without expansive minerals. The organic matter content is 0.3 to 0.4%. The durability evaluated by the slake durability test varies from very low (Id2 of 6% for sample A) to high (98% for sample C). The grain size distribution of the shale particles was determined after disaggregation with water, which showed that sample A has 37% fines (5% clay and 32% silt) and 63% sand, while sample C has only 14% fines (2% clay and 12% silt) and 86% sand, indicating that a decrease in particle size contributes to reduced durability. The unconfined linear expansion confirms the higher expandability of sample A (13.4%), reducing to 12.1% for sample B and 10.5% for sample C. Because the shale degrades in water, mercury porosimetry was used. While the dry unit weight of the three samples does not change significantly (around 26 kN/m3), the porosity is much higher in sample A, with 7.9% of pores, reducing to 1.4% in sample C. The pore sizes, between 0.06 and 0.26 microns, do not seem to have any significant influence on the shale behaviour. For comparison, a porosity test was carried out on the low-weatherability brown shale, which is quite abundant at the site. The main differences from the graphite shale are the high porosity of the brown shale (14.7%) and its low unit weight of 23 kN/m3, evidencing the distinct characteristics of the graphite schists. The maximum strength was evaluated with the Schmidt hammer, as the point load test could not be performed because the rock was very soft. The maximum estimated values on dry samples were 32 MPa for sample A and 85 MPa for sample C. The results show a singular material characterized by significant heterogeneity. It can be concluded that, for the graphite schists, the smaller particle size and higher porosity make the soft rock extremely weatherable when decompressed and exposed to water, as a result of high capillary tension and reduced cohesion. They also exhibit high expansion and severe degradation, presenting a behaviour close to that of a soil. The graphite black schist is a highly weatherable soft rock, without expansive minerals and with small pores, in which the porosity, low strength and low cohesion allow rapid degradation when decompressed and exposed to water.

  6. Mesh-size effects on drift sample composition as determined with a triple net sampler

    USGS Publications Warehouse

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

    Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 μm, 209 μm and 106 μm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 μm and 209 μm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 μm and 106 μm samples and midday 425 μm samples. Large drifters (Ephemerellidae) occurred only in 425 μm or 209 μm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 μm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  7. Effect of the CTAB concentration on the upconversion emission of ZrO₂:Er³⁺ nanocrystals

    NASA Astrophysics Data System (ADS)

    López-Luke, T.; De la Rosa, E.; Sólis, D.; Salas, P.; Angeles-Chavez, C.; Montoya, A.; Díaz-Torres, L. A.; Bribiesca, S.

    2006-10-01

    Upconversion emission of ZrO₂:Er³⁺ (0.2 mol%) nanophosphor was studied as a function of surfactant concentration after excitation at 968 nm. The strong green emission was produced by the ²H₁₁/₂ + ⁴S₃/₂ → ⁴I₁₅/₂ transitions and was explained in terms of cooperative energy transfer between neighboring ions. The upconverted signal was enhanced, but the fluorescence decay time was reduced, as either the surfactant concentration increased or the annealing time was reduced. Experimental results show that surfactant concentration controls the particle size and morphology, while annealing time controls the phase composition and crystallite size. The highest intensity was obtained for a sample composed of a mixture of tetragonal (33 wt.%) and monoclinic (67 wt.%) phases with crystallite sizes of 31 and 59 nm, respectively. This result suggests that a tetragonal crystalline structure and small crystallite size are more favorable for upconversion emission.

  8. Development of composite calibration standard for quantitative NDE by ultrasound and thermography

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.

    2018-04-01

    Inspection of aircraft components for damage using ultrasonic Non-Destructive Evaluation (NDE) is a time-intensive endeavor. Additional time spent during aircraft inspections translates to added cost for the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another owing to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.

  9. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
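
    The inflation mechanism described here is visible in the classical ANCOVA variance formula, where a chance covariate mean difference adds a term to the standard error of the adjusted contrast. A simulation sketch under an assumed residual SD, held fixed to isolate the imbalance term (in practice adjustment also shrinks the residual SD):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n1 = n2 = 20
    # covariate with a chance mean imbalance between the two groups
    x1, x2 = rng.normal(0.0, 1.0, n1), rng.normal(0.6, 1.0, n2)

    sd_res = 1.0   # assumed residual SD after adjustment
    ss_within = ((x1 - x1.mean())**2).sum() + ((x2 - x2.mean())**2).sum()

    se_unadj = sd_res * np.sqrt(1/n1 + 1/n2)
    se_adj = sd_res * np.sqrt(1/n1 + 1/n2
                              + (x1.mean() - x2.mean())**2 / ss_within)

    print(f"SE of unadjusted difference: {se_unadj:.4f}")
    print(f"SE of adjusted difference:   {se_adj:.4f}")  # inflated by imbalance
    ```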

  10. Development and Validation of the Caring Loneliness Scale.

    PubMed

    Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija

    2016-12-01

    The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples revealed the underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items, with Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences but, as with all instruments, further validation is needed.
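
    Cronbach's alpha, the reliability measure used here, is simple to compute from an item-score matrix. A sketch with invented Likert responses, not the CARLOS data:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var_sum / total_var)

    # six respondents answering a hypothetical 3-item subscale (1-5 Likert)
    scores = [[3, 4, 3], [2, 2, 3], [5, 4, 4], [1, 2, 2], [4, 4, 5], [3, 3, 3]]
    print(round(cronbach_alpha(scores), 3))
    ```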

  11. Sampling benthic macroinvertebrates in a large flood-plain river: Considerations of study design, sample size, and cost

    USGS Publications Warehouse

    Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.

    1998-01-01

    Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
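
    The power analysis described (alpha = 0.05, beta = 0.20) amounts to solving a two-sample comparison for the number of sites per stratum. A normal-approximation sketch with made-up abundance values, not the study's estimates:

    ```python
    import numpy as np
    from scipy.stats import norm

    def sites_per_stratum(mean1, mean2, sd, alpha=0.05, power=0.80):
        """Sites per stratum to detect a difference in mean abundance between
        two strata (two-sample normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(np.ceil(2 * (sd * z / abs(mean1 - mean2)) ** 2))

    # hypothetical mean abundances (organisms per grab) and common SD
    print(sites_per_stratum(mean1=120.0, mean2=80.0, sd=50.0))
    ```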

  12. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    PubMed Central

    Okamoto, Kenta; Bielecki, Johan; Maia, Filipe R. N. C.; Mühlig, Kerstin; Seibert, M. Marvin; Hantke, Max F.; Benner, W. Henry; Svenda, Martin; Ekeberg, Tomas; Loh, N. Duane; Pietrini, Alberto; Zani, Alessandro; Rath, Asawari D.; Westphal, Daniel; Kirian, Richard A.; Awel, Salah; Wiedorn, Max O.; van der Schot, Gijs; Carlsson, Gunilla H.; Hasse, Dirk; Sellberg, Jonas A.; Barty, Anton; Andreasson, Jakob; Boutet, Sébastien; Williams, Garth; Koglin, Jason; Hajdu, Janos; Larsson, Daniel S. D.

    2017-01-01

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ∼40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ∼35 to ∼300 nm in diameter). This is likely owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 1012 photons per µm2 per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers. PMID:28512572

  13. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide silicon (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes, using the TSMC 0.25-µm standard CMOS process, to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated by 550-nm light at a constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 ms. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. At our pixel size, a pixel with four rounded corners showed a dark current reduced by about 200 fA compared to a pixel with four rectangular corners. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.
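
    The mean-variance (photon transfer) method mentioned here estimates conversion gain from the shot-noise relation var = mean/K, so K is the reciprocal of the slope of variance against mean. A sketch with invented mean/variance pairs standing in for the 300 repeated readouts per condition:

    ```python
    import numpy as np

    # mean signal (DN) and temporal variance (DN^2) at several light levels;
    # these values are illustrative, not the paper's measurements
    means = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    varis = np.array([2.1, 4.0, 7.9, 16.2, 31.8])

    # shot-noise model: var = mean / K, so K (e-/DN) = 1 / slope
    slope, intercept = np.polyfit(means, varis, 1)
    K = 1.0 / slope
    print(f"conversion gain ~ {K:.1f} e-/DN; noise floor ~ {intercept:.2f} DN^2")
    ```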

  14. Microstructural and mechanical evolution during deformation and annealing of poly-phase marbles - constraints from laboratory experiments and field observations

    NASA Astrophysics Data System (ADS)

    Austin, N. J.; Evans, B.; Dresen, G. H.; Rybacki, E.

    2009-12-01

    Deformed rocks commonly consist of several mineral phases, each with dramatically different mechanical properties. In both naturally and experimentally deformed rocks, deformation mechanisms and, in turn, strength, are commonly investigated by analyzing microstructural elements such as crystallographic preferred orientation (CPO) and recrystallized grain size. Here, we investigated the effect of variations in the volume fraction and the geometry of rigid second phases on the strength and evolution of CPO and grain size of synthetic calcite rocks. Experiments using triaxial compression and torsional loading were conducted at 1023 K and equivalent strain rates between ~2 × 10⁻⁶ and 1 × 10⁻³ s⁻¹. The second phases in these synthetic assemblages are rigid carbon spheres or splinters with known particle size distributions and geometries, which are chemically inert at our experimental conditions. Under hydrostatic conditions, the addition of as little as 1 vol.% carbon spheres poisons normal grain growth. Shape is also important: for an equivalent volume fraction and grain dimension, carbon splinters result in a finer calcite grain size than carbon spheres. In samples deformed at “high” strain rates, or which have “large” mean free spacing of the pinning phase, the final recrystallized grain size is well explained by competing grain growth and grain size reduction processes, where the grain-size reduction rate is determined by the rate at which mechanical work is done during deformation. In these samples, the final grain size is finer than in samples heat-treated hydrostatically for equivalent durations. The addition of 1 vol.% spheres to calcite has little effect on either the strength or CPO development. Adding 10 vol.% splinters increases the strength at low strains and low strain rates, but has little effect on the strength at high strains and/or high strain rates, compared to pure samples. A CPO similar to that in pure samples is observed, although the intensity is reduced in samples containing 10 vol.% splinters. When 10 vol.% spheres are added to calcite, the strength of the aggregate is reduced, and a distinct and strong CPO develops. Viscoplastic self-consistent calculations were used to model the evolution of CPO in these materials, and these suggest a variation in the activity of the various slip systems between pure samples and those containing 10 vol.% spheres. The applicability of these laboratory observations has been tested against field observations from the Morcles Nappe (Swiss Helvetic Alps). In the Morcles Nappe, calcite grain size becomes progressively finer as the thrust contact is approached, and there is a concomitant increase in CPO intensity, with the strongest CPOs in the finest-grained, quartz-rich limestones nearest the thrust contact, which are interpreted to have been deformed to the highest strains. Thus, our laboratory results may be used to provide insight into the distribution of strain observed in natural shear zones.

  15. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
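
    The parametric bootstrap power calculation described here is easy to reproduce in outline: draw gamma samples around a hypothesized mean and count how often a one-sided test detects the threshold exceedance. The fixed CV below is an assumption standing in for the paper's empirical mean-variance relationship:

    ```python
    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(42)

    def power_gamma(n_fish, threshold, true_mean, cv=0.25, alpha=0.05, n_sim=2000):
        """P(one-sided t-test declares mean Se > threshold) for gamma-distributed
        tissue concentrations; the fixed CV is an illustrative assumption."""
        shape = 1 / cv**2              # for a gamma distribution, CV^2 = 1/shape
        scale = true_mean / shape
        hits = 0
        for _ in range(n_sim):
            sample = rng.gamma(shape, scale, n_fish)
            _, p = ttest_1samp(sample, threshold, alternative="greater")
            hits += p < alpha
        return hits / n_sim

    # power of 8 fish to detect a true mean 1 mg/kg above a 4 mg/kg threshold
    print(power_gamma(n_fish=8, threshold=4.0, true_mean=5.0))
    ```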

  16. The Slope of Change: An Environmental Management Approach to Reduce Drinking on a Day of Celebration at a US College

    ERIC Educational Resources Information Center

    Marchell, Timothy C.; Lewis, Deborah D.; Croom, Katherine; Lesser, Martin L.; Murphy, Susan H.; Reyna, Valerie F.; Frank, Jeremy; Staiano-Coico, Lisa

    2013-01-01

    Objective: This research extends the literature on event-specific environmental management with a case study evaluation of an intervention designed to reduce student drinking at a university's year-end celebration. Participants: Cornell University undergraduates were surveyed each May from 2001 through 2009. Sample sizes ranged from 322 to 1,973.…

  17. A hard-to-read font reduces the framing effect in a large sample.

    PubMed

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  18. A low intensity sampling method for assessing blue crab abundance at Aransas National Wildlife Refuge and preliminary results on the relationship of blue crab abundance to whooping crane winter mortality

    USGS Publications Warehouse

    Pugesek, Bruce H.; Baldwin, Michael J.; Stehn, Thomas; Folk, Martin J.; Nesbitt, Stephen A.

    2008-01-01

    We sampled blue crabs (Callinectes sapidus) in marshes on the Aransas National Wildlife Refuge, Texas from 1997 to 2005 to determine whether whooping crane (Grus americana) mortality was related to the availability of this food source. For four years, 1997 - 2001, we sampled monthly from the fall through the spring. From these data, we developed a reduced sampling effort method that adequately characterized crab abundance and reduced the potential for disturbance to the cranes. Four additional years of data were collected with the reduced sampling effort methods. Yearly variation in crab numbers was high, ranging from a low of 0.1 crabs to a high of 3.4 crabs per 100-m transect section. Mortality among adult cranes was inversely related to crab abundance. We found no relationship between crab abundance and mortality among juvenile cranes, possibly as a result of a smaller population size of juveniles compared to adults.

  19. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified using a maximum likelihood classification algorithm. Color infrared aerial photographs are used in the second stage of sampling. In the third stage (ground level), the tree volume of each class is determined. The total tree volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate tree volume estimate with a smaller number of aerial photographs and reduced time in field work.
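
    Probability-proportional-to-size selection with expansion to a total can be illustrated with the single-stage Hansen-Hurwitz estimator, where each ground measurement is inflated by its selection probability. All stand areas and volumes below are hypothetical, not the Brazilian inventory data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical stands: classified area (ha) drives selection; volume (m^3)
    # would come from the ground stage
    areas = np.array([120.0, 80.0, 200.0, 50.0, 150.0])
    volumes = np.array([8400.0, 5200.0, 15000.0, 2900.0, 10100.0])

    p = areas / areas.sum()                    # probability proportional to size
    m = 3                                      # stands visited on the ground
    idx = rng.choice(len(areas), size=m, replace=True, p=p)

    ratios = volumes[idx] / p[idx]             # Hansen-Hurwitz expansion
    total_hat = ratios.mean()
    se = ratios.std(ddof=1) / np.sqrt(m)
    print(f"estimated total volume: {total_hat:,.0f} +/- {se:,.0f} m^3")
    ```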

  20. Hypotheses and fundamental study design characteristics for evaluating potential reduced-risk tobacco products. Part I: Heuristic.

    PubMed

    Murrelle, Lenn; Coggins, Christopher R E; Gennings, Chris; Carchman, Richard A; Carter, Walter H; Davies, Bruce D; Krauss, Marc R; Lee, Peter N; Schleef, Raymond R; Zedler, Barbara K; Heidbreder, Christian

    2010-06-01

    The risk-reducing effect of a potential reduced-risk tobacco product (PRRP) can be investigated conceptually in a long-term, prospective study of disease risks among cigarette smokers who switch to a PRRP and in appropriate comparison groups. Our objective was to provide guidance for establishing the fundamental design characteristics of a study intended to (1) determine if switching to a PRRP reduces the risk of lung cancer (LC) compared with continued cigarette smoking, and (2) compare, using a non-inferiority approach, the reduction in LC risk among smokers who switched to a PRRP to the reduction in risk among smokers who quit smoking entirely. Using standard statistical methods applied to published data on LC incidence after smoking cessation, we show that the sample size and duration required for a study designed to evaluate the potential for LC risk reduction for an already marketed PRRP, compared with continued smoking, varies depending on the LC risk-reducing effectiveness of the PRRP, from a 5-year study with 8000-30,000 subjects to a 15-year study with <5000 to 10,000 subjects. To assess non-inferiority to quitting, the required sample size tends to be about 10 times greater, again depending on the effectiveness of the PRRP. (c) 2009 Elsevier Inc. All rights reserved.

  1. The Effect of Hypnosis on Anxiety in Patients With Cancer: A Meta-Analysis.

    PubMed

    Chen, Pei-Ying; Liu, Ying-Mei; Chen, Mei-Ling

    2017-06-01

    Anxiety is a common form of psychological distress in patients with cancer. One recognized nonpharmacological intervention to reduce anxiety for various populations is hypnotherapy or hypnosis. However, its effect in reducing anxiety in cancer patients has not been systematically evaluated. This meta-analysis was designed to synthesize the immediate and sustained effects of hypnosis on anxiety of cancer patients and to identify moderators for these hypnosis effects. Qualified studies, including randomized controlled trials (RCTs) and pre-post design studies, were identified by searching seven electronic databases: Scopus, Medline Ovidsp, PubMed, PsycInfo-Ovid, Academic Search Premier, CINAHL Plus with FT-EBSCO, and SDOL. Effect size (Hedges' g) was computed for each study. Random-effect modeling was used to combine effect sizes across studies. All statistical analyses were conducted with Comprehensive Meta-Analysis, version 2 (Biostat, Inc., Englewood, NJ, USA). Our meta-analysis of 20 studies found that hypnosis had a significant immediate effect on anxiety in cancer patients (Hedges' g: 0.70-1.41, p < .01) and the effect was sustained (Hedges' g: 0.61-2.77, p < .01). The adjusted mean effect size (determined by Duval and Tweedie's trim-and-fill method) was 0.46. RCTs had a significantly higher effect size than non-RCT studies. Higher mean effect sizes were also found with pediatric study samples, hematological malignancy, studies on procedure-related stressors, and with mixed-gender samples. Hypnosis delivered by a therapist was significantly more effective than self-hypnosis. Hypnosis can reduce anxiety of cancer patients, especially for pediatric cancer patients who experience procedure-related stress. We recommend therapist-delivered hypnosis should be preferred until more effective self-hypnosis strategies are developed. © 2017 Sigma Theta Tau International.
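
    The pooling step of such a meta-analysis combines per-study Hedges' g values with random-effects (DerSimonian-Laird) weights. A compact sketch with three invented studies, not the 20 analyzed here:

    ```python
    import numpy as np

    def hedges_g(m1, m2, sd1, sd2, n1, n2):
        """Hedges' g (bias-corrected d) and its approximate variance."""
        sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        g = (1 - 3 / (4 * (n1 + n2) - 9)) * (m1 - m2) / sp
        var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
        return g, var_g

    def dersimonian_laird(gs, vs):
        """Random-effects pooled effect across studies."""
        gs, vs = np.asarray(gs), np.asarray(vs)
        w = 1 / vs
        q = (w * (gs - (w * gs).sum() / w.sum())**2).sum()
        tau2 = max(0.0, (q - (len(gs) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
        w_star = 1 / (vs + tau2)
        return (w_star * gs).sum() / w_star.sum(), np.sqrt(1 / w_star.sum())

    # three invented anxiety studies: (g, variance of g)
    mu, se = dersimonian_laird([0.9, 0.5, 1.2], [0.05, 0.08, 0.10])
    print(f"pooled g = {mu:.2f} (SE {se:.2f})")
    ```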

  2. Optimizing image registration and infarct definition in stroke research.

    PubMed

    Harston, George W J; Minks, David; Sheerin, Fintan; Payne, Stephen J; Chappell, Michael; Jezzard, Peter; Jenkinson, Mark; Kennedy, James

    2017-03-01

    Accurate representation of final infarct volume is essential for assessing the efficacy of stroke interventions in imaging-based studies. This study defines the impact of image registration methods used at different timepoints following stroke, and the implications for infarct definition in stroke research. Patients presenting with acute ischemic stroke were imaged serially using magnetic resonance imaging. Infarct volume was defined manually using four metrics: 24-h b1000 imaging; 1-week and 1-month T2-weighted FLAIR; and automatically using predefined thresholds of ADC at 24 h. Infarct overlap statistics and volumes were compared across timepoints following both rigid body and nonlinear image registration to the presenting MRI. The effect of nonlinear registration on a hypothetical trial sample size was calculated. Thirty-seven patients were included. Nonlinear registration improved infarct overlap statistics and consistency of total infarct volumes across timepoints, and reduced infarct volumes by 4.0 mL (13.1%) and 7.1 mL (18.2%) at 24 h and 1 week, respectively, compared to rigid body registration. Infarct volume at 24 h, defined using a predetermined ADC threshold, was less sensitive to infarction than b1000 imaging. 1-week T2-weighted FLAIR imaging was the most accurate representation of final infarct volume. Nonlinear registration reduced hypothetical trial sample size, independent of infarct volume, by an average of 13%. Nonlinear image registration may offer the opportunity of improving the accuracy of infarct definition in serial imaging studies compared to rigid body registration, helping to overcome the challenges of anatomical distortions at subacute timepoints, and reducing sample size for imaging-based clinical trials.
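
    The reported sample size saving follows directly from the fact that the required n per arm scales with the variance of the endpoint, so a registration method that tightens infarct volume estimates shrinks the trial. A sketch with illustrative SDs chosen to reproduce a saving of roughly 13%; these are not the study's measured values:

    ```python
    from scipy.stats import norm

    def n_per_arm(sd, delta, alpha=0.05, power=0.80):
        """Two-sample comparison of mean infarct volume (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (sd * z / delta) ** 2

    sd_rigid, sd_nonlinear, delta = 30.0, 28.0, 10.0   # mL; illustrative only
    n_r, n_n = n_per_arm(sd_rigid, delta), n_per_arm(sd_nonlinear, delta)
    print(f"rigid: {n_r:.0f}/arm, nonlinear: {n_n:.0f}/arm, "
          f"saving: {100 * (1 - n_n / n_r):.0f}%")
    ```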

  3. Size fractionation of waste-to-energy boiler ash enables separation of a coarse fraction with low dioxin concentrations.

    PubMed

    Weidemann, E; Allegrini, E; Fruergaard Astrup, T; Hulgaard, T; Riber, C; Jansson, S

    2016-03-01

    Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) formed in modern Waste-to-Energy plants are primarily found in the generated ashes and air pollution control residues, which are usually disposed of as hazardous waste. The objective of this study was to explore the occurrence of PCDD/F in different grain size fractions in the boiler ash, i.e. ash originating from the convection pass of the boiler. If a correlation between particle size and dioxin concentrations could be found, size fractionation of the ashes could reduce the total amount of hazardous waste. Boiler ash samples from ten sections of a boiler's convective part were collected over three sampling days, sieved into three different size fractions - <0.09 mm, 0.09-0.355 mm, and >0.355 mm - and analysed for PCDD/F. The coarse fraction (>0.355 mm) in the first sections of the horizontal convection pass appeared to be of low toxicity with respect to dioxin content. While the total mass of the coarse fraction in this boiler was relatively small, sieving could reduce the amount of ash containing toxic PCDD/F by around 0.5 kg per tonne input waste or around 15% of the collected boiler ash from the convection pass. The mid-size fraction in this study covered a wide size range (0.09-0.355 mm) and possibly a low toxicity fraction could be identified by splitting this fraction into more narrow size ranges. The ashes exhibited uniform PCDD/F homologue patterns which suggests a stable and continuous generation of PCDD/F. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Determining chewing efficiency using a solid test food and considering all phases of mastication.

    PubMed

    Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W

    2018-07-01

    Following chewing of a solid food, the median particle size, X50, is determined after N chewing cycles by curve-fitting of the particle size distribution. Reduction of X50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X50. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (the N needed to halve the initial particle size, N(1/2-Xo)) and chewing performance (X50 at a particular N-value, X50,N). 8 subjects with a natural dentition chewed 4 types of samples of Optosil particles: (1) 8 cubes of 8 mm, a border size relative to the bin sizes (traditional test); (2) 9 half-cubes of 9.6 mm, a mid-size, with similar sample volume; (3) 4 half-cubes of 9.6 mm; and (4) 2 half-cubes of 9.6 mm, with reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd-order polynomial function yielded log(X50)-log(N) relationships, after which N(1/2-Xo) and X50,N were obtained. Reliable X50-values are obtained for all N-values when using half-cubes with a mid-size relative to the bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-Xo) or X50,N needs fewer chewing cycles than traditionally. Chewing efficiency is preferable over chewing performance because it compares inter-subject chewing ability at the same stage of food comminution and maintains constant intra-subject and inter-subject ratios between and within samples, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
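
    The curve-fitting step is a second-order polynomial in log-log space, after which the chewing-efficiency index N(1/2-Xo) is read off where the fitted X50 crosses half the initial size. A sketch with invented (N, X50) pairs, not the subjects' data:

    ```python
    import numpy as np

    # invented (N, X50) pairs for one subject chewing 9.6 mm half-cubes
    N = np.array([4.0, 8.0, 12.0, 16.0])
    X50 = np.array([6.8, 4.6, 3.4, 2.7])   # mm
    x0 = 9.6                               # initial particle size (mm)

    # 2nd-order polynomial fit in log-log space, as in the study
    coef = np.polyfit(np.log10(N), np.log10(X50), 2)

    # chewing efficiency N(1/2-Xo): N where the fitted X50 crosses x0/2
    grid = np.linspace(np.log10(N[0]), np.log10(N[-1]), 1000)
    fit = np.polyval(coef, grid)
    n_half = 10 ** grid[np.argmin(np.abs(fit - np.log10(x0 / 2)))]
    print(f"N(1/2-Xo) ~ {n_half:.1f} cycles")
    ```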

  5. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications, in the context of binary classification, multi-class problems and regression, are then reported to show that RKELM can perform at a competitive level of generalization performance as the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
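
    The core of RKELM as described (random selection of mapping samples followed by a single non-iterative ridge solve) fits in a short class. A minimal sketch: the RBF kernel, regularization value, and toy data are assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np

    def rbf(A, B, gamma=0.5):
        """Gaussian kernel matrix between the row vectors of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    class RKELM:
        """Reduced kernel ELM: a random subset of the training samples is the
        kernel basis; output weights come from a single ridge solve."""
        def __init__(self, n_basis=50, gamma=0.5, reg=1e-3, seed=0):
            self.n_basis, self.gamma, self.reg = n_basis, gamma, reg
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            idx = self.rng.choice(len(X), min(self.n_basis, len(X)), replace=False)
            self.basis = X[idx]
            K = rbf(X, self.basis, self.gamma)
            self.beta = np.linalg.solve(K.T @ K + self.reg * np.eye(K.shape[1]),
                                        K.T @ y)
            return self

        def predict(self, X):
            return rbf(X, self.basis, self.gamma) @ self.beta

    # toy regression problem: y = sin(x) with noise
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, (400, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 400)
    model = RKELM(n_basis=40).fit(X, y)
    print("train RMSE:", round(float(np.sqrt(np.mean((model.predict(X) - y)**2))), 3))
    ```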

  6. White sucker (Catostomus commersoni) growth and sexual maturation in pulp mill-contaminated and reference rivers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagnon, M.M.; Bussieres, D.; Dodson, J.J.

    1995-02-01

    Induction of hepatic ethoxyresorufin-O-deethylase (EROD) activity and accumulation of chlorophenolic compounds typical of bleached-kraft mill effluent (BKME) in fish sampled downstream of a pulp mill on the St. Maurice River, Quebec, Canada, provided evidence of chemical exposure to BKME. In comparison, fish sampled over the same distances and in similar habitats in a noncontaminated reference river, the Gatineau River, demonstrated low EROD activity and contamination levels. Accelerated growth of white suckers occurred between 2 and 10 years of age in both rivers at downstream stations relative to upstream stations, suggesting the existence of gradients of nutrient enrichment independent of BKME contamination. The impact of BKME exposure was expressed as reduced investment in reproduction, as revealed by greater length at maturity, reduced gonad size, and more variable fecundity. These effects were not obvious in simple upstream-downstream comparisons, but became evident when fish from the uncontaminated Gatineau River showed increased gonadal development and reduced age and size at maturity in response to enhanced growth rates.

  7. X-ray tomography investigation of intensive sheared Al–SiC metal matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Giovanni, Mario; Warnett, Jason M.; Williams, Mark A.

    2015-12-15

    X-ray computed tomography (XCT) was used to characterise the three-dimensional internal structure of Al–SiC metal matrix composites. The alloy composite was prepared by a casting method with the application of intensive shearing to uniformly disperse SiC particles in the matrix. Visualisation of SiC clusters as well as the porosity distribution were evaluated and compared with non-sheared samples. Results showed that the average particle size as well as the agglomerate size is smaller in the sheared sample compared to conventional cast samples. Further, it was observed that the volume fraction of porosity was reduced by 50% compared to conventional casting, confirming that the intensive shearing helps in deagglomeration of particle clusters and decreases the porosity of Al–SiC metal matrix composites. - Highlights: • XCT was used to visualise the 3D internal structure of Al–SiC MMC. • The Al–SiC MMC was prepared by casting with the application of intensive shearing. • SiC particle and porosity distributions were evaluated. • Results show shearing deagglomerates particle clusters and reduces porosity in the MMC.

  8. Smart Sampling and HPC-based Probabilistic Look-ahead Contingency Analysis Implementation and its Evaluation with Real-world Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Etingov, Pavel V.; Ren, Huiying

    This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. HPC techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.

  9. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in randomized trials with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and placebo non-responders are then re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters, the allocation proportion to placebo in stage 1 of SPCD, and the weight of stage 1 data in the overall efficacy test statistic during an interim analysis.
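
    Independent of the SPCD-specific weighting, the basic interim move is to recompute the required sample size from the variance observed at the interim look. A generic variance re-estimation sketch, not the authors' SPCD test statistic; all numbers are invented:

    ```python
    from scipy.stats import norm

    def n_per_arm(sd, delta, alpha=0.05, power=0.80):
        """Per-arm sample size for a two-arm comparison (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(round(2 * (sd * z / delta) ** 2))

    n_planned = n_per_arm(sd=8.0, delta=4.0)    # design-stage assumption
    n_updated = n_per_arm(sd=9.5, delta=4.0)    # SD observed at the interim look
    print(f"planned {n_planned}/arm -> re-estimated {n_updated}/arm")
    ```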

  10. The impact of infertility on family size in the USA: data from the National Survey of Family Growth.

    PubMed

    Breyer, Benjamin N; Smith, James F; Shindel, Alan W; Sharlip, Ira D; Eisenberg, Michael L

    2010-09-01

    Investigators have postulated that family size may be influenced by biologic fertility potential in addition to sociodemographic factors. The aim of the current study is to determine if a diagnosis of infertility is associated with family size in the USA. We analyzed data from the male and female samples of the 2002 National Survey of Family Growth using multivariable logistic regression models to determine the relationship between infertility and family size while adjusting for sociodemographic and reproductive characteristics. In the survey, 4409 women and 1739 men met the inclusion criteria, of whom 10.2% and 9.7%, respectively, were classified as infertile, on the basis of having sought reproductive assistance. Infertile females had a 34% reduced odds of having an additional child compared with women who did not seek reproductive assistance. For each additional 6 months it took a woman to conceive her first child, the odds of having a larger family fell by 9% and the odds of having a second child were reduced by 11%. A diagnosis of male infertility reduced the odds of having a larger family more than a diagnosis of female infertility. A diagnosis of infertility, especially male factor, is associated with reduced odds of having a larger family, implicating a biologic role in the determination of family size in the USA.

  11. An investigation of phase transformation and crystallinity in laser surface modified H13 steel

    NASA Astrophysics Data System (ADS)

    Aqida, S. N.; Brabazon, D.; Naher, S.

    2013-03-01

    This paper presents a laser surface modification process for AISI H13 tool steel using laser spot sizes of 0.09, 0.2 and 0.4 mm, with the aim of increasing hardness. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process the AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). X-ray diffraction (XRD) analysis was conducted to measure the crystallinity of the laser-modified surface. X-ray diffraction patterns of the samples were recorded using a Bruker D8 XRD system with Cu Kα (λ = 1.5405 Å) radiation. The diffraction patterns were recorded in the 2θ range of 20 to 80°. The hardness was tested at a force of 981 mN. The laser-modified surface exhibited reduced crystallinity compared to the unprocessed samples. The presence of a martensitic phase was detected in the samples processed using the 0.4 mm spot size. Despite the reduced crystallinity, high hardness was measured in the laser-modified surface; hardness increased by more than 2.5 times compared to the as-received samples. These findings reveal the phase source of the hardening mechanism and the grain composition in the laser-modified surface.

  12. Effect of microstructure on the thermoelectric performance of La₁₋ₓSrₓCoO₃

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viskadourakis, Z.; Athanasopoulos, G.I.

    We present a case where the microstructure has a profound effect on the thermoelectric properties of oxide compounds. Specifically, we have investigated the effect of different sintering treatments on La₁₋ₓSrₓCoO₃ samples synthesized using the Pechini method. We found that samples which are dense and consist of inhomogeneously-mixed grains of different size exhibit both a higher Seebeck coefficient and a higher thermoelectric figure of merit than samples which are porous and consist of grains of almost identical size. The enhancement of the Seebeck coefficient in the dense samples is attributed to the so-called “energy-filtering” mechanism, which is related to the energy barrier of the grain boundary. On the other hand, the thermal conductivity of the porous compounds is significantly reduced in comparison to the dense compounds. It is suggested that fine manipulation of the grain size ratio combined with fine tuning of the porosity could considerably enhance the thermoelectric performance of oxides. - Graphical abstract: The enhancement of the dimensionless thermoelectric figure of merit ZT is presented for two equally Sr-doped LaCoO₃ compounds possessing different microstructures, indicating the effect of the latter on the thermoelectric performance of the La₁₋ₓSrₓCoO₃ solid solution. - Highlights: • Electrical and thermal transport properties are affected by the microstructure in La₁₋ₓSrₓCoO₃ polycrystalline materials. • A coarse/fine grain size distribution enhances the Seebeck coefficient. • Porosity reduces the thermal conductivity in La₁₋ₓSrₓCoO₃ polycrystalline samples. • The combination of a large/small grain ratio distribution with high porosity may result in enhancement of the thermoelectric performance of the material.

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
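
    The simulation logic described can be reproduced with a beta-binomial model, where the beta spread across clusters encodes the intracluster correlation. The decision rule (21 of 201) and ICC below are illustrative placeholders, not the validated designs:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def p_classify_high(p_true, d, icc=0.05, clusters=67, m=3, n_sim=5000):
        """P(total positives >= d) under a clusters x m design with
        beta-binomial intracluster correlation."""
        a = p_true * (1 - icc) / icc
        b = (1 - p_true) * (1 - icc) / icc
        hits = 0
        for _ in range(n_sim):
            p_clust = rng.beta(a, b, clusters)    # cluster-level prevalences
            hits += rng.binomial(m, p_clust).sum() >= d
        return hits / n_sim

    # illustrative rule: classify "GAM >= 10%" when 21+ of 201 children qualify
    for p in (0.05, 0.10, 0.15):
        print(f"p={p:.2f}: P(classify high) = {p_classify_high(p, d=21):.2f}")
    ```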

  14. Characterization of the particulate emissions from the BP Deepwater Horizon surface oil burns.

    PubMed

    Gullett, Brian K; Hays, Michael D; Tabor, Dennis; Wal, Randy Vander

    2016-06-15

    Sampling of the smoke plumes from the BP Deepwater Horizon surface oil burns led to the unintentional collection of soot particles on the sail of an instrument-bearing, tethered aerostat. This first-ever plume sampling from oil burned at an actual spill provided an opportunistic sample from which to characterize the particles' chemical properties for polycyclic aromatic hydrocarbons (PAHs), organic carbon, elemental carbon, metals, and polychlorinated dibenzodioxins/dibenzofurans (PCDDs/PCDFs) and physical properties for size and nanostructure. Thermal-optical analyses indicated that the particulate matter was 93% carbon, with 82% being refractory elemental carbon. PAHs accounted for roughly 68 μg/g of the PM filter mass and 5 mg/kg of oil burned, much lower than earlier laboratory-based studies. Microscopy indicated that the soot is distinct from more common soot in its aggregate size, primary particle size, and nanostructure. PM-bound metals were largely unremarkable, but PCDD/PCDF formation was observed, contrary to others' findings. Levels of lighter PCDD/PCDF and PAH compounds were reduced compared to historical samples, possibly due to volatilization or photo-oxidation. Published by Elsevier Ltd.

  15. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  16. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
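
    The "survey availability event" arithmetic can be sketched numerically. In the toy model below (invented mean and dispersion, not the authors' data), quadrat counts follow a negative binomial distribution, drawn via its gamma-Poisson mixture to mimic a patchy population:

```python
# Sketch: probability that a survey index falls outside 75-125% of the
# true mean (an "availability event") vs number of random quadrats, for
# a patchy population. The mean and dispersion k are illustrative.
import numpy as np

rng = np.random.default_rng(7)
true_mean, k = 4.0, 0.3        # mean count per quadrat; small k = patchy

def p_availability_event(n_samples, reps=20_000):
    # negative binomial via its gamma-Poisson mixture representation
    lam = rng.gamma(shape=k, scale=true_mean / k, size=(reps, n_samples))
    index = rng.poisson(lam).mean(axis=1)
    return np.mean((index < 0.75 * true_mean) | (index > 1.25 * true_mean))

for n in (4, 8, 15, 30, 60):
    print(f"n = {n:3d}   P(availability event) = {p_availability_event(n):.2f}")
# With strong patchiness (small k), the event probability stays high
# until the number of quadrats grows well beyond intuitive levels.
```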

  17. Applications of remote sensing, volume 1

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. ECHO successfully exploits the redundancy of states characteristic of sampled imagery of ground scenes to achieve better classification accuracy, reduce the number of classifications required, and reduce the variability of classification results. The information required to produce ECHO classifications consists of cell size, cell homogeneity, cell-to-field annexation parameters, input data, and a class-conditional marginal density statistics deck.

  18. Influence of an Antiperspirant on Foot Blister Incidence during Cross-Country Hiking

    DTIC Science & Technology

    1999-11-01

    blisters also increases. Therefore, reducing moisture may reduce blister incidence during physical activity. Objective: We examined whether an antiperspirant ... that used either an antiperspirant (20% aluminum chloride hexahydrate in anhydrous ethyl alcohol) or placebo (anhydrous ethyl alcohol) preparation ... blisters before and after. Results: Because of dropouts, the final sample size was 667 cadets with 328 in the antiperspirant group and 339 in the

  19. Public Acceptability in the UK and USA of Nudging to Reduce Obesity: The Example of Reducing Sugar-Sweetened Beverages Consumption.

    PubMed

    Petrescu, Dragos C; Hollands, Gareth J; Couturier, Dominique-Laurent; Ng, Yin-Lam; Marteau, Theresa M

    2016-01-01

    "Nudging"-modifying environments to change people's behavior, often without their conscious awareness-can improve health, but public acceptability of nudging is largely unknown. We compared acceptability, in the United Kingdom (UK) and the United States of America (USA), of government interventions to reduce consumption of sugar-sweetened beverages. Three nudge interventions were assessed: i. reducing portion Size, ii. changing the Shape of the drink containers, iii. changing their shelf Location; alongside two traditional interventions: iv. Taxation and v. Education. We also tested the hypothesis that describing interventions as working through non-conscious processes decreases their acceptability. Predictors of acceptability, including perceived intervention effectiveness, were also assessed. Participants (n = 1093 UK and n = 1082 USA) received a description of each of the five interventions which varied, by randomisation, in how the interventions were said to affect behaviour: (a) via conscious processes; (b) via non-conscious processes; or (c) no process stated. Acceptability was derived from responses to three items. Levels of acceptability for four of the five interventions did not differ significantly between the UK and US samples; reducing portion size was less accepted by the US sample. Within each country, Education was rated as most acceptable and Taxation the least, with the three nudge-type interventions rated between these. There was no evidence to support the study hypothesis: i.e. stating that interventions worked via non-conscious processes did not decrease their acceptability in either the UK or US samples. Perceived effectiveness was the strongest predictor of acceptability for all interventions across the two samples. In conclusion, nudge interventions to reduce consumption of sugar-sweetened beverages seem similarly acceptable in the UK and USA, being more acceptable than taxation, but less acceptable than education. Contrary to prediction, we found no evidence that highlighting the non-conscious processes by which nudge interventions may work decreases their acceptability. However, highlighting the effectiveness of all interventions has the potential to increase their acceptability.

  20. Annealing to optimize the primary drying rate, reduce freezing-induced drying rate heterogeneity, and determine T(g)' in pharmaceutical lyophilization.

    PubMed

    Searles, J A; Carpenter, J F; Randolph, T W

    2001-07-01

    In a companion paper we show that the freezing of samples in vials by shelf-ramp freezing results in significant primary drying rate heterogeneity because of a dependence of the ice crystal size on the nucleation temperature during freezing [1]. The purpose of this study was to test the hypothesis that post-freezing annealing, in which the product is held at a predetermined temperature for a specified duration, can reduce freezing-induced heterogeneity in sublimation rates. In addition, we test the impact of annealing on primary drying rates. Finally, we use the kinetics of relaxations during annealing to provide a simple measurement of T(g)', the glass transition temperature of the maximally freeze-concentrated amorphous phase, under conditions and time scales most appropriate for industrial lyophilization cycles. Aqueous solutions of hydroxyethyl starch (HES), sucrose, and HES:sucrose were either frozen by placement on a shelf while the temperature was reduced ("shelf-ramp frozen") or by immersion into liquid nitrogen. Samples were then annealed for various durations over a range of temperatures and partially lyophilized to determine the primary drying rate. The morphology of fully dried liquid nitrogen-frozen samples was examined using scanning electron microscopy. Annealing reduced primary drying rate heterogeneity for shelf-ramp frozen samples, and resulted in up to 3.5-fold increases in the primary drying rate. These effects were due to increased ice crystal sizes, simplified amorphous structures, and larger and more numerous holes on the cake surface of annealed samples. Annealed HES samples dissolved slightly faster than their unannealed counterparts. Annealing below T(g)' did not result in increased drying rates. We present a simple new annealing-lyophilization method of T(g)' determination that exploits this phenomenon. It can be carried out with a balance and a freeze-dryer, and has the additional advantage that a large number of candidate formulations can be evaluated simultaneously.

  1. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
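
    The compressed-sensing recipe the paper builds on (a few random linear measurements at the sensor, sparse recovery off-sensor) can be sketched as follows; the sparse signal, Gaussian measurement matrix, and Lasso solver are illustrative stand-ins, not the authors' pixel-level design:

```python
# Minimal compressed-sensing sketch (not the authors' pixel circuit):
# y = Phi @ x with far fewer measurements than pixels, then L1 recovery.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8             # pixels, measurements (~37%), nonzeros

x = np.zeros(n)                  # sparse "image" (e.g., in a transform basis)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = Phi @ x                      # compressed readout at the "sensor"

# Off-sensor reconstruction via L1-penalized least squares
rec = Lasso(alpha=1e-3, max_iter=50_000, fit_intercept=False).fit(Phi, y)
err = np.linalg.norm(rec.coef_ - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.3f}")
```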

  2. EXTENDING THE FLOOR AND THE CEILING FOR ASSESSMENT OF PHYSICAL FUNCTION

    PubMed Central

    Fries, James F.; Lingala, Bharathi; Siemons, Liseth; Glas, Cees A. W.; Cella, David; Hussain, Yusra N; Bruce, Bonnie; Krishnan, Eswar

    2014-01-01

    Objective The objective of the current study was to improve the assessment of physical function by improving the precision of assessment at the floor (extremely poor function) and at the ceiling (extremely good health) of the health continuum. Methods Under the NIH PROMIS program, we developed new physical function floor and ceiling items to supplement the existing item bank. Using item response theory (IRT) and the standard PROMIS methodology, we developed 30 floor items and 26 ceiling items and administered them during a 12-month prospective observational study of 737 individuals at the extremes of health status. Change over time was compared across anchor instruments and across items by means of effect sizes. Using the observed changes in scores, we back-calculated sample size requirements for the new and comparison measures. Results We studied 444 subjects with chronic illness and/or extreme age, and 293 generally fit subjects including athletes in training. IRT analyses confirmed that the new floor and ceiling items outperformed reference items (p<0.001). The estimated post-hoc sample size requirements were reduced by a factor of two to four at the floor and a factor of two at the ceiling. Conclusion Extending the range of physical function measurement can substantially improve measurement quality, can reduce sample size requirements and improve research efficiency. The paradigm shift from Disability to Physical Function includes the entire spectrum of physical function, signals improvement in the conceptual base of outcome assessment, and may be transformative as medical goals more closely approach societal goals for health. PMID:24782194

  3. The Long-Term Oxygen Treatment Trial for Chronic Obstructive Pulmonary Disease: Rationale, Design, and Lessons Learned.

    PubMed

    Yusen, Roger D; Criner, Gerard J; Sternberg, Alice L; Au, David H; Fuhlbrigge, Anne L; Albert, Richard K; Casaburi, Richard; Stoller, James K; Harrington, Kathleen F; Cooper, J Allen D; Diaz, Philip; Gay, Steven; Kanner, Richard; MacIntyre, Neil; Martinez, Fernando J; Piantadosi, Steven; Sciurba, Frank; Shade, David; Stibolt, Thomas; Tonascia, James; Wise, Robert; Bailey, William C

    2018-01-01

    The Long-Term Oxygen Treatment Trial demonstrated that long-term supplemental oxygen did not reduce time to hospital admission or death for patients who have stable chronic obstructive pulmonary disease and resting and/or exercise-induced moderate oxyhemoglobin desaturation, nor did it provide benefit for any other outcome measured in the trial. Nine months after initiation of patient screening, after randomization of 34 patients to treatment, a trial design amendment broadened the eligible population, expanded the primary outcome, and reduced the goal sample size. Within a few years, the protocol underwent minor modifications, and a second trial design amendment lowered the required sample size because of lower than expected treatment group crossover rates. After 5.5 years of recruitment, the trial met its amended sample size goal, and 1 year later, it achieved its follow-up goal. The process of publishing the trial results brought renewed scrutiny of the study design and the amendments. This article expands on the previously published design and methods information, provides the rationale for the amendments, and gives insight into the investigators' decisions about trial conduct. The story of the Long-Term Oxygen Treatment Trial may assist investigators in future trials, especially those that seek to assess the efficacy and safety of long-term oxygen therapy. Clinical trial registered with clinicaltrials.gov (NCT00692198).

  4. Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.

    PubMed

    Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M

    2017-09-01

    Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
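
    As a hedged, univariate stand-in for the kind of calculation reported (the study itself works from a multivariate linear mixed-effects model), a standard two-arm sample-size formula for detecting a difference in annual rates of change reads:

```python
# Back-of-envelope two-arm sample size for a difference in annual rate
# of change (a univariate stand-in for the paper's multivariate method).
# The effect size and slope standard deviation are placeholders.
from scipy.stats import norm

def n_per_arm(delta, sd_slope, alpha=0.05, power=0.80):
    """n per arm to detect a between-arm slope difference `delta`
    when each subject's estimated slope has standard deviation sd_slope."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sd_slope / delta) ** 2

# e.g., 25% slowing of a 1.0-point/year decline, slope SD 2.0 points/year
print(round(n_per_arm(delta=0.25, sd_slope=2.0)))   # ~1005 per arm
```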

  5. Grain-size-induced weakening of H2O ices I and II and associated anisotropic recrystallization

    USGS Publications Warehouse

    Stern, L.A.; Durham, W.B.; Kirby, S.H.

    1997-01-01

    Grain-size-dependent flow mechanisms tend to be favored over dislocation creep at low differential stresses and can potentially influence the rheology of low-stress, low-strain rate environments such as those of planetary interiors. We experimentally investigated the effect of reduced grain size on the solid-state flow of water ice I, a principal component of the asthenospheres of many icy moons of the outer solar system, using techniques new to studies of this deformation regime. We fabricated fully dense ice samples of approximate grain size 2 ± 1 μm by transforming "standard" ice I samples of 250 ± 50 μm grain size to the higher-pressure phase ice II, deforming them in the ice II field, and then rapidly releasing the pressure deep into the ice I stability field. At T ≤ 200 K, slow growth and rapid nucleation of ice I combine to produce a fine grain size. Constant-strain rate deformation tests conducted on these samples show that deformation rates are less stress sensitive than for standard ice and that the fine-grained material is markedly weaker than standard ice, particularly during the transient approach to steady state deformation. Scanning electron microscope examination of the deformed fine-grained ice samples revealed an unusual microstructure dominated by platelike grains that grew normal to the compression direction, with c axes preferentially oriented parallel to compression. In samples tested at T ≥ 220 K the elongation of the grains is so pronounced that the samples appear finely banded, with aspect ratios of grains approaching 50:1. The anisotropic growth of these crystallographically oriented neoblasts likely contributes to progressive work hardening observed during the transient stage of deformation. We have also documented remarkably similar microstructural development and weak mechanical behavior in fine-grained ice samples partially transformed and deformed in the ice II field.

  6. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    Abstract. The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
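
    A bare-bones sketch of the channelization step (synthetic images and random stand-in channel profiles; practical CHO studies use structured channels such as Gabor or Laguerre-Gauss functions):

```python
# Sketch of a channelized Hotelling observer (CHO) on synthetic data.
# Channel profiles, noise, and the signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
npix, nchan, nimg = 64 * 64, 10, 200

U = rng.normal(size=(npix, nchan))           # stand-in channel profiles
signal = np.zeros(npix)
signal[2000:2100] = 3.0                      # small synthetic signal

absent = rng.normal(size=(nimg, npix))       # signal-absent images
present = rng.normal(size=(nimg, npix)) + signal

va, vp = absent @ U, present @ U             # channel outputs (nimg, nchan)
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))      # pooled channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template

ta, tp = va @ w, vp @ w                      # scalar test statistics
d_prime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))
print(f"channel-space detectability d': {d_prime:.2f}")
```

    The point of the channel step is visible in the shapes: covariance estimation happens in a 10-dimensional channel space rather than over thousands of pixels, which is what reduces the number of images (scans) required.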

  7. Designing Case-Control Studies: Decisions About the Controls

    PubMed Central

    Hodge, Susan E.; Subaran, Ryan L.; Weissman, Myrna M.; Fyer, Abby J.

    2014-01-01

    The authors quantified, first, the effect of misclassified controls (i.e., individuals who are affected with the disease under study but who are classified as controls) on the ability of a case-control study to detect an association between a disease and a genetic marker, and second, the effect of leaving misclassified controls in the study, as opposed to removing them (thus decreasing sample size). The authors developed an informativeness measure of a study’s ability to identify real differences between cases and controls. They then examined this measure’s behavior when there are no misclassified controls, when there are misclassified controls, and when there were misclassified controls but they have been removed from the study. The results show that if, for example, 10% of controls are misclassified, the study’s informativeness is reduced to approximately 81% of what it would have been in a sample with no misclassified controls, whereas if these misclassified controls are removed from the study, the informativeness is only reduced to about 90%, despite the reduced sample size. If 25% are misclassified, those figures become approximately 56% and 75%, respectively. Thus, leaving the misclassified controls in the control sample is worse than removing them altogether. Finally, the authors illustrate how insufficient power is not necessarily circumvented by having an unlimited number of controls. The formulas provided by the authors enable investigators to make rational decisions about removing misclassified controls or leaving them in. PMID:22854929
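
    The dilution effect is easy to illustrate with a two-proportion power calculation. This is an illustration of the phenomenon, not the authors' informativeness measure; the allele frequencies and sample sizes are invented:

```python
# Illustration (not the authors' exact informativeness measure): how
# misclassified controls dilute a case-control allele-frequency contrast.
from scipy.stats import norm

p_case, p_ctrl = 0.30, 0.20      # illustrative allele frequencies

def power(n_per_group, miscl=0.0, alpha=0.05):
    # a fraction `miscl` of "controls" are actually affected cases
    p_ctrl_obs = (1 - miscl) * p_ctrl + miscl * p_case
    pbar = 0.5 * (p_case + p_ctrl_obs)
    se = (2 * pbar * (1 - pbar) / n_per_group) ** 0.5
    z = abs(p_case - p_ctrl_obs) / se
    return 1 - norm.cdf(norm.ppf(1 - alpha / 2) - z)

# leaving 10% misclassified controls in vs removing them; removal is
# approximated here by shrinking both groups to 450 for simplicity
print("clean, n=500:       ", round(power(500), 3))
print("10% misclassified:  ", round(power(500, miscl=0.10), 3))
print("removed, n=450:     ", round(power(450), 3))
```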

  8. The influence of the compression interface on the failure behavior and size effect of concrete

    NASA Astrophysics Data System (ADS)

    Kampmann, Raphael

    The failure behavior of concrete materials is not completely understood because conventional test methods fail to assess the material response independent of the sample size and shape. To study the influence of strength- and strain-affecting test conditions, four typical concrete sample types were experimentally evaluated in uniaxial compression and analyzed for strength, deformational behavior, crack initiation/propagation, and fracture patterns under varying boundary conditions. Both low-friction and conventional compression interfaces were assessed. High-speed video technology was used to monitor macrocracking. Inferential data analysis showed reliably lower strength results for reduced surface friction at the compression interfaces, regardless of sample shape. Reciprocal comparisons revealed statistically significant strength differences between most sample shapes. Crack initiation and propagation were found to differ for dissimilar compression interfaces. The principal stress and strain distributions were analyzed, and the strain domain was found to resemble the experimental results, whereas the stress analysis failed to explain failure for reduced end confinement. Neither stresses nor strains indicated strength reductions due to reduced friction, and therefore, buckling effects were considered. The high-speed video analysis revealed localized buckling phenomena, regardless of end confinement. Slender elements were the result of low friction, and stocky fragments developed under conventional confinement. The critical buckling load increased accordingly. The research showed that current test methods do not reflect the "true" compressive strength and that concrete failure is strain driven. Ultimate collapse results from buckling preceded by unstable cracking.

  9. Efficacy of a 2% climbazole shampoo for reducing Malassezia population sizes on the skin of naturally infected dogs.

    PubMed

    Cavana, P; Petit, J-Y; Perrot, S; Guechi, R; Marignac, G; Reynaud, K; Guillot, J

    2015-12-01

    Shampoo therapy is often recommended for the control of Malassezia overgrowth in dogs. The aim of this study was to evaluate the in vivo activity of a 2% climbazole shampoo against Malassezia pachydermatis yeasts in naturally infected dogs. Eleven research colony Beagles were used. The dogs were distributed randomly into two groups: group A (n=6) and group B (n=5). Group A dogs were washed with a 2% climbazole shampoo, while group B dogs were treated with a physiological shampoo base. The shampoos were applied once weekly for two weeks. The population size of Malassezia yeasts on the skin was determined by fungal culture, using modified Dixon's medium contact plates pressed onto the left concave pinna, axillae, groins, and perianal area before and after shampoo application. Samples were compared using the Wilcoxon rank sum test. Samples collected after the 2% climbazole shampoo application showed a significant and rapid reduction of Malassezia population sizes. One hour after the first climbazole shampoo application, the Malassezia reduction was already statistically significant, and 15 days after the second climbazole shampoo, Malassezia population sizes were still significantly decreased. No significant reduction of Malassezia population sizes was observed in group B dogs. The application of a 2% climbazole shampoo significantly reduced Malassezia population sizes on the skin of naturally infected dogs. Application of 2% climbazole shampoo may be useful for the control of Malassezia overgrowth, and it may also be proposed as prevention when recurrences are frequent. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  10. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    PubMed Central

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561
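
    The subsampling logic can be sketched with a deliberately simplified survival estimator (capture probability assumed known, unlike a full Cormack-Jolly-Seber fit; all parameter values are invented):

```python
# Sketch of the subsampling experiment: how the precision (CV) of a
# simple survival estimate responds to sample size. This is NOT a CJS
# fit; capture probability is assumed known, and values are invented.
import numpy as np

rng = np.random.default_rng(5)
phi, p_cap = 0.7, 0.4      # annual survival; recapture probability (known)

def cv_of_phi_hat(n_lobsters, reps=5000):
    # an animal is re-sighted iff it survived AND was then captured
    seen = rng.random((reps, n_lobsters)) < phi * p_cap
    phi_hat = seen.mean(axis=1) / p_cap
    return phi_hat.std() / phi_hat.mean()

for n in (200, 500, 1000):
    print(f"n = {n:4d}   CV(phi_hat) = {cv_of_phi_hat(n):.3f}")
```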

  11. Size variation in early human mandibles and molars from Klasies River, South Africa: comparison with other middle and late Pleistocene assemblages and with modern humans.

    PubMed

    Royer, Danielle F; Lockwood, Charles A; Scott, Jeremiah E; Grine, Frederick E

    2009-10-01

    Previous studies of the Middle Stone Age human remains from Klasies River have concluded that they exhibited more sexual dimorphism than extant populations, but these claims have not been assessed statistically. We evaluate these claims by comparing size variation in the best-represented elements at the site, namely the mandibular corpora and M(2)s, to that in samples from three recent human populations using resampling methods. We also examine size variation in these same elements from seven additional middle and late Pleistocene sites: Skhūl, Dolní Vestonice, Sima de los Huesos, Arago, Krapina, Shanidar, and Vindija. Our results demonstrate that size variation in the Klasies assemblage was greater than in recent humans, consistent with arguments that the Klasies people were more dimorphic than living humans. Variation in the Skhūl, Dolní Vestonice, and Sima de los Huesos mandibular samples is also higher than in the recent human samples, indicating that the Klasies sample was not unusual among middle and late Pleistocene hominins. In contrast, the Neandertal samples (Krapina, Shanidar, and Vindija) do not evince relatively high mandibular and molar variation, which may indicate that the level of dimorphism in Neandertals was similar to that observed in extant humans. These results suggest that the reduced levels of dimorphism in Neandertals and living humans may have developed independently, though larger fossil samples are needed to test this hypothesis.

  12. Development of shrinkage resistant microfibre-reinforced cement-based composites

    NASA Astrophysics Data System (ADS)

    Hamedanimojarrad, P.; Adam, G.; Ray, A. S.; Thomas, P. S.; Vessalas, K.

    2012-06-01

    Different types of shrinkage can cause serious durability problems in restrained concrete elements due to crack formation and propagation. Several classes of fibres are used by the concrete industry in order to reduce crack size and crack number. In previous studies, most of these fibre types were found to be effective in reducing the number and size of cracks, but not in reducing shrinkage strain. This study deals with the influence of a newly introduced type of polyethylene fibre on drying shrinkage reduction. The novel fibre is a polyethylene microfibre with a new geometry, which was shown to reduce total shrinkage in mortars. This special hydrophobic polyethylene microfibre also reduces the moisture loss of mortar samples. Experimental results on short- and long-term drying shrinkage, as well as on several other properties, are reported. The hydrophobic polyethylene microfibre showed promising improvement in shrinkage reduction even at very low concentrations (0.1% of cement weight).

  13. Hindlimb muscle architecture in non-human great apes and a comparison of methods for analysing inter-species variation

    PubMed Central

    Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S

    2011-01-01

    By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM), to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000

  14. From decimeter- to centimeter-sized mobile microrobots: the development of the MINIMAN system

    NASA Astrophysics Data System (ADS)

    Woern, Heinz; Schmoeckel, Ferdinand; Buerkle, Axel; Samitier, Josep; Puig-Vidal, Manel; Johansson, Stefan A. I.; Simu, Urban; Meyer, Joerg-Uwe; Biehl, Margit

    2001-10-01

    Based on small mobile robots, the MINIMAN system presented here provides a platform for micro-manipulation tasks in very different kinds of applications. Three exemplary applications demonstrate the capabilities of the system: the high-precision assembly of an optical system consisting of three millimeter-sized parts, the positioning of single 20-μm cells under the light microscope, and the handling of tiny samples inside the scanning electron microscope are all done by the same kind of robot. For the different tasks, the robot is equipped with appropriate tools such as micro-pipettes or grippers with force and tactile sensors. For the extension to a multi-robot system, it is necessary to further reduce the size of the robots. For the above-mentioned robot prototypes, a slip-stick driving principle is employed. While this design proves to work very well for the described decimeter-sized robots, it is not suitable for further miniaturized robots because of their reduced inertia. Therefore, the developed centimeter-sized robot is driven by multilayered piezoactuators performing defined steps without a slipping phase. To reduce the number of connecting wires, the microrobot has integrated circuits on board. They include high-voltage drivers and a serial communication interface for a minimized number of wires.

  15. DNA pooling strategies for categorical (ordinal) traits

    USDA-ARS?s Scientific Manuscript database

    Despite reduced genotyping costs in recent years, obtaining genotypes for all individuals in a population may still not be feasible when sample size is large. DNA pooling provides a useful alternative to determining genotype effects. Clustering algorithms allow for grouping of individuals (observati...

  16. Methodological Issues in Curriculum-Based Reading Assessment.

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; And Others

    1984-01-01

    Three studies involving elementary students examined methodological issues in curriculum-based reading assessment. Results indicated that (1) whereas sample duration did not affect concurrent validity, increasing duration reduced performance instability and increased performance slopes and (2) domain size was related inversely to performance slope…

  17. Synthetic Control of Crystallite Size of Silver Vanadium Phosphorous Oxide (Ag 0.50VOPO 4·1.9H 2O): Impact on Electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huie, Matthew M.; Marschilok, Amy C.; Takeuchi, Esther S.

    Here, this report describes a synthetic approach to control the crystallite size of silver vanadium phosphorous oxide, Ag 0.50VOPO 4·1.9H 2O, and the impact on electrochemistry in lithium-based batteries. Ag 0.50VOPO 4·1.9H 2O was synthesized using a stirred hydrothermal method over a range of temperatures. X-ray diffraction (XRD) was used to confirm the crystalline phase and the crystallite sizes of 11, 22, 38, 40, 49, and 120 nm. Particle shape was plate-like with edges <1 micron to >10 microns. Under galvanostatic reduction, the samples with 22 nm crystallites and 880 nm particles produced the highest capacity, ~25% more capacity than the 120 nm sample. Notably, the 11 nm sample resulted in reduced delivered capacity and higher resistance, consistent with increased grain boundaries contributing to resistance. Under intermittent pulsing, ohmic resistance decreased with increasing crystallite size from 11 nm to 120 nm, implying that electrical conduction within a crystal is more facile than between crystallites and across grain boundaries. Finally, this systematic study of material dimension shows that crystallite size impacts deliverable capacity as well as cell resistance, where both interparticle and intraparticle transport are important.

  18. Sampling artifacts in perspective and stereo displays

    NASA Astrophysics Data System (ADS)

    Pfautz, Jonathan D.

    2001-06-01

    The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.
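
    The numeric core of the inconsistency is that both cues shrink with depth but are rounded to whole pixels independently. A toy pinhole-style model (all geometry constants are arbitrary assumptions):

```python
# Toy model of the sampling conflict: perspective size and stereo
# disparity both fall off with depth, but pixel quantization rounds
# each independently. All geometry constants are arbitrary assumptions.
import numpy as np

scale = 100.0          # pixels per metre of extent at 1 m (toy constant)
ipd = 0.065            # interpupillary distance (m)
width = 0.10           # object width (m)

z = np.linspace(1.0, 1.5, 6)              # object depths (m)
size_c = scale * width / z                # continuous size cue (pixels)
disp_c = scale * ipd / z                  # continuous disparity cue (pixels)
size_q, disp_q = np.round(size_c), np.round(disp_c)

for zi, sc, sq, dc, dq in zip(z, size_c, size_q, disp_c, disp_q):
    print(f"z={zi:.2f} m  size {sc:5.2f} -> {sq:3.0f} px   "
          f"disparity {dc:5.2f} -> {dq:3.0f} px")
# Two depths can share one quantized disparity while differing in
# quantized size (or vice versa), so the cues conflict across depth.
```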

  20. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by the use of samples of variable size, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To introduce adequate management for orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the expected maximum numbers of samples needed to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
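
    The decision lines of such a plan follow Wald's classical sequential test for binomial counts. A sketch of the arithmetic, with alpha = beta = 0.10 as in the study; the infestation levels p0 and p1 bracketing the 2% economic threshold are assumptions chosen for illustration:

```python
# Wald-style sequential decision lines for presence/absence sampling
# around a 2% economic threshold. p0 and p1 are illustrative choices;
# alpha = beta = 0.10 matches the error rates fixed in the study.
import math

p0, p1 = 0.01, 0.04            # acceptable vs action-level infestation
alpha = beta = 0.10

# Standard SPRT slope (s) and intercepts (h0, h1) for binomial counts
g = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
s = math.log((1 - p0) / (1 - p1)) / g
h1 = math.log((1 - beta) / alpha) / g
h0 = math.log((1 - alpha) / beta) / g

print(f"continue while  {-h0:.2f} + {s:.4f}*n  <  cum. infested  <  "
      f"{h1:.2f} + {s:.4f}*n")
# After n sampling units, stop and classify "above threshold" if the
# cumulative count of infested units crosses the upper line, or
# "below threshold" if it falls under the lower line.
```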

  1. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
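
    The center-variation piece of the simulation can be reconstructed roughly as below. The Beta parameters reproduce the stated mean of 0.3 and coefficient of variation of 0.3; the relative risk, number of centers, and pooled two-proportion analysis are illustrative assumptions rather than the trial's actual analysis plan:

```python
# Sketch of the sample-size simulation: Standard-Arm preterm-birth risk
# varies by center as Beta(mean 0.3, CV 0.3). Relative risk, number of
# centers, and the pooled z-test analysis are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
mean, cv = 0.30, 0.30
var = (cv * mean) ** 2
nu = mean * (1 - mean) / var - 1          # a + b for the Beta distribution
a, b = mean * nu, (1 - mean) * nu         # Beta(a, b) with given mean/CV

def power(n_per_arm, n_centers=30, rr=0.70, reps=2000, alpha=0.05):
    hits = 0
    m = n_per_arm // n_centers            # patients per center per arm
    n = m * n_centers                     # realized arm size after rounding
    for _ in range(reps):
        p_std = rng.beta(a, b, n_centers) # center-specific control risks
        x_std = rng.binomial(m, p_std).sum()
        x_trt = rng.binomial(m, np.clip(rr * p_std, 0, 1)).sum()
        p1, p2 = x_std / n, x_trt / n
        pp = (x_std + x_trt) / (2 * n)
        se = np.sqrt(2 * pp * (1 - pp) / n)
        hits += abs(p1 - p2) / se > norm.ppf(1 - alpha / 2)
    return hits / reps

print("power, n=540/arm:", power(540))
print("power, n=700/arm:", power(700))
```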

  2. Development of a Miniature Mass Spectrometer and an Automated Detector for Sampling Explosive Materials

    PubMed Central

    Hashimoto, Yuichiro

    2017-01-01

    The development of a robust ionization source using counter-flow APCI, a miniature mass spectrometer, and an automated sampling system for detecting explosives is described. These development efforts using mass spectrometry were made in order to improve the efficiency of on-site detection in areas such as security, environmental, and industrial applications. A development team, including the author, has struggled for nearly 20 years to enhance the robustness and reduce the size of mass spectrometers to meet the requirements needed for on-site applications. This article focuses on the recent results related to the detection of explosive materials, where automated particle sampling using a cyclone concentrator permitted the inspection time to be successfully reduced to 3 s. PMID:28337396

  3. A double-observer method for reducing bias in faecal pellet surveys of forest ungulates

    USGS Publications Warehouse

    Jenkins, K.J.; Manly, B.F.J.

    2008-01-01

    1. Faecal surveys are used widely to study variations in abundance and distribution of forest-dwelling mammals when direct enumeration is not feasible. The utility of faecal indices of abundance is limited, however, by observational bias and variation in faecal disappearance rates that obscure their relationship to population size. We developed methods to reduce variability in faecal surveys and improve reliability of faecal indices. 2. We used double-observer transect sampling to estimate observational bias of faecal surveys of Roosevelt elk Cervus elaphus roosevelti and Columbian black-tailed deer Odocoileus hemionus columbianus in Olympic National Park, Washington, USA. We also modelled differences in counts of faecal groups obtained from paired cleared and uncleared transect segments as a means to adjust standing crop faecal counts for a standard accumulation interval and to reduce bias resulting from variable decay rates. 3. Estimated detection probabilities of faecal groups ranged from < 0.2-1.0 depending upon the observer, whether the faecal group was from elk or deer, faecal group size, distance of the faecal group from the sampling transect, ground vegetation cover, and the interaction between faecal group size and distance from the transect. 4. Models of plot-clearing effects indicated that standing crop counts of deer faecal groups required 34% reduction on flat terrain and 53% reduction on sloping terrain to represent faeces accumulated over a standard 100-day interval, whereas counts of elk faecal groups required 0% and 46% reductions on flat and sloping terrain, respectively. 5. Synthesis and applications. Double-observer transect sampling provides a cost-effective means of reducing observational bias and variation in faecal decay rates that obscure the interpretation of faecal indices of large mammal abundance. Given the variation we observed in observational bias of faecal surveys and persistence of faeces, we emphasize the need for future researchers to account for these comparatively manageable sources of bias before comparing faecal indices spatially or temporally. Double-observer sampling methods are readily adaptable to study variations in faecal indices of large mammals at the scale of the large forest reserve, natural area, or other forested regions when direct estimation of populations is problematic. © 2008 The Authors.
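
    The heart of any double-observer correction is a Lincoln-Petersen style calculation on the overlap between the two observers' detections (the counts below are invented; the study itself modelled detection as a function of covariates such as group size and distance):

```python
# Core double-observer correction (Lincoln-Petersen form): estimate each
# observer's detection probability from the overlap in what the two
# observers found, then inflate the raw count. Counts are invented.
n1 = 62    # faecal groups detected by observer 1
n2 = 55    # faecal groups detected by observer 2
m  = 41    # groups detected by BOTH observers

p1_hat = m / n2            # P(obs 1 detects | group present)
p2_hat = m / n1            # P(obs 2 detects | group present)
n_seen = n1 + n2 - m       # distinct groups actually found
p_any = 1 - (1 - p1_hat) * (1 - p2_hat)
n_hat = n_seen / p_any     # abundance corrected for missed groups

print(f"p1={p1_hat:.2f}, p2={p2_hat:.2f}, "
      f"detected {n_seen}, estimated {n_hat:.1f}")
```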

  4. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    PubMed

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study, we can generally state the following: an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Are catchment-wide erosion rates really "Catchment-Wide"? Effects of grain size on erosion rates determined from 10Be

    NASA Astrophysics Data System (ADS)

    Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.

    2012-12-01

    Early studies pioneering the method for catchment-wide erosion rates by measuring 10Be in alluvial sediment were taken at river mouths and used the sand-size grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125-0.710 mm fraction or a 0.125-4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates). We, therefore, hypothesize that the coarser grain sizes we included are preferentially sampling a smaller upstream area, and not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions: 0.125-0.710 mm, 0.710-4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in the grain size fraction that yields the highest erosion rate, generally, the coarser grain size fractions have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710-4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand-size fraction, resulting in erosion rates more pertinent to that sample point than to the entire catchment.

  6. Salmonella enteritidis surveillance by egg immunology: impact of the sampling scheme on the release of contaminated table eggs.

    PubMed

    Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie

    2011-08-01

    Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought onto the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.

  7. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-µm (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute) with minimal drawdown (ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-µm pore size capsule filter, (2) a 0.45-µm pore size capsule filter and a 0.0029-µm pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-µm and a 0.05-µm pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 µg/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized even when low-flow sample-collection techniques are used in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  8. Non-Destructive Evaluation of Grain Structure Using Air-Coupled Ultrasonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvin, A. D.; Burrell, R. K.; Cole, E.G.

    2009-08-01

    Cast material has a grain structure that is relatively non-uniform. There is a desire to evaluate the grain structure of this material non-destructively. Traditionally, grain size measurement is a destructive process involving the sectioning and metallographic imaging of the material. Generally, this is performed on a representative sample on a periodic basis. Sampling is inefficient and costly. Furthermore, the resulting data may not provide an accurate description of the entire part's average grain size or grain size variation. This project is designed to develop a non-destructive acoustic scanning technique, using Chirp waveforms, to quantify average grain size and grain size variation across the surface of a cast material. A Chirp is a signal in which the frequency increases or decreases over time (frequency modulation). As a Chirp passes through a material, the material's grains reduce the signal (attenuation) by absorbing the signal energy. Geophysics research has shown a direct correlation between Chirp wave attenuation and mean grain size in geological structures. The goal of this project is to demonstrate that Chirp waveform attenuation can be used to measure grain size and grain variation in cast metals (uranium and other materials of interest). An off-axis ultrasonic inspection technique using air-coupled ultrasonics has been developed to determine grain size in cast materials. The technique gives a uniform response across the volume of the component. This technique has been demonstrated to provide generalized trends of grain variation over the samples investigated.
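
    As a loose illustration of the measurement principle, the sketch below generates a linear chirp with SciPy and estimates broadband attenuation from the energy ratio of transmitted and received signals. The frequency-dependent loss standing in for grain scattering is an assumed toy model, not the project's calibration.

    ```python
    import numpy as np
    from scipy.signal import chirp

    fs = 1e6                        # sampling rate (Hz)
    t = np.arange(0, 1e-3, 1 / fs)  # 1 ms pulse
    tx = chirp(t, f0=50e3, t1=t[-1], f1=400e3, method='linear')

    # Toy received signal: a frequency-proportional loss stands in for
    # grain scattering (coarser grains -> stronger high-frequency loss).
    f = np.fft.rfftfreq(t.size, 1 / fs)
    loss_db_per_hz = 2e-5           # assumed, not a measured constant
    rx = np.fft.irfft(np.fft.rfft(tx) * 10 ** (-loss_db_per_hz * f / 20),
                      n=t.size)

    atten_db = 20 * np.log10(np.linalg.norm(tx) / np.linalg.norm(rx))
    print(f"broadband attenuation: {atten_db:.1f} dB")
    ```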

  9. Is postural tremor size controlled by interstitial potassium concentration in muscle?

    PubMed Central

    Lakie, M; Hayes, N; Combes, N; Langford, N

    2004-01-01

    Objectives: To determine whether factors associated with postural tremor operate by altering muscle interstitial K+. Methods: An experimental approach was used to investigate the effects of procedures designed to increase or decrease interstitial K+. Postural physiological tremor was measured by conventional means. Brief periods of ischaemic muscle activity were used to increase muscle interstitial K+. Infusion of the β2 agonist terbutaline was used to decrease plasma (and interstitial) K+. Blood samples were taken for the determination of plasma K+. Results: Ischaemia rapidly reduced tremor size, but only when the muscle was active. The β2 agonist produced a slow and progressive rise in tremor size that was almost exactly mirrored by a slow and progressive decrease in plasma K+. Conclusions: Ischaemic reduction of postural tremor has been attributed to effects on muscle spindles or an unexplained effect on muscle. This study showed that ischaemia did not reduce tremor size unless there was accompanying muscular activity. An accumulation of K+ in the interstitium of the ischaemic active muscle may blunt the response of the muscle and reduce its fusion frequency, so that the force output becomes less pulsatile and tremor size decreases. When a β2 agonist is infused, the rise in tremor mirrors the resultant decrease in plasma K+. Decreased plasma K+ reduces interstitial K+ concentration and may produce greater muscular force fluctuation (more tremor). Many other factors that affect postural tremor size may exert their effect by altering plasma K+ concentration, thereby changing the concentration of K+ in the interstitial fluid. PMID:15201362

  10. Alternative Fuel Reduction Treatments in the Gunflint Corridor of the Superior National Forest: Second year results and sampling recommendations

    Treesearch

    Daniel W. Gilmore; Douglas N. Kastendick; John C. Zasada; Paula J. Anderson

    2003-01-01

    Fuel loadings need to be considered in two ways: 1) the total fuel loadings of various size classes and 2) their distribution across a site. Fuel treatments in this study affected both. We conclude that 1) mechanical treatments of machine piling and salvage logging reduced fine and heavy fuel loadings and 2) prescribed fire was successful in reducing fine fuel...

  11. Reduction in growth of pole-sized ponderosa pine related to a pandora moth outbreak in Central Oregon.

    Treesearch

    P.H. Cochran

    1998-01-01

    Defoliation by pandora moth in a ponderosa pine spacing study in 1992 and 1994 generally increased as spacings increased from 2 to 5.7 meters and then decreased as spacings increased to 8 meters. Defoliation did not increase mortality during the 1990-94 period, but volume growth was reduced. Basal area increments of sample trees were reduced 25 percent the first...

  12. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    PubMed

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.

  13. Alterations of intrinsic tongue muscle properties with aging.

    PubMed

    Cullins, Miranda J; Connor, Nadine P

    2017-12-01

    Age-related decline in the intrinsic lingual musculature could contribute to swallowing disorders, yet the effects of age on these muscles are unknown. We hypothesized that aging reduces muscle fiber size and shifts fibers toward slower myosin heavy chain (MyHC) types. Intrinsic lingual muscles were sampled from 8 young adult (9 months) and 8 old (32 months) Fischer 344/Brown Norway rats. Fiber size and MyHC were determined by fluorescent immunohistochemistry. Age was associated with fewer rapidly contracting muscle fibers and more slowly contracting fibers. Decreased fiber size was found only in the transverse and verticalis muscles. Shifts in muscle composition from faster to slower MyHC fiber types may contribute to age-related changes in swallowing duration. Decreasing muscle fiber size in the protrusive transverse and verticalis muscles may contribute to reductions in maximum isometric tongue pressure found with age. Differences among regions and muscles may be associated with different functional demands. Muscle Nerve 56: E119-E125, 2017. © 2017 Wiley Periodicals, Inc.

  14. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots, ranging in number from 523 to 2143, were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
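
    Two of the incidence-based estimators named above have closed forms that are short enough to state in code. A minimal sketch, assuming a plots-by-species 0/1 incidence matrix (toy data, not the Cameroon dataset):

    ```python
    import numpy as np

    def chao2_jack1(incidence):
        """Incidence-based richness estimates from a (plots x species)
        0/1 matrix: Chao2 = S + Q1^2/(2*Q2) and Jack1 = S + Q1*(m-1)/m,
        where Q1/Q2 count species found in exactly one/two plots."""
        m = incidence.shape[0]
        counts = incidence.sum(axis=0)
        s_obs = int((counts > 0).sum())
        q1 = int((counts == 1).sum())
        q2 = int((counts == 2).sum())
        if q2 > 0:
            chao2 = s_obs + q1 ** 2 / (2 * q2)
        else:                          # bias-corrected form when Q2 = 0
            chao2 = s_obs + q1 * (q1 - 1) / 2
        jack1 = s_obs + q1 * (m - 1) / m
        return chao2, jack1

    rng = np.random.default_rng(0)
    plots = (rng.random((500, 60)) < 0.05).astype(int)  # toy incidence data
    print(chao2_jack1(plots))
    ```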

  15. The Effects of Popping Popcorn Under Reduced Pressure

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Cooper, Amanda

    2008-03-01

    In our experiments, we model the popping of popcorn as an adiabatic process and develop a process for improving the efficiency of popcorn production. By lowering the pressure of the popcorn during the popping process, we induce an increase in popcorn size, while decreasing the number of remaining unpopped kernels. In this project we run numerous experiments using three of the most common popping devices, a movie popcorn maker, a stove pot, and a microwave. We specifically examine the effects of varying the pressure on total sample size, flake size and waste. An empirical relationship is found between these variables and the pressure.

  16. Si-Ge Nano-Structured with Tungsten Silicide Inclusions

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    Traditional silicon germanium high temperature thermoelectrics have potential for improvements in figure of merit via nano-structuring with a silicide phase. A second phase of nano-sized silicides can theoretically reduce the lattice component of thermal conductivity without significantly reducing the electrical conductivity. However, experimentally achieving such improvements in line with the theory is complicated by factors such as control of silicide size during sintering, dopant segregation, matrix homogeneity, and sintering kinetics. Samples are prepared using powder metallurgy techniques, including mechanochemical alloying via ball milling and spark plasma sintering for densification. In addition to microstructural development, the thermal stability of thermoelectric transport properties is reported, as well as couple- and device-level characterization.

  17. WSi2 in Si(1-x)Ge(x) Composites: Processing and Thermoelectric Properties

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    Traditional SiGe thermoelectrics have potential for an enhanced figure of merit (ZT) via nano-structuring with a silicide phase, such as WSi2. A second phase of nano-sized silicides can theoretically reduce the lattice component of thermal conductivity without significantly reducing the electrical conductivity. However, experimentally achieving such improvements in line with the theory is complicated by factors such as control of silicide size during sintering, dopant segregation, matrix homogeneity, and sintering kinetics. Samples were prepared using powder metallurgy techniques, including mechano-chemical alloying via ball milling and spark plasma sintering for densification. Processing, micro-structural development, and thermoelectric properties will be discussed. Additionally, couple- and device-level characterization will be introduced.

  18. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fit with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as the number of samples increased from 1 to 15, with the acceptance ratio of both techniques increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  19. Image Steganography In Securing Sound File Using Arithmetic Coding Algorithm, Triple Data Encryption Standard (3DES) and Modified Least Significant Bit (MLSB)

    NASA Astrophysics Data System (ADS)

    Nasution, A. B.; Efendi, S.; Suwilo, S.

    2018-04-01

    The amount of data inserted as 8-bit audio samples with the LSB algorithm affects the PSNR value, and thereby the image quality after insertion (fidelity). In this research, audio samples are therefore inserted using 5 bits with the MLSB algorithm to reduce the amount of inserted data; beforehand, the audio samples are compressed with the Arithmetic Coding algorithm to reduce file size. The audio samples are also encrypted with the Triple DES algorithm for better security. The resulting PSNR values exceed 50 dB, so it can be concluded that image quality remains good, as PSNR values above 40 dB are considered acceptable.
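
    The PSNR figures quoted above follow from the standard definition PSNR = 10·log10(MAX²/MSE). A minimal sketch on toy images (not the paper's data); flipping only least significant bits keeps the MSE near 0.5, hence PSNR above 50 dB:

    ```python
    import numpy as np

    def psnr(original, stego, max_val=255.0):
        """Peak signal-to-noise ratio in dB between cover and stego
        images; values above ~40 dB are commonly read as good fidelity."""
        diff = original.astype(np.float64) - stego.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    stego = cover ^ np.random.randint(0, 2, cover.shape).astype(np.uint8)  # flip LSBs
    print(f"PSNR: {psnr(cover, stego):.1f} dB")   # ~51 dB
    ```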

  20. Thermomechanical treatment for improved neutron irradiation resistance of austenitic alloy (Fe-21Cr-32Ni)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. Tan; J. T. Busby; H. J. M. Chichester

    2013-06-01

    An optimized thermomechanical treatment (TMT) applied to austenitic alloy 800H (Fe-21Cr-32Ni) had shown significant improvements in corrosion resistance and basic mechanical properties. This study examined its effect on radiation resistance by irradiating both solution-annealed (SA) and TMT samples at 500 degrees C to 3 dpa. Microstructural characterization using transmission electron microscopy revealed that the radiation-induced Frank loops, voids, and γ′-Ni3(Ti,Al) precipitates had similar sizes in the SA and TMT samples. The amounts of radiation-induced defects, and more significantly of γ′ precipitates, were however reduced in the TMT samples. These reductions would reduce radiation hardening by approximately 40.9% compared to the SA samples. This study indicates that optimized TMT is an economical approach for effective overall property improvements.

  1. Hexagonal platelet-like magnetite as a biosignature of thermophilic iron-reducing bacteria and its applications to the exploration of the modern deep, hot biosphere and the emergence of iron-reducing bacteria in early precambrian oceans.

    PubMed

    Li, Yi-Liang

    2012-12-01

    Dissimilatory iron-reducing bacteria are able to enzymatically reduce ferric iron, coupling this reduction to the oxidation of organic carbon. This mechanism induces the mineralization of fine magnetite crystals characterized by a wide distribution in size and irregular morphologies that are indistinguishable from authigenic magnetite. Thermoanaerobacter are thermophilic iron-reducing bacteria that predominantly inhabit terrestrial hot springs or deep crusts and have the capacity to transform amorphous ferric iron into magnetite with sizes up to 120 nm. In this study, I first characterize the formation of hexagonal platelet-like magnetite of a few hundred nanometers in cultures of Thermoanaerobacter spp. strain TOR39. Biogenic magnetite with such large crystal sizes and unique morphology has never been observed in other abiotic or biotic processes and thus can be considered a potential biosignature for thermophilic iron-reducing bacteria. The unique crystallographic features and strong ferrimagnetic properties of these crystals allow easy and rapid screening for the previous presence of iron-reducing bacteria in deep terrestrial crustal samples that are unsuitable for biological detection methods, and also the search for biogenic magnetite in banded iron formations, which were deposited only during the first 2 billion years of Earth's history and hold evidence of early life.

  2. The program structure does not reliably recover the correct population structure when sampling is uneven: subsampling and new estimators alleviate the problem.

    PubMed

    Puechmaille, Sebastien J

    2016-05-01

    Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method performs relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample sizes between subpopulations and/or hierarchical levels of population structure had not yet been tested, despite these being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved. © 2016 John Wiley & Sons Ltd.
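
    The subsampling strategy can be as simple as randomly drawing each sampling locality down to the size of the smallest one before re-running structure. A minimal sketch of that idea (the paper's strategy may differ in detail):

    ```python
    import numpy as np

    def balance_subsample(labels, rng=None):
        """Return indices giving an even number of individuals per
        locality: each locality is randomly subsampled down to the size
        of the smallest one, reducing sampling unevenness."""
        rng = rng or np.random.default_rng()
        labels = np.asarray(labels)
        groups = np.unique(labels)
        n_min = min(int((labels == g).sum()) for g in groups)
        keep = [rng.choice(np.flatnonzero(labels == g), n_min, replace=False)
                for g in groups]
        return np.sort(np.concatenate(keep))

    sites = ['A'] * 100 + ['B'] * 12 + ['C'] * 40
    idx = balance_subsample(sites, np.random.default_rng(0))
    print(len(idx))   # 36 individuals, 12 per site
    ```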

  3. The sample handling system for the Mars Icebreaker Life mission: from dirt to data.

    PubMed

    Davé, Arwen; Thompson, Sarah J; McKay, Christopher P; Stoker, Carol R; Zacny, Kris; Paulsen, Gale; Mellerowicz, Bolek; Glass, Brian J; Willson, David; Bonaccorsi, Rosalba; Rask, Jon

    2013-04-01

    The Mars Icebreaker Life mission will search for subsurface life on Mars. It consists of three payload elements: a drill to retrieve soil samples from approximately 1 m below the surface, a robotic sample handling system to deliver the sample from the drill to the instruments, and the instruments themselves. This paper will discuss the robotic sample handling system. Collecting samples from ice-rich soils on Mars in search of life presents two challenges: protection of that icy soil--considered a "special region" with respect to planetary protection--from contamination from Earth, and delivery of the icy, sticky soil to spacecraft instruments. We present a sampling device that meets these challenges. We built a prototype system and tested it at martian pressure, drilling into ice-cemented soil, collecting cuttings, and transferring them to the inlet port of the SOLID2 life-detection instrument. The tests successfully demonstrated that the Icebreaker drill, sample handling system, and life-detection instrument can collectively operate in these conditions and produce science data that can be delivered via telemetry--from dirt to data. Our results also demonstrate the feasibility of using an air gap to prevent forward contamination. We define a set of six analog soils for testing over a range of soil cohesion, from loose sand to basalt soil, with angles of repose of 27° and 39°, respectively. Particle size is a key determinant of jamming of mechanical parts by soil particles. Jamming occurs when the clearance between moving parts is equal in size to the most common particle size or equal to three of these particles together. Three particles acting together tend to form bridges and lead to clogging. Our experiments show that rotary-hammer action of the Icebreaker drill influences the particle size, typically reducing particle size by ≈ 100 μm.

  4. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    PubMed

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method reduced bias substantially, by up to a ten-fold margin, compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
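
    A bare-bones version of such a Monte Carlo power check fits in a few lines. The sketch below, with an assumed effect size, baseline prevalence, sensitivity and specificity (none taken from the paper), contrasts nominal power with power under non-differential response misclassification in a logistic model:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def power(n, beta=0.5, sens=0.9, spec=0.9, nsim=500, alpha=0.05):
        hits = 0
        for _ in range(nsim):
            x = rng.normal(size=n)
            p = 1 / (1 + np.exp(-(-1.0 + beta * x)))   # true logistic model
            y = rng.random(n) < p
            # imperfect classification of the response
            y_obs = np.where(y, rng.random(n) < sens, rng.random(n) > spec)
            fit = sm.Logit(y_obs.astype(float), sm.add_constant(x)).fit(disp=0)
            hits += fit.pvalues[1] < alpha
        return hits / nsim

    print(power(400, sens=1.0, spec=1.0))  # close to nominal power
    print(power(400, sens=0.9, spec=0.9))  # visibly reduced
    ```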

  5. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
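
    Median sizes like those above come from interpolating a cumulative particle-size distribution to 50% of mass. A minimal sketch with invented sieve data, not the study's measurements:

    ```python
    import numpy as np

    def d50(sizes_um, mass_fractions):
        """Median particle size: interpolate the cumulative mass fraction
        to 0.5. Sizes ascending; fractions should sum to 1."""
        cum = np.cumsum(mass_fractions)
        return float(np.interp(0.5, cum, sizes_um))

    sizes = np.array([32, 63, 125, 250, 500, 1000])         # sieve sizes, um
    fracs = np.array([0.10, 0.10, 0.15, 0.20, 0.25, 0.20])  # invented
    print(f"d50 ~ {d50(sizes, fracs):.0f} um")
    ```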

  6. Laser Surface Modification of H13 Die Steel using Different Laser Spot Sizes

    NASA Astrophysics Data System (ADS)

    Aqida, S. N.; Naher, S.; Brabazon, D.

    2011-05-01

    This paper presents a laser surface modification process for AISI H13 tool steel using three laser spot sizes, with the aim of achieving reduced grain size and surface roughness. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process the AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). A metallographic study and image analysis were performed to measure the grain size, and the modified surface roughness was measured using a two-dimensional surface profilometer. From the metallographic study, the smallest grain sizes measured in the laser-modified surface were between 0.51 μm and 2.54 μm. The minimum surface roughness, Ra, recorded was 3.0 μm. This surface roughness of the modified die steel is similar to the surface quality of cast products. The correlation between grain size and hardness followed the Hall-Petch relationship. The potential increase in surface hardness represents an important means of extending tooling life.
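
    The Hall-Petch relationship referenced here is H = H0 + k·d^(-1/2). A minimal sketch with assumed constants (the paper does not report fitted H0 or k):

    ```python
    import numpy as np

    def hall_petch(d_um, h0=400.0, k=250.0):
        """Hall-Petch hardness estimate: H = H0 + k * d**-0.5, with d in
        micrometres. H0 (HV) and k (HV*um^0.5) are assumed values."""
        return h0 + k / np.sqrt(d_um)

    for d in (2.54, 1.0, 0.51):   # grain sizes in the range reported above
        print(f"d = {d:4.2f} um -> H ~ {hall_petch(d):.0f} HV")
    ```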

  7. Microstructural and optical properties of Mn doped NiO nanostructures synthesized via sol-gel method

    NASA Astrophysics Data System (ADS)

    Shah, Shamim H.; Khan, Wasi; Naseem, Swaleha; Husain, Shahid; Nadeem, M.

    2018-04-01

    Undoped and Mn(0, 5%, 10% and 15%) doped NiO nanostructures were synthesized by the sol-gel method. Structure, morphology and optical properties were investigated through XRD, FTIR, SEM/EDS and UV-visible absorption spectroscopy techniques. XRD data analysis reveals the single-phase nature and cubic crystal symmetry of the samples, and the average crystallite size decreases with the doping of Mn ions up to 10%. FTIR spectra further confirmed the purity and composition of the synthesized samples. The non-spherical shape of the nanostructures was observed from SEM micrographs, and the grain size of the nanostructures reduces with Mn doping in NiO, whereas agglomeration increases in the doped samples. The optical band gap was estimated using Tauc's relation and found to increase on incorporation of Mn up to 10% in the host lattice and then decrease with further doping.
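
    Band gap estimation with Tauc's relation amounts to extrapolating the linear region of (αhν)² versus hν to zero absorption for a direct-gap material. A minimal sketch on synthetic data; the gap value and fit window are illustrative:

    ```python
    import numpy as np

    def tauc_direct_gap(hv, alpha, fit_window):
        """Fit (alpha*hv)^2 vs hv over a linear window and return the
        x-intercept, i.e. the direct band gap Eg in eV."""
        y = (alpha * hv) ** 2
        lo, hi = fit_window
        m = (hv >= lo) & (hv <= hi)
        slope, intercept = np.polyfit(hv[m], y[m], 1)
        return -intercept / slope

    hv = np.linspace(3.0, 4.5, 300)                   # photon energy (eV)
    alpha = np.sqrt(np.clip(hv - 3.6, 0, None)) / hv  # synthetic edge, Eg = 3.6
    print(tauc_direct_gap(hv, alpha, fit_window=(3.7, 4.2)))  # ~3.6 eV
    ```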

  8. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the proposed technique can therefore potentially be applied in various fields of scientific research.

  9. Comparative microstructure study of oil palm fruit bunch fibre, mesocarp and kernels after microwave pre-treatment

    NASA Astrophysics Data System (ADS)

    Chang, Jessie S. L.; Chan, Y. S.; Law, M. C.; Leo, C. P.

    2017-07-01

    The implementation of microwave technology in palm oil processing offers numerous advantages; besides eliminating polluted palm oil mill effluent, it also reduces energy consumption, processing time and space. However, microwave exposure could damage a material's microstructure, affecting fruit quality attributes that relate to physical structure, including texture and appearance. In this work, empty fruit bunches, mesocarp and kernels were microwave-dried and their respective microstructures were examined. The microwave pretreatments were conducted at 100 W and 200 W, and the microstructures of both treated and untreated samples were evaluated using a scanning electron microscope. The micrographs demonstrated that microwaves do not significantly influence the kernel and mesocarp, but noticeable change was found in the empty fruit bunches, where the sizes of the granular starch were reduced and a small portion of the silica bodies were disrupted. From the experimental data, microwave irradiation was shown to be most efficiently applied to empty fruit bunches, followed by mesocarp and kernel, as significant weight loss and size reduction were observed after the microwave treatments. The current work showed that microwave treatment did not otherwise change the physical surfaces of the samples, although sample shrinkage was observed.

  10. Microstructure and Mechanical Behavior of Porous Ti–6Al–4V Processed by Spherical Powder Sintering

    PubMed Central

    Reig, Lucía; Tojal, Concepción; Busquets, David J.; Amigó, Vicente

    2013-01-01

    Reducing the stiffness of titanium is an important issue in improving the behavior of this material when working together with bone, and can be achieved by generating a porous structure. The aim of this research was to analyze the porosity and mechanical behavior of Ti–6Al–4V porous samples developed by spherical powder sintering. Four different microsphere sizes were sintered at temperatures ranging from 1300 to 1400 °C for 2, 4 and 8 h. An open, interconnected porosity was obtained, with mean pore sizes ranging from 54.6 to 140 µm. The stiffness of the samples diminished by as much as 40% when compared to that of the solid material, and the mechanical properties were affected mainly by powder particle size. Bending strengths ranging from 48 to 320 MPa and compressive strengths from 51 to 255 MPa were obtained. PMID:28788365

  11. Microstructure and Mechanical Behavior of Porous Ti-6Al-4V Processed by Spherical Powder Sintering.

    PubMed

    Reig, Lucía; Tojal, Concepción; Busquets, David J; Amigó, Vicente

    2013-10-23

    Reducing the stiffness of titanium is an important issue in improving the behavior of this material when working together with bone, and can be achieved by generating a porous structure. The aim of this research was to analyze the porosity and mechanical behavior of Ti-6Al-4V porous samples developed by spherical powder sintering. Four different microsphere sizes were sintered at temperatures ranging from 1300 to 1400 °C for 2, 4 and 8 h. An open, interconnected porosity was obtained, with mean pore sizes ranging from 54.6 to 140 µm. The stiffness of the samples diminished by as much as 40% when compared to that of the solid material, and the mechanical properties were affected mainly by powder particle size. Bending strengths ranging from 48 to 320 MPa and compressive strengths from 51 to 255 MPa were obtained.

  12. Measurement of Vibrated Bulk Density of Coke Particle Blends Using Image Texture Analysis

    NASA Astrophysics Data System (ADS)

    Azari, Kamran; Bogoya-Forero, Wilinthon; Duchesne, Carl; Tessier, Jayson

    2017-09-01

    A rapid and nondestructive machine vision sensor was developed for predicting the vibrated bulk density (VBD) of petroleum coke particles based on image texture analysis. It could be used for making corrective adjustments to a paste plant operation to reduce green anode variability (e.g., changes in binder demand). Wavelet texture analysis (WTA) and gray level co-occurrence matrix (GLCM) algorithms were used jointly for extracting the surface textural features of coke aggregates from images. These were correlated with the VBD using partial least-squares (PLS) regression. Coke samples of several sizes and from different sources were used to test the sensor. Variations in the coke surface texture introduced by coke size and source allowed for making good predictions of the VBD of individual coke samples and mixtures of them (blends involving two sources and different sizes). Promising results were also obtained for coke blends collected from an industrial-baked carbon anode manufacturer.
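
    The final regression step, texture features to VBD, can be sketched with scikit-learn's PLS implementation. The arrays below are synthetic stand-ins for the WTA/GLCM features and measured VBD values, and the number of components is an assumption:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(120, 40))     # stand-in for WTA/GLCM texture features
    w = rng.normal(size=40)
    y = X @ w + rng.normal(scale=0.5, size=120)   # stand-in for measured VBD

    pls = PLSRegression(n_components=5)  # in practice chosen by cross-validation
    print(cross_val_score(pls, X, y, cv=5, scoring='r2').mean())
    ```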

  13. Molecular-Size-Separated Brown Carbon Absorption for Biomass-Burning Aerosol at Multiple Field Sites.

    PubMed

    Di Lorenzo, Robert A; Washenfelder, Rebecca A; Attwood, Alexis R; Guo, Hongyu; Xu, Lu; Ng, Nga L; Weber, Rodney J; Baumann, Karsten; Edgerton, Eric; Young, Cora J

    2017-03-21

    Biomass burning is a known source of brown carbon aerosol in the atmosphere. We collected filter samples of biomass-burning emissions at three locations in Canada and the United States with transport times of 10 h to >3 days. We analyzed the samples with size-exclusion chromatography coupled to molecular absorbance spectroscopy to determine absorbance as a function of molecular size. The majority of absorption was due to molecules >500 Da, and these contributed an increasing fraction of absorption as the biomass-burning aerosol aged. This suggests that the smallest molecular weight fraction is more susceptible to processes that lead to reduced light absorption, while larger-molecular-weight species may represent recalcitrant brown carbon. We calculate that these large-molecular-weight species are composed of more than 20 carbons with as few as two oxygens and would be classified as extremely low volatility organic compounds (ELVOCs).

  14. Electrochemical Behavior Assessment of Micro- and Nano-Grained Commercial Pure Titanium in H2SO4 Solutions

    NASA Astrophysics Data System (ADS)

    Fattah-alhosseini, Arash; Ansari, Ali Reza; Mazaheri, Yousef; Karimi, Mohsen

    2017-02-01

    In this study, the electrochemical behavior of commercial pure titanium with both coarse-grained (annealed sample with an average grain size of about 45 µm) and nano-grained microstructures was compared by potentiodynamic polarization, electrochemical impedance spectroscopy (EIS), and Mott-Schottky analysis. Nano-grained Ti, with a typical grain size of about 90 nm, was successfully made by a six-cycle accumulative roll-bonding process at room temperature. Potentiodynamic polarization plots and impedance measurements revealed that, as a result of grain refinement, the passive behavior of the nano-grained sample was improved compared to that of annealed pure Ti in H2SO4 solutions. Mott-Schottky analysis indicated that the passive films behaved as n-type semiconductors in H2SO4 solutions and that grain refinement did not change the semiconductor type of the passive films. Also, Mott-Schottky analysis showed that the donor densities decreased as the grain size of the samples decreased. Finally, all electrochemical tests showed that the electrochemical behavior of the nano-grained sample was improved compared to that of annealed pure Ti, mainly due to the formation of a thicker and less defective oxide film.
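
    Donor densities are read off the slope of the Mott-Schottky plot, 1/C² = (2/(e·ε·ε0·Nd))·(E − Efb − kT/e). A minimal sketch with an illustrative slope and an assumed relative permittivity for the passive film:

    ```python
    import numpy as np

    E_CHARGE = 1.602e-19   # C
    EPS0 = 8.854e-12       # F/m

    def donor_density(potential_v, inv_c2, eps_r):
        """Nd = 2 / (e * eps_r * eps0 * slope), slope from a linear fit
        of 1/C^2 (m^4/F^2) against electrode potential (V)."""
        slope, _ = np.polyfit(potential_v, inv_c2, 1)
        return 2.0 / (E_CHARGE * eps_r * EPS0 * slope)

    # Illustrative n-type data; slope chosen to give Nd ~ 1e26 m^-3
    E = np.linspace(0.2, 0.8, 20)
    inv_c2 = 235.0 * (E - 0.05)
    print(f"Nd ~ {donor_density(E, inv_c2, eps_r=60):.1e} m^-3")
    ```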

  15. Bacterial contamination of boar semen affects the litter size.

    PubMed

    Maroto Martín, Luis O; Muñoz, Eduardo Cruz; De Cupere, Françoise; Van Driessche, Edilbert; Echemendia-Blanco, Dannele; Rodríguez, José M Machado; Beeckmans, Sonia

    2010-07-01

    One hundred and fifteen semen samples were collected from 115 different boars from two farms in Cuba. The boars belonged to five different breeds. Evaluation of the semen sample characteristics (volume, pH, colour, smell, motility of sperm cells) revealed that they meet international standards. The samples were also tested for the presence of agglutinated sperm cells and for bacterial contamination. Seventy-five percent of the ejaculates were contaminated with at least one type of bacteria, and E. coli was by far the major contaminant, being present in 79% of the contaminated semen samples (n=68). Other contaminating bacteria belonged to the genera Proteus (n=31), Serratia (n=31), Enterobacter (n=24), Klebsiella (n=12), Staphylococcus (n=10), Streptococcus (n=8) and Pseudomonas (n=7). Anaerobic bacteria were detected in only one sample. Pearson's analysis of the data revealed that there is a positive correlation between the presence of E. coli and sperm agglutination, and a negative correlation between sperm agglutination and litter size. One-way ANOVA and post hoc Tukey analysis of 378 litters showed that litter size is significantly reduced when semen contaminated with sperm-agglutinating E. coli above a threshold value of 3.5×10³ CFU/ml is used. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases into the CDR (~1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  17. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
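
    The core of the proposed estimator, regularized regression through the SVD of the predictor matrix, can be sketched without the full Bayesian machinery. Everything below (penalty, data shapes) is illustrative:

    ```python
    import numpy as np

    def svd_ridge(X, y, lam):
        """Ridge solution via the SVD of centered predictors:
        beta = V diag(s / (s^2 + lam)) U^T y. Directions with small
        singular values -- where collinear LiDAR metrics put most of
        their noise -- are shrunk the hardest."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        d = s / (s ** 2 + lam)
        return Vt.T @ (d * (U.T @ y))

    rng = np.random.default_rng(3)
    X = rng.normal(size=(50, 80))      # more LiDAR metrics than field plots
    X[:, 40:] = X[:, :40] + 0.01 * rng.normal(size=(50, 40))  # collinearity
    y = 2.0 * X[:, 0] + rng.normal(size=50)
    beta = svd_ridge(X - X.mean(0), y - y.mean(), lam=10.0)
    print(beta[:3])
    ```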

  18. Evaluating the Wald Test for Item-Level Comparison of Saturated and Reduced Models in Cognitive Diagnosis

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Lee, Young-Sun

    2013-01-01

    This article used the Wald test to evaluate the item-level fit of a saturated cognitive diagnosis model (CDM) relative to the fits of the reduced models it subsumes. A simulation study was carried out to examine the Type I error and power of the Wald test in the context of the G-DINA model. Results show that when the sample size is small and a…

  19. Microplastics, Macroproblems?

    NASA Astrophysics Data System (ADS)

    Greene, V.; Adams, S.; Adams, A.

    2017-12-01

    Microplastics and plastics have polluted water all over the world, including the Great Lakes. Microplastics can result from plastics that have broken up into smaller pieces, or they can be purposely made and used in a variety of products. Microplastics are less than 5 mm in length. These plastics cause problems because they are non-biodegradable. Animals that have ingested these plastics have had reduced reproductive rates and health problems, and have even died from malnutrition. Our goal is to learn more about this issue. To do this, we will take water samples from different areas along the Gulf of Mexico and inland bays along the Florida coastline and compare the amount of microplastics found in each area. To analyze our samples we will vacuum-filter the water samples using gridded filter paper. We will then organize these samples by size and color. The control for our experiment will be filtered water. Our hypothesis is that the Gulf of Mexico water samples will have more microplastics than the bay water samples. We want to research this topic because microplastics can harm our ecosystems by affecting the health of marine animals.

  20. Time and expected value of sample information wait for no patient.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    The expected value of sample information (EVSI) from prospective trials has previously been modeled as the product of EVSI per patient and the number of patients across the relevant time horizon less those "used up" in trials. However, this implicitly assumes that the eligible patient population to which information from a trial can be applied across a time horizon is independent of the time for trial accrual, follow-up and analysis. This article demonstrates that in calculating the EVSI of a trial, the number of patients who benefit from trial information should be reduced by those treated outside as well as within the trial over the time until trial evidence is updated, including time for accrual, follow-up and analysis. Accounting for time is shown to reduce the eligible patient population: 1) independent of the size of the trial, in allowing for time of follow-up and analysis, and 2) dependent on the size of the trial, for time of accrual, where the patient accrual rate is less than incidence. Consequently, the EVSI and expected net gain (ENG) at any given trial size are shown to be lower when accounting for time, with lower ENG reinforced in the case of trials undertaken while delaying decisions by additional opportunity costs of time. Appropriately accounting for time reduces the EVSI of a trial design and increases the opportunity costs of trials undertaken with delay, leading to a lower likelihood of trialing being optimal and to smaller trial designs where trialing is optimal.
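
    The time adjustment argued for here is simple arithmetic on the eligible population. A hedged sketch of that bookkeeping, not the paper's exact formulation, with all rates and times invented:

    ```python
    def effective_population(incidence_per_yr, horizon_yr, n_trial,
                             accrual_rate_per_yr, t_follow_yr):
        """Patients who can still benefit from trial information once
        time for accrual, follow-up and analysis is accounted for."""
        t_accrual = n_trial / accrual_rate_per_yr
        t_update = t_accrual + t_follow_yr   # time until evidence is updated
        treated_before_update = incidence_per_yr * t_update
        remaining = max(incidence_per_yr * (horizon_yr - t_update), 0.0)
        return remaining, treated_before_update

    n_benefit, n_lost = effective_population(
        incidence_per_yr=2000, horizon_yr=10,
        n_trial=750, accrual_rate_per_yr=500, t_follow_yr=1.0)
    print(n_benefit, n_lost)  # EVSI scales with n_benefit, not incidence*horizon
    ```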

  1. Effect of particle size on band gap and DC electrical conductivity of TiO2 nanomaterial

    NASA Astrophysics Data System (ADS)

    Avinash, B. S.; Chaturmukha, V. S.; Jayanna, H. S.; Naveen, C. S.; Rajeeva, M. P.; Harish, B. M.; Suresh, S.; Lamani, Ashok R.

    2016-05-01

    Materials reduced to the nanoscale can exhibit properties different from those they exhibit on the microscale, enabling unique applications. When TiO2 is reduced to the nanoscale it shows unique properties, of which the electrical aspects are highly important. This paper presents the increase in energy gap and decrease in conductivity with decreasing particle size of pure nano-TiO2 synthesized by hydrolysis and peptization of titanium isopropoxide. Varying the pH of the aqueous solution and peptizing the resultant suspension forms nano-TiO2 of different particle sizes; as the pH of the solution is made more acidic, a reduction in particle size is observed. This is confirmed from XRD using the Scherrer formula and from SEM. The as-prepared samples were studied for UV absorbance and for DC conductivity from room temperature to 400°C. From the Tauc plot, it was observed that the energy band gap increases as the particle size decreases and that TiO2 has a direct band gap. From the Arrhenius plot, a decrease in conductivity with decreasing particle size is evident, attributed to hopping of charge carriers, showing that the band gap can be tailored by varying the particle size.
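
    Crystallite sizes from XRD line broadening follow the Scherrer formula d = Kλ/(β·cos θ). A minimal sketch assuming Cu Kα radiation and a shape factor K = 0.9; the peak values are illustrative:

    ```python
    import numpy as np

    def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
        """Scherrer crystallite size d = K*lambda / (beta*cos(theta)),
        beta being the peak FWHM in radians. Cu K-alpha assumed."""
        beta = np.deg2rad(fwhm_deg)
        theta = np.deg2rad(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * np.cos(theta))

    # Illustrative anatase (101) reflection near 2-theta = 25.3 degrees
    print(f"{scherrer_size(fwhm_deg=0.8, two_theta_deg=25.3):.1f} nm")  # ~10 nm
    ```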

  2. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Proteoglycan depletion and size reduction in lesions of early grade chondromalacia of the patella.

    PubMed

    Väätäinen, U; Häkkinen, T; Kiviranta, I; Jaroma, H; Inkinen, R; Tammi, M

    1995-10-01

    To determine the content and molecular size of proteoglycans (PGs) in patellar chondromalacia (CM) and control cartilage, as a first step in investigating the role of matrix alterations in the pathogenesis of this disease. Chondromalacia tissue from 10 patients was removed with a surgical knife. Using identical techniques, apparently healthy cartilage of the same site was obtained from 10 age-matched cadavers (mean age 31 years in both groups). Additional pathological cartilage was collected from 67 patients with grades II-IV CM (classified according to Outerbridge) using a motorised shaver under arthroscopic control. The shaved cartilage chips were collected with a dense net from the irrigation fluid of the shaver. The content of tissue PGs was determined by Safranin O precipitation or uronic acid content, and the molecular size by mobility on agarose gel electrophoresis. The mean PG content of the CM tissue sampled with a knife was dramatically reduced, to only 15% of that in controls. The cartilage chips collected from shaving operations on grades II, III, and IV CM showed a decreasing PG content: 9%, 5%, and 1% of controls, respectively. Electrophoretic analysis of PGs extracted with guanidium chloride from the shaved tissue samples suggested a significantly reduced size of aggrecans in the mild (grade II) lesions. These data show that there is already a dramatic and progressive depletion of PGs in grade II CM lesions. This explains the softening of cartilage, a typical finding in the arthroscopic examination of CM. The PG size reduction observed in grade II implicates proteolytic attack as a factor in the pathogenesis of CM.

  4. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and on the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations is reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement, and will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.
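
    For an idealized circular throat, the local entry criterion reduces to the Young-Laplace pressure. A minimal sketch assuming water-air surface tension and perfect wetting; the paper's criterion operates on the actual throat geometry rather than a single effective radius:

    ```python
    import math

    def entry_capillary_pressure(r_throat_m, gamma=0.0728, theta_deg=0.0):
        """Young-Laplace entry pressure Pc = 2*gamma*cos(theta)/r for a
        throat of effective radius r (water-air at ~20 C assumed)."""
        return 2.0 * gamma * math.cos(math.radians(theta_deg)) / r_throat_m

    # A 20-micron throat drains once capillary pressure exceeds ~7.3 kPa
    print(f"{entry_capillary_pressure(20e-6):.0f} Pa")
    ```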

  5. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors affecting the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, a larger sampling size and a higher height threshold were required to obtain accurate corn LAI estimates than for height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
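
    The point-density experiment can be mimicked with random thinning. The sketch below works under stated assumptions: synthetic return heights, an illustrative mean-canopy-height metric, and the 7.32 points/m2 figure taken from the abstract; it only shows how a cloud is decimated to a target density before metrics are recomputed.

    import numpy as np

    rng = np.random.default_rng(0)

    def thin(z, full_density, target_density):
        """Randomly retain returns so the cloud approximates target_density."""
        keep = rng.random(len(z)) < target_density / full_density
        return z[keep]

    def mean_canopy_height(z, height_threshold=0.3):
        """Mean height of returns above the threshold (ground hits excluded)."""
        above = z[z > height_threshold]
        return above.mean() if above.size else 0.0

    z = rng.gamma(shape=2.0, scale=0.8, size=10000)  # synthetic return heights [m]
    for d in (7.32, 4.0, 1.0, 0.5):
        zt = thin(z, 7.32, d)
        print(f"{d:5.2f} pts/m2 -> mean canopy height {mean_canopy_height(zt):.3f} m")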

  6. Adaptive significance of small body size: strength and motor performance of school children in Mexico and Papua New Guinea.

    PubMed

    Malina, R M; Little, B B; Shoup, R F; Buschang, P H

    1987-08-01

    The postulated superior functional efficiency in association with reduced body size under conditions of chronic protein-energy undernutrition was considered in school children from rural Mexico and coastal Papua New Guinea. Grip strength and three measures of motor performance were measured in cross-sectional samples of children 6-16 years of age from a rural agricultural community in Oaxaca, Mexico, and from the coastal community Pere on Manus Island, Papua New Guinea. The strength and performance of a mixed-longitudinal sample of well nourished children from Philadelphia was used as a reference. The Oaxaca and Pere children are significantly shorter and lighter and are not as strong as the well nourished children. Motor performances of Pere children compare favorably to those of the better-nourished Philadelphia children, whereas those of the Oaxaca children are poorer. Throwing performance is more variable. When expressed relative to body size, strength is similar in the three samples, but the running and jumping performances of Pere children per unit body size are better than the relative performances of Oaxaca and Philadelphia children. Throwing performance per unit body size is better in the undernourished children. The influence of age, stature, and weight on the performance of Oaxaca and Pere children is generally similar to that for well nourished children. These results suggest that the hypothesized adaptive significance of small body size for the functional efficiency of populations living under conditions of chronic undernutrition varies between populations and with performance tasks.

  7. The effectiveness of increased apical enlargement in reducing intracanal bacteria.

    PubMed

    Card, Steven J; Sigurdsson, Asgeir; Orstavik, Dag; Trope, Martin

    2002-11-01

    It has been suggested that the apical portion of a root canal is not adequately disinfected by typical instrumentation regimens. The purpose of this study was to determine whether instrumentation to sizes larger than typically used would more effectively remove culturable bacteria from the canal. Forty patients with clinical and radiographic evidence of apical periodontitis were recruited from the endodontic clinic. Mandibular cuspids (n = 2), bicuspids (n = 11), and molars (mesial roots) (n = 27) were selected for the study. Bacterial sampling was performed upon access and after each of two consecutive instrumentations. The first instrumentation utilized 1% NaOCl and 0.04 taper ProFile rotary files. The cuspid and bicuspid canals were instrumented to a #8 size and the molar canals to a #7 size. The second instrumentation utilized LightSpeed files and 1% NaOCl irrigation for further enlargement of the apical third. Typically, molars were instrumented to size 60 and cuspid/bicuspid canals to size 80. Our findings show that 100% of the cuspid/bicuspid canals and 81.5% of the molar canals were rendered bacteria-free after the first instrumentation. The molar results improved to 89% after the second instrumentation. Of the molar mesial canals without a clinically detectable communication (59.3%), 93% were rendered bacteria-free with the first instrumentation. Using a Wilcoxon rank sum test, statistically significant differences (p < 0.0001) were found between the initial sample and the samples after the first and second instrumentations. The differences between the samples that followed the two instrumentation regimens were not significant (p = 0.0617). It is concluded that simple root canal systems (without multiple canal communications) may be rendered bacteria-free when preparation of this type is utilized.

  8. POWER AND SAMPLE SIZE CALCULATIONS FOR LINEAR HYPOTHESES ASSOCIATED WITH MIXTURES OF MANY COMPONENTS USING FIXED-RATIO RAY DESIGNS

    EPA Science Inventory

    Response surface methodology, often supported by factorial designs, is the classical experimental approach that is widely accepted for detecting and characterizing interactions among chemicals in a mixture. In an effort to reduce the experimental effort as the number of compound...

  9. Quantifying viruses and bacteria in wastewater - results, quality control, and interpretation methods

    USDA-ARS?s Scientific Manuscript database

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bac...

  10. Coffee Stirrers and Drinking Straws as Disposable Spatulas

    ERIC Educational Resources Information Center

    Turano, Morgan A.; Lobuono, Cinzia; Kirschenbaum, Louis J.

    2015-01-01

    Although metal spatulas are damaged through everyday use and become discolored and corroded by chemical exposure, plastic drinking straws are inexpensive, sterile, and disposable, reducing the risk of cross-contamination during laboratory procedures. Drinking straws are also useful because they come in a variety of sizes; narrow sample containers…

  11. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    NASA Astrophysics Data System (ADS)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury at thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (<1 mm) soil samples 14259,672, 15401,147, and 67481,96 has provided an insight into how grain size, composition, maturity (i.e., exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and in five size fractions: <25, 25-63, 63-125, 125-250, and >250 μm. Sample 14259,672 is a highly mature highlands regolith with a large proportion of agglutinates [2]. The high agglutinate content (>60%) causes a 'flattening' of the spectrum, with reflectance in the Reststrahlen band (RB) region reduced by as much as 30% in comparison to samples dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift of the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than that of the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra most closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each sample dominates the bulk spectrum regardless of other physical properties. This has implications for surface analyses of other Solar System bodies where some mineral phases or components could be concentrated in a particular size fraction. For example, the anorthite grains in 67481,96 are dominantly >25 μm in size and therefore may not contribute proportionally to the bulk average spectrum (compared to the <25 μm fraction). The resulting bulk spectrum of 67481,96 has a CF position 0.2 μm higher than all size fractions >25 μm and therefore does not represent a true average composition of the sample. Further investigation of how grain size and composition alter the average spectrum is required to fully understand infrared spectra of planetary surfaces. [1] - Hiesinger H., Helbert J., and MERTIS Co-I Team. (2010). The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission. Planetary and Space Science. 58, 144-165. [2] - NASA Lunar Sample Compendium. https://curator.jsc.nasa.gov/lunar/lsc/

  12. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the intervention condition and the last sequence remains in the control condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
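
    The abstract elides the paper's new design-effect expression, so it is not reproduced here. The sketch below shows only the standard background logic that the comparisons rest on: a clustered design inflates the sample size of an individually randomised trial by its design effect. The n_ind value and the (m, icc) pairs are illustrative.

    def design_effect_crt(m, icc):
        """Classical design effect of a parallel cluster-randomised trial
        with total cluster size m and intracluster correlation icc."""
        return 1.0 + (m - 1.0) * icc

    def clustered_sample_size(n_ind, de):
        """Total sample size after inflation by the design effect."""
        return round(n_ind * de)

    n_ind = 400  # hypothetical individually randomised sample size
    for m, icc in [(20, 0.01), (20, 0.10), (100, 0.05)]:
        de = design_effect_crt(m, icc)
        print(f"m={m:3d} icc={icc:.2f} -> DE={de:.2f}, N={clustered_sample_size(n_ind, de)}")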

  13. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with those obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and a higher volume ratio compared with the single-step annealed samples.

  14. Removal of Non-metallic Inclusions from Nickel Base Superalloys by Electromagnetic Levitation Melting in a Slag

    NASA Astrophysics Data System (ADS)

    Manjili, Mohsen Hajipour; Halali, Mohammad

    2018-02-01

    Samples of INCONEL 718 were levitated and melted in a slag by the application of an electromagnetic field. The effects of temperature, time, and slag composition on the inclusion content of the samples were studied thoroughly. Samples were compared with the original alloy to study the effect of the process on inclusions. The size, shape, and chemical composition of the remaining non-metallic inclusions were investigated. The samples were prepared according to the Standard Guide for Preparing and Evaluating Specimens for Automatic Inclusion Assessment of Steel (ASTM E 768-99), and the results were reported by means of the Standard Test Methods for Determining the Inclusion Content of Steel (ASTM E 45-97). Results indicated that, by increasing temperature and processing time, a greater level of cleanliness could be achieved, and the number and size of the remaining inclusions decreased significantly. It was also observed that increasing the calcium fluoride content of the slag helped reduce the inclusion content.

  15. Development of a depth-integrated sample arm (DISA) to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, William R.; ,; Roger T. Bannerman,

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  16. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling.

    PubMed

    Selbig, William R; Bannerman, Roger T

    2011-04-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  17. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water. © 2010 Publishing Technology.

  18. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
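
    At the core of all PIE-type reconstructions sits the detector-plane modulus constraint, sketched below. The loop over aperture sizes and positions that defines vaPIE, and its specific object-update formula, are not reproduced here; the toy arrays stand in for real measurements.

    import numpy as np

    def apply_modulus_constraint(exit_wave, measured_intensity):
        """One detector-plane update: propagate the current exit-wave guess,
        keep its phase, enforce the measured modulus, propagate back."""
        far_field = np.fft.fft2(exit_wave)
        phase = np.exp(1j * np.angle(far_field))
        corrected = np.sqrt(measured_intensity) * phase
        return np.fft.ifft2(corrected)

    rng = np.random.default_rng(1)
    guess = rng.random((64, 64)) * np.exp(1j * rng.random((64, 64)))
    intensity = np.abs(np.fft.fft2(np.ones((64, 64)))) ** 2  # toy measurement
    updated = apply_modulus_constraint(guess, intensity)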

  19. Raman and dielectric studies of GdMnO3 bulk ceramics synthesized from nano powders

    NASA Astrophysics Data System (ADS)

    Samantaray, S.; Mishra, D. K.; Roul, B. K.

    2017-05-01

    Nanocrystalline GdMnO3 (GMO) powders have been synthesized by a simple chemical route, the pyrophoric reaction technique, and then sintered in the form of bulk pellets at 850°C for 24 hours using a slow step-sintering schedule. It is observed that, by reducing the particle size, the chemical route enhances the mixing process and lowers the sintering temperature needed to obtain a single-phase material, compared with a polycrystalline sample prepared directly from micron-sized commercial powder. Raman spectroscopic studies confirm that the sample is single phase without any detectable impurity. The frequency-dependent dielectric properties, i.e., dielectric constant (K) and dielectric loss (tanδ), of GMO ceramics sintered at 850°C for 24 hours were studied at room temperature. The sample showed a high K value (~2736) at a frequency of 100 Hz at room temperature.

  20. Study on effect of microparticle's size on cavitation erosion in solid-liquid system

    NASA Astrophysics Data System (ADS)

    Chen, Haosheng; Liu, Shihan; Wang, Jiadao; Chen, Darong

    2007-05-01

    Five solutions containing microparticles of different sizes were tested in a vibration cavitation erosion experiment. After the experiment, the number of erosion pits on the sample surfaces, the free radicals HO• in the solutions, and the mass loss all showed that cavitation erosion strength is strongly related to particle size, with 500 nm particles causing more severe cavitation erosion than smaller or larger particles. A model is presented to explain this result, considering both nucleation and bubble-particle collision effects. Particles of a proper size increase the number of heterogeneous nucleation events and at the same time reduce the number of bubble-particle combinations, which results in more free bubbles in the solution and hence stronger cavitation erosion.

  1. Thermal decomposition of wood: influence of wood components and cellulose crystallite size.

    PubMed

    Poletto, Matheus; Zattera, Ademir J; Forte, Maria M C; Santana, Ruth M C

    2012-04-01

    The influence of wood components and cellulose crystallinity on the thermal degradation behavior of different wood species has been investigated using thermogravimetry, chemical analysis and X-ray diffraction. Four wood samples, Pinus elliottii (PIE), Eucalyptus grandis (EUG), Mezilaurus itauba (ITA) and Dipteryx odorata (DIP) were used in this study. The results showed that higher extractives contents associated with lower crystallinity and lower cellulose crystallite size can accelerate the degradation process and reduce the wood thermal stability. On the other hand, the thermal decomposition of wood shifted to higher temperatures with increasing wood cellulose crystallinity and crystallite size. These results indicated that the cellulose crystallite size affects the thermal degradation temperature of wood species. Copyright © 2012. Published by Elsevier Ltd.

  2. Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?

    PubMed

    Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve

    2016-03-01

    Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored whether compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying amounts of sample reduction (average 34-72%), but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
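
    One of the recommended schemes is easy to express in code. The sketch below composites fish into 5 cm length bins, assuming equal-mass pooling so that a composite's concentration is the mean of its members; the column names and data values are illustrative, not from the study.

    import pandas as pd

    fish = pd.DataFrame({
        "length_cm": [18.2, 21.5, 23.1, 27.9, 30.4, 31.2, 34.8, 35.1],
        "hg_ppm":    [0.11, 0.15, 0.18, 0.25, 0.31, 0.33, 0.40, 0.42],
    })

    fish["size_bin"] = (fish["length_cm"] // 5) * 5  # 5 cm bins
    composites = fish.groupby("size_bin").agg(
        n_fish=("hg_ppm", "size"),
        composite_hg=("hg_ppm", "mean"),  # equal-mass pooling -> mean of parts
    )
    print(composites)  # one analysis per bin instead of one per fish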

  3. Public Acceptability in the UK and USA of Nudging to Reduce Obesity: The Example of Reducing Sugar-Sweetened Beverages Consumption

    PubMed Central

    Petrescu, Dragos C.; Hollands, Gareth J.; Couturier, Dominique-Laurent; Ng, Yin-Lam; Marteau, Theresa M.

    2016-01-01

    Background “Nudging”—modifying environments to change people’s behavior, often without their conscious awareness—can improve health, but public acceptability of nudging is largely unknown. Methods We compared acceptability, in the United Kingdom (UK) and the United States of America (USA), of government interventions to reduce consumption of sugar-sweetened beverages. Three nudge interventions were assessed: i. reducing portion Size, ii. changing the Shape of the drink containers, iii. changing their shelf Location; alongside two traditional interventions: iv. Taxation and v. Education. We also tested the hypothesis that describing interventions as working through non-conscious processes decreases their acceptability. Predictors of acceptability, including perceived intervention effectiveness, were also assessed. Participants (n = 1093 UK and n = 1082 USA) received a description of each of the five interventions which varied, by randomisation, in how the interventions were said to affect behaviour: (a) via conscious processes; (b) via non-conscious processes; or (c) no process stated. Acceptability was derived from responses to three items. Results Levels of acceptability for four of the five interventions did not differ significantly between the UK and US samples; reducing portion size was less accepted by the US sample. Within each country, Education was rated as most acceptable and Taxation the least, with the three nudge-type interventions rated between these. There was no evidence to support the study hypothesis: i.e. stating that interventions worked via non-conscious processes did not decrease their acceptability in either the UK or US samples. Perceived effectiveness was the strongest predictor of acceptability for all interventions across the two samples. Conclusion Nudge interventions to reduce consumption of sugar-sweetened beverages seem similarly acceptable in the UK and USA, being more acceptable than taxation, but less acceptable than education. Contrary to prediction, we found no evidence that highlighting the non-conscious processes by which nudge interventions may work decreases their acceptability. However, highlighting the effectiveness of all interventions has the potential to increase their acceptability. PMID:27276222

  4. Job strain and shift work influences on biomarkers and subclinical heart disease indicators: a pilot study.

    PubMed

    Wong, Imelda S; Ostry, Aleck S; Demers, Paul A; Davies, Hugh W

    2012-01-01

    This pilot study is one of the first to examine the impact of job strain and shift work on both the autonomic nervous system (ANS) and the hypothalamic-pituitary-adrenal (HPA) axis using two salivary stress biomarkers and two subclinical heart disease indicators. This study also tested the feasibility of a rigorous biological sampling protocol in a busy workplace setting. Paramedics (n = 21) self-collected five salivary samples over one rest day and two workdays. Samples were analyzed for α-amylase and cortisol diurnal slopes and daily production. Heart rate variability (HRV) was logged over two workdays with Polar RS800 heart rate monitors. Endothelial functioning was measured using fingertip peripheral arterial tonometry. Job strain was ascertained using a paramedic-specific survey. The effects of job strain and shift work were examined by comparing paramedic types (dispatchers vs. ambulance attendants) and shift types (daytime vs. rotating day/night). Over 90% of all expected samples were collected and fell within expected normal ranges. Workday samples were significantly different from rest day samples. Dispatchers reported higher job strain than ambulance paramedics and exhibited reduced daily α-amylase production, elevated daily cortisol production, and reduced endothelial function. In comparison with daytime-only workers, rotating shift workers reported higher job strain and exhibited flatter α-amylase and cortisol diurnal slopes, reduced daily α-amylase production, elevated daily cortisol production, and reduced HRV and endothelial functioning. Although the differences in these group comparisons were not statistically significant, the consistency of the overall trend in subjective and objective measures suggests that exposure to work stressors may lead to dysregulation in neuroendocrine activity and, over the long term, to early signs of heart disease. The results suggest that further study is warranted in this population. Power calculations based on effect sizes in the shift-type comparison suggest that a study of n = 250 may yield significant differences at p = 0.05. High compliance among paramedics with the intensive protocol suggests that this study will be feasible in a larger population.

  5. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms have some disadvantages, such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
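
    The partitioning idea can be illustrated without the adaptive part. The sketch below implements uniformly partitioned overlap-save filtering, the fixed-filter building block on which partitioned-block frequency-domain algorithms of this family rest; the LMS weight update and secondary-path filtering of FPBFXLMS are deliberately omitted, and the block size is an arbitrary choice.

    import numpy as np

    def partitioned_filter(x, h, B=64):
        """Filter x with impulse response h split into partitions of size B."""
        n = len(x)
        P = -(-len(h) // B)                                # number of partitions (ceil)
        h = np.pad(h, (0, P * B - len(h)))
        H = np.fft.rfft(h.reshape(P, B), n=2 * B, axis=1)  # per-partition spectra
        fdl = np.zeros_like(H)                             # frequency-domain delay line
        buf = np.zeros(2 * B)                              # last 2B input samples
        x = np.pad(x, (0, (-n) % B))
        y = np.empty_like(x)
        for k in range(0, len(x), B):
            buf = np.concatenate([buf[B:], x[k:k + B]])
            fdl = np.roll(fdl, 1, axis=0)                  # age spectra by one block
            fdl[0] = np.fft.rfft(buf)
            Y = (H * fdl).sum(axis=0)                      # accumulate all partitions
            y[k:k + B] = np.fft.irfft(Y)[B:]               # overlap-save: keep last B
        return y[:n]

    rng = np.random.default_rng(0)
    x, h = rng.standard_normal(1024), rng.standard_normal(300)
    assert np.allclose(partitioned_filter(x, h), np.convolve(x, h)[:1024])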

  6. High-concentration zeta potential measurements using light-scattering techniques

    PubMed Central

    Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew

    2010-01-01

    Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896

  7. sGD: software for estimating spatially explicit indices of genetic diversity.

    PubMed

    Shirk, A J; Cushman, S A

    2011-09-01

    Anthropogenic landscape changes have greatly reduced the population size, range and migration rates of many terrestrial species. The small local effective population size of remnant populations favours loss of genetic diversity leading to reduced fitness and adaptive potential, and thus ultimately greater extinction risk. Accurately quantifying genetic diversity is therefore crucial to assessing the viability of small populations. Diversity indices are typically calculated from the multilocus genotypes of all individuals sampled within discretely defined habitat patches or larger regional extents. Importantly, discrete population approaches do not capture the clinal nature of populations genetically isolated by distance or landscape resistance. Here, we introduce spatial Genetic Diversity (sGD), a new spatially explicit tool to estimate genetic diversity based on grouping individuals into potentially overlapping genetic neighbourhoods that match the population structure, whether discrete or clinal. We compared the estimates and patterns of genetic diversity using patch or regional sampling and sGD on both simulated and empirical populations. When the population did not meet the assumptions of an island model, we found that patch and regional sampling generally overestimated local heterozygosity, inbreeding and allelic diversity. Moreover, sGD revealed fine-scale spatial heterogeneity in genetic diversity that was not evident with patch or regional sampling. These advantages should provide a more robust means to evaluate the potential for genetic factors to influence the viability of clinal populations and guide appropriate conservation plans. © 2011 Blackwell Publishing Ltd.

  8. Characterization and Beneficiation Studies of a Low Grade Bauxite Ore

    NASA Astrophysics Data System (ADS)

    Rao, D. S.; Das, B.

    2014-10-01

    A low grade bauxite sample from central India was thoroughly characterized with the help of a stereomicroscope, a reflected light microscope and an electron microscope using QEMSCAN. A few hand-picked samples were collected from different places in the mine and were subjected to geochemical characterization studies. The geochemical studies indicated that most of the samples contain high silica and low alumina, except a few which are high grade. Mineralogically the samples consist of bauxite minerals (gibbsite and boehmite), ferruginous mineral phases (goethite and hematite), clay and silicate (quartz), and titanium-bearing minerals like rutile and ilmenite. The majority of the gibbsite, boehmite and gibbsitic oolites contain clay, quartz and iron- and titanium-bearing mineral phases as inclusions. The sample on average contains 39.1% Al2O3, 12.3% SiO2, and 20.08% Fe2O3. Beneficiation techniques like size classification, sorting, scrubbing, hydrocycloning and magnetic separation were employed to reduce the silica content to a level suitable for the Bayer process. The studies indicated that a product of 50% by weight, with 41% Al2O3 and less than 5% SiO2, could be achieved. The finer-sized sample after physical beneficiation still contains high silica due to complex mineralogical associations.

  9. The Universal Multizone Crystallizator (UMC) Furnace: An International Cooperative Agreement

    NASA Technical Reports Server (NTRS)

    Watring, D. A.; Su, C.-H.; Gillies, D.; Roosz, T.; Babcsan, N.

    1996-01-01

    The Universal Multizone Crystallizator (UMC) is a special apparatus for crystal growth under terrestrial and microgravity conditions. The use of twenty-five zones allows the UMC to be used for several normal-freezing growth techniques. The thermal profile is electronically translated along the stationary sample by systematically reducing the power to the control zones. Elimination of mechanical translation devices increases the system's reliability while simultaneously reducing its size and weight. This paper addresses the UMC furnace design, the sample cartridge, and the typical thermal profiles and corresponding power requirements for the dynamic gradient freeze crystal growth technique. Results from physical vapor transport and traveling heater method crystal growth experiments are also discussed.

  10. Tooth Size Variation Related to Age in Amboseli Baboons

    PubMed Central

    Galbany, Jordi; Dotras, Laia; Alberts, Susan C.; Pérez-Pérez, Alejandro

    2011-01-01

    We measured molar size in a single population of wild baboons from Amboseli (Kenya), both females (n = 57) and males (n = 50). All the females were of known age; the males represented a mix of known-age individuals (n = 31) and individuals with ages estimated to within 2 years (n = 19). The results showed a significant reduction in the mesiodistal length of teeth in both sexes as a function of age. Overall patterns of age-related change in tooth size did not change whether we included or excluded the individuals of estimated age, but patterns of statistical significance changed as a result of changed sample sizes. Our results demonstrate that tooth length is directly related to age due to interproximal wearing caused by M2 and M3 compression loads. Dental studies in primates, including both fossil and extant species, are mostly based on specimens obtained from osteological collections of varying origins, for which the age at death of each individual in the sample is not known. Researchers should take into account the phenomenon of interproximal attrition leading to reduced tooth size when measuring tooth length for odontometric purposes.

  11. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

    Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsule benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point, conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup model of an ANCOVA analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned at the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and increase the probability of trial success while ensuring the integrity of the study.
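
    The nuisance-parameter logic can be made concrete with textbook machinery. The sketch below re-estimates a two-group sample size when interim data revise the outcome standard deviation; it is generic, not the PRIMO trial's group-sequential procedure, and the SD and effect values are invented for illustration.

    from scipy.stats import norm

    def per_group_n(sd, delta, alpha=0.05, power=0.90):
        """Two-sample z-approximation size per group, two-sided alpha."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return round(2 * (z * sd / delta) ** 2)

    planned = per_group_n(sd=15.0, delta=6.0)  # design-stage assumption
    updated = per_group_n(sd=19.0, delta=6.0)  # interim SD came out larger
    print(planned, updated)                    # 131 -> 211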

  12. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can be tested either with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of two drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance for only one drug even if both are equally effective, and of missing important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
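
    A back-of-envelope check on the assumed effect is possible with Schoenfeld's approximation, under which the number of deaths needed in a 1:1 comparison is d = 4*(z_{1-alpha/2} + z_{power})^2 / ln(HR)^2. This sketch is generic survival-trial arithmetic, not the cited trial's actual calculation, which must also convert required deaths into enrolled patients.

    from math import ceil, log
    from scipy.stats import norm

    def events_needed(hr, alpha=0.05, power=0.80):
        """Deaths required to detect hazard ratio hr (Schoenfeld approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(4 * z ** 2 / log(hr) ** 2)

    print(events_needed(0.70))  # ~247 deaths for a 30% hazard reduction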

  13. Size effects on the magnetic properties of LaCoO3 nanoparticles

    NASA Astrophysics Data System (ADS)

    Wei, Q.; Zhang, T.; Wang, X. P.; Fang, Q. F.

    2012-02-01

    The magnetic properties of LaCoO3 nanoparticles prepared by a sol-gel method with average particle size (D) ranging from 20 to 500 nm are investigated. All samples exhibit an obvious ferromagnetic transition. With decreasing particle size from 500 to 120 nm, the transition temperature Tc decreases slightly from 85 K; however, Tc decreases dramatically when D ≤ 85 nm. The low-field magnetic moment at 10 K decreases with reduction of particle size, while the high-field magnetization exhibits the converse behavior, which differs from previous reports. The coercivity Hc decreases as the particle size is reduced. Unlike other nanosystems, no exchange bias effect is observed in nanosized LaCoO3 particles. These interesting results arise from the surface effect induced by the reduced particle size and from the structure change in LaCoO3 nanoparticles.

  14. Morphological diversity of Trichuris spp. eggs observed during an anthelminthic drug trial in Yunnan, China, and relative performance of parasitologic diagnostic tools.

    PubMed

    Steinmann, Peter; Rinaldi, Laura; Cringoli, Giuseppe; Du, Zun-Wei; Marti, Hanspeter; Jiang, Jin-Yong; Zhou, Hui; Zhou, Xiao-Nong; Utzinger, Jürg

    2015-01-01

    The presence of large Trichuris spp. eggs in human faecal samples is occasionally reported. Such eggs have been described as variant Trichuris trichiura or Trichuris vulpis eggs. Within the framework of a randomised controlled trial, faecal samples collected from 115 Bulang individuals from Yunnan, People's Republic of China, were subjected to the Kato-Katz technique (fresh stool samples) and the FLOTAC and ether-concentration techniques (sodium acetate-acetic acid-formalin (SAF)-fixed stool samples). Large Trichuris spp. eggs were noted in faecal samples with a prevalence of 6.1% before and 21.7% after anthelminthic drug administration. The observed prevalence of standard-sized T. trichiura eggs was reduced from 93.0% to 87.0% after treatment. Considerably more cases of large Trichuris spp. eggs, and slightly more cases with normal-sized T. trichiura eggs, were identified by FLOTAC compared to the ether-concentration technique. No large Trichuris spp. eggs were observed on the Kato-Katz thick smears. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Invasive Australian Acacia seed banks: Size and relationship with stem diameter in the presence of gall-forming biological control agents.

    PubMed

    Strydom, Matthys; Veldtman, Ruan; Ngwenya, Mzabalazo Z; Esler, Karen J

    2017-01-01

    Australian Acacia species are invasive in many parts of the world. Despite significant mechanical and biological efforts to control their invasion and spread, soil-stored seed banks prevent their effective and sustained removal. In response, South Africa has focused strongly on employing seed-reducing biological control agents against Australian Acacia invasion, a programme that is considered successful. To provide a predictive understanding for their management, the seed banks of four invasive Australian acacia species (Acacia longifolia, A. mearnsii, A. pycnantha and A. saligna) were studied in the Western Cape of South Africa. Across six to seven sites for each species, seed bank sizes were estimated from dense, monospecific stands by collecting 30 litter and soil samples. The average estimated seed bank size was large (1017 to 17261 seeds m-2), as was the annual input into the seed bank, suggesting that these seed banks are not residual but are replenished annually. A clear relationship between seed bank size and stem diameter was established, indicating that mechanical clearing should be conducted shortly after fire-stimulated recruitment events or within old populations, when seed banks are small. In dense, monospecific stands, seed-feeding biological control agents are not effective in reducing seed bank size.

  16. The relevance of grain dissection for grain size reduction in polar ice: insights from numerical models and ice core microstructure analysis

    NASA Astrophysics Data System (ADS)

    Steinbach, Florian; Kuiper, Ernst-Jan N.; Eichler, Jan; Bons, Paul D.; Drury, Martyn R.; Griera, Albert; Pennock, Gill M.; Weikusat, Ilka

    2017-09-01

    The flow of ice depends on the properties of the aggregate of individual ice crystals, such as grain size or lattice orientation distributions. Therefore, an understanding of the processes controlling ice micro-dynamics is needed to ultimately develop a physically based macroscopic ice flow law. We investigated the relevance of the process of grain dissection as a grain-size-modifying process in natural ice. For that purpose, we performed numerical multi-process microstructure modelling and analysed microstructure and crystallographic orientation maps from natural deep ice-core samples from the North Greenland Eemian Ice Drilling (NEEM) project. Full crystallographic orientations measured by electron backscatter diffraction (EBSD) were used together with c-axis orientations obtained with an optical technique (Fabric Analyser). Grain dissection is a feature of strain-induced grain boundary migration. During grain dissection, grain boundaries bulge into a neighbouring grain in an area of high dislocation energy and merge with the opposite grain boundary. This splits the high-dislocation-energy grain into two parts, effectively decreasing the local grain size. Currently, grain size reduction in ice is thought to be achieved either by the progressive transformation of dislocation walls into new high-angle grain boundaries, called subgrain rotation or polygonisation, or by bulging nucleation that is assisted by subgrain rotation. Both our time-resolved numerical modelling and the NEEM ice core samples show that grain dissection is a common mechanism during ice deformation and can provide an efficient process to reduce grain sizes and counteract dynamic grain growth, in addition to polygonisation or bulging nucleation. Thus, our results show that strain-induced boundary migration alone, in the absence of subgrain rotation, can reduce grain sizes in polar ice, in particular if strain-energy gradients are high. We describe the microstructural characteristics that can be used to identify grain dissection in natural microstructures.

  17. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating the assay process using ACEK.
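
    As a stand-in for the paper's GDT, whose exact construction the abstract does not give, the sketch below shows the simpler idea of per-class Gaussian likelihood scoring, which remains usable with as few as two training samples per class; the data, feature count and variance floor are all illustrative.

    import numpy as np
    from scipy.stats import norm

    def fit(features, labels):
        """Per class and per feature, estimate a mean and a floored std."""
        model = {}
        for c in np.unique(labels):
            x = features[labels == c]
            model[c] = (x.mean(axis=0), x.std(axis=0) + 1e-6)
        return model

    def predict(model, x):
        """Pick the class maximising the summed per-feature log-likelihood."""
        scores = {c: norm.logpdf(x, mu, sd).sum() for c, (mu, sd) in model.items()}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(c, 0.3, size=(2, 4)) for c in range(5)])
    y = np.repeat(np.arange(5), 2)  # five classes, two samples each
    print(predict(fit(X, y), rng.normal(3, 0.3, size=4)))  # expect class 3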

  18. Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples

    NASA Technical Reports Server (NTRS)

    Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.

    2004-01-01

    Experimental observations are presented of the effect of flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity and low-velocity, variable oxygen concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch-mass reasons. The feasible sample size is too small for smolder to self-propagate because of heat losses to the surrounding environment. Thus, smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples placed vertically in a wind tunnel. Three of the sample's lateral sides are maintained at elevated temperature, and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity, increasing its oxygen concentration, and/or increasing the radiant flux enhances the transition to flaming and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that the transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.

  19. Size effects on electrical properties of chemically grown zinc oxide nanoparticles

    NASA Astrophysics Data System (ADS)

    Rathod, K. N.; Joshi, Zalak; Dhruv, Davit; Gadani, Keval; Boricha, Hetal; Joshi, A. D.; Solanki, P. S.; Shah, N. A.

    2018-03-01

    In the present article, we study the electrical properties of ZnO nanoparticles grown by a cost-effective sol–gel technique. Structural studies performed by x-ray diffraction (XRD) revealed a hexagonal unit cell phase with no observed impurities. Transmission electron microscopy (TEM) and particle size analysis showed an increased average particle size, due to agglomeration, with higher sintering. The dielectric constant (ε′) decreases with increasing frequency because of the inability of the dipoles to follow the field at higher frequencies. With higher sintering, the dielectric constant is reduced, owing to the increased formation of oxygen vacancy defects. The universal dielectric response (UDR) was verified by straight-line fitting of log(fε′) versus log(f) plots. All samples exhibit UDR behavior, with a greater contribution from crystal cores at higher sintering. Impedance studies suggest an important role of boundary density, while Cole–Cole (Z″ versus Z′) plots have been analysed for the relaxation behavior of the samples. The average normalized change (ANC) in impedance has been studied for all the samples, wherein boundaries play an important role. The frequency-dependent electrical conductivity has been understood on the basis of Jonscher's universal power law. The Jonscher's law fits suggest that charge carrier conduction proceeds via the correlated barrier hopping (CBH) mechanism in the lower-temperature sintered sample, while for the higher-temperature sintered ZnO samples a Maxwell–Wagner (M–W) relaxation process has been determined.
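
    Jonscher's universal power law as invoked above has the form sigma(omega) = sigma_dc + A*omega^s with 0 < s < 1, and fitting it is a small least-squares problem. The data in this sketch are synthetic and the parameter values illustrative, standing in for a measured conductivity spectrum.

    import numpy as np
    from scipy.optimize import curve_fit

    def jonscher(omega, sigma_dc, A, s):
        return sigma_dc + A * omega ** s

    omega = np.logspace(2, 7, 60)              # angular frequency [rad/s]
    sigma = jonscher(omega, 1e-6, 2e-11, 0.8)  # synthetic "measurement"
    sigma *= 1 + 0.03 * np.random.default_rng(3).standard_normal(60)

    popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-6, 1e-11, 0.7))
    print(popt)  # recovers sigma_dc, A and the exponent s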

  20. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection and can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance, not included in the technical report, for selecting between the original and alternative sampling plans.

  1. The Peroxidation of Leukocytes Index Ratio Reveals the Prooxidant Effect of Green Tea Extract

    PubMed Central

    Manafikhi, Husseen; Raguzzini, Anna; Longhitano, Yaroslava; Reggi, Raffaella; Zanza, Christian

    2016-01-01

    Although tea increases plasma nonenzymatic antioxidant capacity, the European Food Safety Authority (EFSA) denied claims relating tea to protection from oxidative damage. Furthermore, the Dietary Supplement Information Expert Committee (DSI EC) expressed some doubts on the safety of green tea extract (GTE). We performed a pilot study in order to evaluate the effect of a single dose of two capsules of a GTE supplement (200 mg × 2) on the peroxidation of leukocytes index ratio (PLIR), in relation to uric acid (UA) and ferric reducing antioxidant potential (FRAP), as well as the sample size needed to reach statistical significance. GTE induced a prooxidant effect on leukocytes, whereas FRAP did not change, in agreement with the EFSA and DSI EC conclusions. In addition, our results confirm the primary role of UA in the antioxidant defences. The ratio-based calculation of the PLIR reduced the sample size needed to reach statistical significance, compared to the resistance to an exogenous oxidative stress and to the functional capacity of the oxidative burst. Therefore, PLIR could be a sensitive marker of redox status. PMID:28101300

  2. The Peroxidation of Leukocytes Index Ratio Reveals the Prooxidant Effect of Green Tea Extract.

    PubMed

    Peluso, Ilaria; Manafikhi, Husseen; Raguzzini, Anna; Longhitano, Yaroslava; Reggi, Raffaella; Zanza, Christian; Palmery, Maura

    2016-01-01

    Although tea increases plasma nonenzymatic antioxidant capacity, the European Food Safety Authority (EFSA) denied claims relating tea to protection from oxidative damage. Furthermore, the Dietary Supplement Information Expert Committee (DSI EC) expressed some doubts on the safety of green tea extract (GTE). We performed a pilot study in order to evaluate the effect of a single dose of two capsules of a GTE supplement (200 mg × 2) on the peroxidation of leukocytes index ratio (PLIR), in relation to uric acid (UA) and ferric reducing antioxidant potential (FRAP), as well as the sample size needed to reach statistical significance. GTE induced a prooxidant effect on leukocytes, whereas FRAP did not change, in agreement with the EFSA and DSI EC conclusions. In addition, our results confirm the primary role of UA in the antioxidant defences. The ratio-based calculation of the PLIR reduced the sample size needed to reach statistical significance, compared to the resistance to an exogenous oxidative stress and to the functional capacity of the oxidative burst. Therefore, PLIR could be a sensitive marker of redox status.

  3. Suspended sediments from upstream tributaries as the source of downstream river sites

    NASA Astrophysics Data System (ADS)

    Haddadchi, Arman; Olley, Jon

    2014-05-01

    Understanding the efficiency with which sediment eroded from different sources is transported to the catchment outlet is a key knowledge gap that is critical to our ability to accurately target and prioritise management actions to reduce sediment delivery. Sediment fingerprinting has proven to be an efficient approach to determining the sources of sediment. This study examines the suspended sediment sources in the Emu Creek catchment, south-eastern Queensland, Australia. In addition to collecting suspended sediments from stream sites downstream of tributary confluences and at the outlet of the catchment, time-integrated suspended sediment samples from the upper tributaries were used as the sediment sources, instead of hillslope and channel bank samples. In total, 35 time-integrated samplers were used to compute the contribution of suspended sediments from the different upstream waterways to the downstream sediment sites. Three size fractions (fine sand, 63-210 μm; silt, 10-63 μm; and fine silt and clay, <10 μm) were used to assess the effect of particle size on the contribution of upstream sediments to the load below river confluences, and the samples were then analysed by ICP-MS and ICP-OES to obtain 41 sediment fingerprints. According to the results of a Student's t-distribution mixing model, small creeks in the middle and lower parts of the catchment were the major sources in the different size fractions, especially in the silt (10-63 μm) samples. Gowrie Creek, which covers the southern upstream part of the catchment, was a major contributor at the outlet of the catchment in the finest size fraction (<10 μm). Large differences between the contributions of suspended sediments from the upper tributaries in the different size fractions necessitate the selection of an appropriate size fraction for sediment tracing in the catchment, and also indicate a major effect of particle size on the movement and deposition of sediments.

  4. Bayesian selective response-adaptive design using the historical control.

    PubMed

    Kim, Mi-Ok; Harun, Nusrat; Liu, Chunyan; Khoury, Jane C; Broderick, Joseph P

    2018-06-13

    High-quality historical control data, if incorporated, may reduce sample size, trial cost, and duration. A too optimistic use of the data, however, may result in bias under prior-data conflict. Motivated by well-publicized two-arm comparative trials in stroke, we propose a Bayesian design that both adaptively incorporates historical control data and selectively adapts the treatment allocation ratios within an ongoing trial in response to the relative treatment effects. The proposed design differs from existing designs that borrow from historical controls: rather than blindly reducing the number of subjects assigned to the control arm, it does so adaptively, and only if evaluation of the cumulated current trial data combined with the historical control suggests the superiority of the intervention arm. We used the effective historical sample size approach to quantify the information borrowed on the control arm and modified the treatment allocation rules of the doubly adaptive biased coin design to incorporate this quantity. The modified allocation rules were then implemented under the Bayesian framework with commensurate priors addressing prior-data conflict. Trials were also more frequently concluded earlier in line with the underlying truth, reducing trial cost and duration, and yielded parameter estimates with smaller standard errors. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.

  5. Size-induced variations in bulk/surface structures and their impact on photoluminescence properties of GdVO4:Eu3+ nanoparticles.

    PubMed

    Yang, Liusai; Li, Liping; Zhao, Minglei; Li, Guangshe

    2012-07-28

    This work explores the size-induced lattice modification and its relevance to the photoluminescence properties of tetragonal zircon-type GdVO(4):Eu(3+) nanostructures. GdVO(4):Eu(3+) nanoparticles with crystallite sizes ranging from 14.4 to 24.7 nm were synthesized by a hydrothermal method using sodium citrate as a capping agent. Regardless of the reaction temperature, all samples retained an ellipsoidal-like morphology. Nevertheless, as the crystallite size is reduced, a tensile strain and lattice distortion appear, accompanied by a lattice expansion and a decreased symmetry of the structural units. These lattice modifications could be associated with changes in the interior chemical bonding due to interactions of surface defect dipoles, which impose an increasing negative pressure as the crystallite size is reduced. Furthermore, crystallite size reduction also led to a significant increase in the amounts of surface hydroxyl groups and citric species, as well as in the concentration of surface Eu(3+) ions. When Eu(3+) was used as a structural probe, it was found that the asymmetric ratio (I(02)/I(01)) of Eu(3+) gradually declined, showing a remarkable decrease in color chromaticity as the crystallite size is reduced, which can be attributed to the change in the local environment of the Eu(3+) ions from the interior to the surface of the nanoparticles.

  6. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor provides a more cost-effective and faster-to-market option than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors can be operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.

  7. Effects of niobium additions on the structure, depth, and austenite grain size of the case of carburized 0.07% C steels

    NASA Astrophysics Data System (ADS)

    Islam, M. A.; Bepari, M. M. A.

    1996-10-01

    Carbon (0.07%) steel samples containing about 0.04% Nb, singly and in combination with nitrogen, were carburized in a natural Titas gas atmosphere at a temperature of 1223 K (950 °C) and a pressure of about 0.10 MPa for 1/2 to 4 h, followed by slow cooling in the furnace. Their microstructures were studied by optical microscopy. The austenite grain size of the case and the case depths were determined on baseline samples of low-carbon steels and also on niobium and (Nb + N) microalloyed steel samples. It was found that, when compared to the baseline steel, niobium alone or in combination with nitrogen decreased the thickness of the cementite network near the surface of the carburized case of the steels. However, niobium in combination with nitrogen was more effective than niobium alone in reducing the thickness of the cementite network. Niobium with or without nitrogen inhibited the formation of Widmanstätten cementite plates at grain boundaries and within the grains near the surface in the hypereutectoid zone of the case. It was also revealed that, when compared to the baseline steel, niobium decreased the case depth of the carburized steels, but that niobium with nitrogen was more effective than niobium alone in reducing the case depth. Niobium as niobium carbide (NbC), and niobium in the presence of nitrogen as niobium carbonitride [Nb(C,N)] particles, refined the austenite grain size of the carburized case, but Nb(C,N) was more effective than NbC in inhibiting austenite grain growth.

  8. Improved Method for Determination of Respiring Individual Microorganisms in Natural Waters

    PubMed Central

    Tabor, Paul S.; Neihof, Rex A.

    1982-01-01

    A method is reported that combines the microscopic determinations of specific, individual, respiring microorganisms by the detection of electron transport system activity and the total number of organisms of an estuarine population by epifluorescence microscopy. An active cellular electron transport system specifically reduces 2-(p-iodophenyl)-3-(p-nitrophenyl)-5-phenyl tetrazolium chloride (INT) to INT-formazan, which is recognized as opaque intracellular deposits in microorganisms stained with acridine orange. In a comparison of previously described sample preparation techniques, a loss of >70% of the counts of INT-reducing microorganisms was shown to be due to the dissolution of INT-formazan deposits by immersion oil (used in microscopy). In addition, significantly fewer fluorescing microorganisms and INT-formazan deposits, both ≤0.2 μm in size, were found for sample preparations that included a Nuclepore filter. Visual clarity was enhanced, and significantly greater direct counts and counts of INT-reducing microorganisms were recognized by transferring microorganisms from a filter to a gelatin film on a cover glass, followed by coating the sample with additional gelatin to produce a transparent matrix. With this method, the number of INT-reducing microorganisms determined for a Chesapeake Bay water sample was 2- to 10-fold greater than the number of respiring organisms reported previously for marine or freshwater samples. INT-reducing microorganisms constituted 61% of the total direct counts determined for a Chesapeake Bay water sample. This is the highest percentage of metabolically active microorganisms of any aquatic population reported using a method which determines both total counts and specific activity. PMID:16346025

  9. Improved method for determination of respiring individual microorganisms in natural waters.

    PubMed

    Tabor, P S; Neihof, R A

    1982-06-01

    A method is reported that combines the microscopic determinations of specific, individual, respiring microorganisms by the detection of electron transport system activity and the total number of organisms of an estuarine population by epifluorescence microscopy. An active cellular electron transport system specifically reduces 2-(p-iodophenyl)-3-(p-nitrophenyl)-5-phenyl tetrazolium chloride (INT) to INT-formazan, which is recognized as opaque intracellular deposits in microorganisms stained with acridine orange. In a comparison of previously described sample preparation techniques, a loss of >70% of the counts of INT-reducing microorganisms was shown to be due to the dissolution of INT-formazan deposits by immersion oil (used in microscopy). In addition, significantly fewer fluorescing microorganisms and INT-formazan deposits, both ≤0.2 μm in size, were found for sample preparations that included a Nuclepore filter.

  10. Proteoglycan depletion and size reduction in lesions of early grade chondromalacia of the patella.

    PubMed Central

    Väätäinen, U; Häkkinen, T; Kiviranta, I; Jaroma, H; Inkinen, R; Tammi, M

    1995-01-01

    OBJECTIVE--To determine the content and molecular size of proteoglycans (PGs) in patellar chondromalacia (CM) and control cartilage as a first step in investigating the role of matrix alterations in the pathogenesis of this disease. METHODS--Chondromalacia tissue from 10 patients was removed with a surgical knife. Using identical techniques, apparently healthy cartilage from the same site was obtained from 10 age-matched cadavers (mean age 31 years in both groups). Additional pathological cartilage was collected from 67 patients with grades II-IV CM (classified according to Outerbridge) using a motorised shaver under arthroscopic control. The shaved cartilage chips were collected with a dense net from the irrigation fluid of the shaver. The content of tissue PGs was determined by Safranin O precipitation or uronic acid content, and the molecular size by mobility on agarose gel electrophoresis. RESULTS--The mean PG content of the CM tissue samples removed with a knife was dramatically reduced, being only 15% of that in controls. The cartilage chips collected from shaving operations of grades II, III, and IV CM showed a decreasing PG content: 9%, 5%, and 1% of controls, respectively. Electrophoretic analysis of PGs extracted with guanidinium chloride from the shaved tissue samples suggested a significantly reduced size of aggrecans in the mild (grade II) lesions. CONCLUSION--These data show that there is already a dramatic and progressive depletion of PGs in CM grade II lesions. This explains the softening of cartilage, a typical finding in the arthroscopic examination of CM. The PG size reduction observed in grade II implicates proteolytic attack as a factor in the pathogenesis of CM. PMID:7492223

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aznar, Alexandra; Day, Megan; Doris, Elizabeth

    The report analyzes and presents information learned from a sample of 20 cities across the United States, from New York City to Park City, Utah, spanning a diverse range of population sizes, utility types, regions, annual greenhouse gas reduction targets, vehicle use, and median household incomes. The report compares climate, sustainability, and energy plans to better understand where cities are taking energy-related actions and how they are measuring impacts. Some common energy-related goals focus on reducing city-wide carbon emissions, improving energy efficiency across sectors, increasing renewable energy, and increasing biking and walking.

  12. X-ray diffraction analysis of Nb-3Ge and NbGe alloys

    NASA Technical Reports Server (NTRS)

    Davis, J. H.; House, K. W.

    1983-01-01

    Of all the A-15 samples of NbGe alloy examined, DT 094 is unique in that it was at least 99% pure A-15 phase. Also, its diffraction peaks were noisy, as if there were about a one percent compositional variation in this phase. DT 094, however, was only a large fragment of the drop-tube drop, and thus its small sample size may have reduced the intensity, enhancing fluctuations enough to explain some of the loss of peak resolution.

  13. Ceramic Technology for Advanced Heat Engines Project Semiannual Progress Report for Period October 1985 Through March 1986

    DTIC Science & Technology

    1986-08-01

    materials (2.2 w/o and 3.0 w/o MgO). The other two batches (2.8 w/o and 3.1 w/o MgO), of higher purity, were made using E-10 zirconia powder from... CID) powders. Two methods have been used for the coprecipitation of doped zirconia powders from solutions of chemical precursors. (4) Method I, for... of powder, approximate sample size 3.2 kg (6.4 kg for zirconia powder); 3. Random selection of sample; 4. Partial drying of sample to reduce caking

  14. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.

  15. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
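
    As a rough illustration of the scaling idea (not the adjustment function used in the study, which concerns measurement-model fit), the sketch below compares a Pearson goodness-of-fit chi-square computed on an actual random subsample with a value obtained by linearly rescaling the full-sample statistic; the distributions and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: slight misfit between true and hypothesized proportions
p_true = np.array([0.26, 0.24, 0.25, 0.25])   # data-generating proportions
p_hyp  = np.array([0.25, 0.25, 0.25, 0.25])   # model under test

def pearson_chi2(sample, p0):
    """Pearson goodness-of-fit statistic for a categorical sample."""
    observed = np.bincount(sample, minlength=len(p0))
    expected = len(sample) * p0
    return np.sum((observed - expected) ** 2 / expected)

n_full, n_small = 21000, 5000
full = rng.choice(len(p_true), size=n_full, p=p_true)
chi2_full = pearson_chi2(full, p_hyp)

# Strategy 1: recompute the statistic on an actual random subsample
chi2_subsample = pearson_chi2(rng.choice(full, size=n_small, replace=False), p_hyp)

# Strategy 2: rescale the full-sample statistic in proportion to sample size
chi2_adjusted = chi2_full * (n_small / n_full)

print(f"full sample (n={n_full}):       chi2 = {chi2_full:.1f}")
print(f"random subsample (n={n_small}): chi2 = {chi2_subsample:.1f}")
print(f"size-adjusted value:            chi2 = {chi2_adjusted:.1f}")
```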

  16. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    PubMed Central

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent literature has tried to deal with this via dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical-sampling-based sparse representation. First, we sampled the whole brain’s signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can achieve a ten-fold speed-up in reconstructing concurrent brain networks without losing much information. Experiments on the 1000 Functional Connectomes Project data further demonstrate its effectiveness and superiority. PMID:26646924
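
    A minimal sketch of this sample-then-learn pipeline is given below using scikit-learn's MiniBatchDictionaryLearning on synthetic data; the sampling fraction, component count, and matrix dimensions are placeholders, not the paper's settings, and random sampling stands in for the several sampling schemes the study compares.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Hypothetical stand-in for an fMRI signal matrix: rows = voxel time series
n_voxels, n_timepoints = 20000, 200
X = rng.standard_normal((n_voxels, n_timepoints))

# Step 1: sample a subset of voxel signals (random sampling shown here)
sample_fraction = 0.1
idx = rng.choice(n_voxels, size=int(sample_fraction * n_voxels), replace=False)
X_sampled = X[idx]

# Step 2: learn a dictionary of temporal atoms from the sampled signals only
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                   batch_size=256, random_state=0)
dico.fit(X_sampled)

# Step 3: sparse-code the *whole* brain's signals with the learned dictionary;
# each column of the code matrix maps voxels onto one network component
codes = dico.transform(X)
print(codes.shape)  # (n_voxels, n_components)
```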

  17. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    PubMed Central

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not yet been fully explored due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally guided dictionary learning and sparse coding of the whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve a more than 15-fold speed-up without sacrificing accuracy in identifying task-evoked functional brain networks. PMID:29706880

  18. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    PubMed

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not yet been fully explored due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally guided dictionary learning and sparse coding of the whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve a more than 15-fold speed-up without sacrificing accuracy in identifying task-evoked functional brain networks.

  19. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol, focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and on the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
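
    The precision penalty that within-cluster correlation imposes can be previewed with the standard design-effect formula, deff = 1 + (m - 1)ρ, where m is the number of sample elements per cluster and ρ the intracluster correlation of classification error. The sketch below, with made-up values, converts a clustered sample into its simple-random-sample equivalent size; it illustrates the general principle, not the paper's specific protocol.

```python
def design_effect(m, rho):
    """Variance inflation of a cluster sample relative to simple random sampling."""
    return 1.0 + (m - 1.0) * rho

def effective_sample_size(n_total, m, rho):
    """SRS-equivalent size of a two-stage cluster sample."""
    return n_total / design_effect(m, rho)

# Hypothetical scenario: 1000 reference pixels spread over clusters
n_total = 1000
for m in (5, 25, 100):              # pixels per cluster
    for rho in (0.05, 0.2):         # spatial correlation of classification error
        deff = design_effect(m, rho)
        n_eff = effective_sample_size(n_total, m, rho)
        print(f"m={m:4d} rho={rho:.2f} -> deff={deff:5.2f}, effective n={n_eff:6.1f}")
```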

  20. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    PubMed

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert the particle size distribution (PSD) from diffraction data is presented. By using the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. The ICGA is therefore amended by introducing an iteration step-adjusting parameter and is tested on simulated data and on real samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and that the inverted results are in accord with the real PSD.
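
    The role of a step-adjusting parameter in stabilizing such an inversion against additive noise can be illustrated with a generic projected-gradient scheme (a stand-in for the authors' ICGA, which is not reproduced here); the smooth kernel below is a made-up substitute for an instrument-specific scattering matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: measured signal = A @ psd, with a smooth kernel
# standing in for the true light-scattering matrix.
n_det, n_bins = 64, 40
centers = np.linspace(0, 1, n_bins)
angles = np.linspace(0, 1, n_det)[:, None]
A = np.exp(-((angles - centers) ** 2) / 0.02)

psd_true = np.exp(-((centers - 0.4) ** 2) / 0.01)        # unimodal PSD
data = A @ psd_true + 0.01 * rng.standard_normal(n_det)  # additive noise

# Projected-gradient inversion with a step-adjusting parameter `beta`:
# the step is shrunk whenever the residual fails to decrease, damping the
# noise sensitivity attributed to additive noise in the text.
x = np.ones(n_bins)
step, beta = 1e-2, 0.5
res = np.inf
for _ in range(500):
    grad = A.T @ (A @ x - data)
    x_new = np.clip(x - step * grad, 0.0, None)          # keep PSD nonnegative
    res_new = np.linalg.norm(A @ x_new - data)
    if res_new < res:
        x, res = x_new, res_new
    else:
        step *= beta                                     # adjust the iteration step
print(f"final residual: {res:.4f}")
```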

  1. Fabrication of low thermal expansion SiC/ZrW2O8 porous ceramics

    NASA Astrophysics Data System (ADS)

    Poowancum, A.; Matsumaru, K.; Juárez-Ramírez, I.; Torres-Martínez, L. M.; Fu, Z. Y.; Lee, S. W.; Ishizaki, K.

    2011-03-01

    Low or zero thermal expansion porous ceramics are required for several applications. In this work, near-zero thermal expansion porous ceramics were fabricated using SiC and ZrW2O8 as positive and negative thermal expansion materials, respectively, bonded by soda lime glass. The mixture of SiC, ZrW2O8 and soda lime glass was sintered by Pulsed Electric Current Sintering (PECS, sometimes called Spark Plasma Sintering, SPS) at 700 °C. Sintered samples with ZrW2O8 particle sizes smaller than 25 μm have a high thermal expansion coefficient because ZrW2O8 reacts with the soda lime glass to form Na2ZrW3O12 during the sintering process. The reaction between soda lime glass and ZrW2O8 is reduced by increasing the particle size of ZrW2O8. The sintered sample with a ZrW2O8 particle size of 45-90 μm shows near-zero thermal expansion.

  2. Effect of freezing temperature in thermally induced phase separation method in hydroxyapatite/chitosan-based bone scaffold biomaterial

    NASA Astrophysics Data System (ADS)

    Albab, Muh Fadhil; Yuwono, Akhmad Herman; Sofyan, Nofrijon; Ramahdita, Ghiska

    2017-02-01

    In the current study, a hydroxyapatite (HA)/chitosan-based bone scaffold has been fabricated using the Thermally Induced Phase Separation (TIPS) method under freezing temperature variations of -20, -30, -40 and -80 °C. The samples, with a weight percent ratio of 70% HA to 30% chitosan, were homogeneously mixed and subsequently dissolved in 2% acetic acid. The synthesized samples were further characterized using Fourier transform infrared (FTIR) spectroscopy, compressive testing and scanning electron microscopy (SEM). The results showed that a lower freezing temperature reduced the pore size and increased the compressive strength of the scaffold. At a freezing temperature of -20 °C, the pore size was 133.93 µm with a compressive strength of 5.9 kPa, while at -80 °C the pore size declined to 60.55 µm with a compressive strength of 29.8 kPa. Considering the obtained characteristics, the HA/chitosan material obtained in this work has the potential to be applied as a bone scaffold.

  3. Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).

    PubMed

    Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B

    2010-04-01

    Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
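
    A sequential plan of this kind can be sketched with Wald's sequential probability ratio test for binomial counts. The boundaries below use the paper's 10% threshold and alpha = beta = 0.2, but the alternative-hypothesis injury level p1 is a made-up choice, so the resulting decision lines only approximate, and do not reproduce, the published plan.

```python
import math

def sprt_boundaries(p0, p1, alpha, beta, n_values):
    """Wald SPRT accept/reject lines for cumulative injured-boll counts."""
    lr = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    s = math.log((1 - p0) / (1 - p1)) / lr      # common slope of both lines
    h0 = math.log((1 - alpha) / beta) / lr      # accept-H0 intercept
    h1 = math.log((1 - beta) / alpha) / lr      # reject-H0 intercept
    for n in n_values:
        lower = s * n - h0   # at or below: stop sampling, no control needed
        upper = s * n + h1   # at or above: stop sampling, apply control
        print(f"bolls inspected n={n:3d}: continue while {lower:5.2f} < injured < {upper:5.2f}")

# H0 at the 10% injury threshold; H1 at 20% injury is a hypothetical choice
sprt_boundaries(p0=0.10, p1=0.20, alpha=0.20, beta=0.20, n_values=[20, 40, 60, 80])
```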

  4. The use of mini-samples in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez

    2009-10-01

    Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to that of 25 mm cores, however, has not been evaluated in detail before. Here we compare the site-mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k are on average slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-samples per site appears to outweigh the lower k values, yielding slightly smaller confidence limits α95 as well. Further, both k and α95 are less variable for mini-samples than for standard-size samples. This is also interpreted to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site-mean directions. Apart from this, mini-samples do present clear advantages during field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may also be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.

  5. How to improve the standardization and the diagnostic performance of the fecal egg count reduction test?

    PubMed

    Levecke, Bruno; Kaplan, Ray M; Thamsborg, Stig M; Torgerson, Paul R; Vercruysse, Jozef; Dobson, Robert J

    2018-04-15

    Although various studies have provided novel insights into how to best design, analyze and interpret a fecal egg count reduction test (FECRT), it is still not straightforward to provide guidance that allows improving both the standardization and the analytical performance of the FECRT across a variety of both animal and nematode species. For example, it has been suggested to recommend a minimum number of eggs to be counted under the microscope (not eggs per gram of feces), but we lack the evidence to recommend any number of eggs that would allow a reliable assessment of drug efficacy. Other aspects that need further research are the methodology of calculating uncertainty intervals (UIs; confidence intervals in case of frequentist methods and credible intervals in case of Bayesian methods) and the criteria of classifying drug efficacy into 'normal', 'suspected' and 'reduced'. The aim of this study is to provide complementary insights into the current knowledge, and to ultimately provide guidance in the development of new standardized guidelines for the FECRT. First, data were generated using a simulation in which the 'true' drug efficacy (TDE) was evaluated by the FECRT under varying scenarios of sample size, analytic sensitivity of the diagnostic technique, and level of both intensity and aggregation of egg excretion. Second, the obtained data were analyzed with the aim (i) to verify which classification criteria allow for reliable detection of reduced drug efficacy, (ii) to identify the UI methodology that yields the most reliable assessment of drug efficacy (coverage of TDE) and detection of reduced drug efficacy, and (iii) to determine the required sample size and number of eggs counted under the microscope that optimizes the detection of reduced efficacy. Our results confirm that the currently recommended criteria for classifying drug efficacy are the most appropriate. Additionally, the UI methodologies we tested varied in coverage and ability to detect reduced drug efficacy, thus a combination of UI methodologies is recommended to assess the uncertainty across all scenarios of drug efficacy estimates. Finally, based on our model estimates we were able to determine the required number of eggs to count for each sample size, enabling investigators to optimize the probability of correctly classifying a theoretical TDE while minimizing both financial and technical resources. Copyright © 2018 Elsevier B.V. All rights reserved.
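
    For concreteness, a minimal version of one common UI approach, a percentile bootstrap on the group-mean-based reduction estimate, is sketched below. The egg counts are simulated with a negative binomial to mimic the aggregated excretion the text discusses; every parameter value is illustrative and not taken from the study, and this is only one of the several UI methodologies the authors compare.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated pre- and post-treatment eggs-per-gram counts; the negative binomial
# mimics aggregated (overdispersed) egg excretion. Mean ~300 EPG, k = 0.7.
n_animals, true_efficacy = 20, 0.90
pre = rng.negative_binomial(n=0.7, p=0.7 / (0.7 + 300), size=n_animals)
post = rng.binomial(pre, 1.0 - true_efficacy)

def fecr(pre, post):
    """Group-mean fecal egg count reduction, in percent."""
    return 100.0 * (1.0 - post.mean() / pre.mean())

# Percentile-bootstrap uncertainty interval on the reduction estimate
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n_animals, n_animals)   # resample animals, paired
    boot[b] = fecr(pre[idx], post[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"FECR estimate: {fecr(pre, post):.1f}%  (95% UI: {lo:.1f}% to {hi:.1f}%)")
```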

  6. Iron Mineralogy and Speciation in Clay-Sized Fractions of Chinese Desert Sediments

    NASA Astrophysics Data System (ADS)

    Lu, Wanyi; Zhao, Wancang; Balsam, William; Lu, Huayu; Liu, Pan; Lu, Zunli; Ji, Junfeng

    2017-12-01

    Iron released from Asian desert dust may be an important source of bioavailable iron for the North Pacific Ocean and thereby may stimulate primary productivity. However, the Fe species of the fine dusts from this source region are poorly characterized. Here we investigate iron species and mineralogy in the clay-sized fractions (<2 μm), the size fraction most prone to long-distance transport as dust. Samples were analyzed by sequential chemical extraction, X-ray diffraction, and diffuse reflectance spectrometry. Our results show that Fe dissolved from easily reducible iron phases (ferrihydrite and lepidocrocite) and reducible iron oxides (dominated by goethite) are 0.81 wt % and 2.39 wt %, respectively, and Fe dissolved from phyllosilicates extracted by boiling HCl (dominated by chlorite) is 3.15 wt %. Dusts originating from deserts in northwestern China, particularly the Taklimakan desert, are relatively enriched in easily reducible Fe phases, probably due to abundant Fe contained in fresh weathering products resulting from the rapid erosion associated with active uplift of mountains to the west. Data about Fe speciation and mineralogy in Asian dust sources will be useful for improving the quantification of soluble Fe supplied to the oceans, especially in dust models.

  7. Towards integrated drug substance and drug product design for an active pharmaceutical ingredient using particle engineering.

    PubMed

    Kougoulos, Eleftherios; Smales, Ian; Verrier, Hugh M

    2011-03-01

    A novel experimental approach is described that integrates drug substance and drug product design using particle engineering techniques such as sonocrystallization, high shear wet milling (HSWM) and dry impact (hammer) milling, which were used to manufacture samples of an active pharmaceutical ingredient (API) with diverse particle sizes and size distributions. The API's instability was addressed using particle engineering and through judicious selection of excipients to reduce degradation reactions. API produced using a conventional batch cooling crystallization process resulted in content uniformity issues. Hammer milling increased fine particle formation, resulting in reduced content uniformity and increased degradation compared to sonocrystallized and HSWM API in the formulation. To ensure at least a 2-year shelf life, based on predictions using an Accelerated Stability Assessment Program, this API should have a D[v, 0.1] of 55 μm and a D[v, 0.5] of 140 μm. The particle size of the chief excipient in the drug product formulation needed to be close to that of the API to avoid content uniformity and stability issues, but large enough to reduce lactam formation. The novel methodology described here has potential for application to other APIs. © 2011 American Association of Pharmaceutical Scientists

  8. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
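
    The trade-off the authors model can be mimicked with a textbook two-arm sample-size formula plus a screening-cost term. In the sketch below the effect-size gain, the screen-fail rate as a function of cut-point, and the unit costs are all invented placeholders, so it shows the shape of the calculation rather than the paper's numbers.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Two-sample normal-approximation sample size per arm."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

def trial_cost(cut_point_percentile, cost_screen=500, cost_subject=20000):
    # Hypothetical enrichment model: selecting below the HCV percentile
    # increases the expected 2-year decline (delta) but raises screen-fails.
    screen_fail = cut_point_percentile           # fraction screened out
    delta = 1.0 + 1.5 * cut_point_percentile     # made-up effect-size gain
    n = 2 * n_per_arm(delta=delta, sd=4.0)       # both arms, arbitrary SD units
    n_screened = n / (1.0 - screen_fail) if screen_fail < 1 else float("inf")
    return n, n_screened * cost_screen + n * cost_subject

for cut in (0.0, 0.3, 0.5, 0.7):
    n, cost = trial_cost(cut)
    print(f"HCV cut-point at {cut:.0%} percentile: enroll {n:4d}, cost ${cost:,.0f}")
```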

  9. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
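
    As a simple frequentist counterpart to the Bayesian criteria used in the paper (not the paper's method), the sketch below sizes a single-imperfect-test prevalence study via the Rogan-Gladen correction, whose corrected estimate has variance p_app(1 - p_app) / (n J^2) with J = Se + Sp - 1, and attaches per-test costs; all sensitivities, specificities, and costs are hypothetical.

```python
import math
from scipy.stats import norm

def n_required(prev, se, sp, half_width, conf=0.95):
    """Normal-approximation sample size for estimating true prevalence
    with one imperfect test (Rogan-Gladen correction)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p_app = prev * se + (1 - prev) * (1 - sp)   # apparent prevalence
    J = se + sp - 1                             # Youden index
    return math.ceil(z**2 * p_app * (1 - p_app) / (half_width**2 * J**2))

# Hypothetical childhood-TB-like scenario: (Se, Sp, cost per test in dollars)
tests = {"culture": (0.6, 0.99, 30), "Xpert": (0.7, 0.98, 20), "CXR+TST": (0.8, 0.85, 15)}
for name, (se, sp, cost) in tests.items():
    n = n_required(prev=0.10, se=se, sp=sp, half_width=0.03)
    print(f"{name:8s}: n = {n:5d}, total testing cost = ${n * cost:,}")
```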

  10. Portfolio of automated trading systems: complexity and learning set size issues.

    PubMed

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
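
    The N/L degradation can be reproduced in miniature with a Monte Carlo experiment: estimate mean-variance weights from L simulated return histories of N strategies and measure the out-of-sample Sharpe shortfall against the known optimum. The return distribution and parameter values below are arbitrary, and the sketch omits the paper's clustering and fusion-agent machinery.

```python
import numpy as np

rng = np.random.default_rng(7)

def sharpe_shortfall(N, L, mu=0.05, sigma=1.0, trials=200):
    """Average out-of-sample Sharpe loss of estimated vs. true MV weights."""
    true_sharpe = mu / sigma * np.sqrt(N)        # iid case: equal weights optimal
    losses = []
    for _ in range(trials):
        R = rng.normal(mu, sigma, size=(L, N))   # L days of N ATS returns
        w = np.linalg.solve(np.cov(R.T) + 1e-9 * np.eye(N), R.mean(axis=0))
        w /= np.abs(w).sum()                     # normalize estimated weights
        oos_sharpe = mu * w.sum() / (sigma * np.linalg.norm(w))
        losses.append(true_sharpe - oos_sharpe)
    return np.mean(losses)

for N, L in [(10, 250), (50, 250), (50, 1250)]:
    print(f"N={N:3d}, L={L:5d}, N/L={N/L:.2f} -> Sharpe shortfall {sharpe_shortfall(N, L):.3f}")
```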

  11. Investigating the Effect of Impurities on Macromolecule Crystal Growth in Microgravity

    NASA Technical Reports Server (NTRS)

    Snell, Edward H.; Judge, Russell A.; Crawford, Lisa; Forsythe, Elizabeth L.; Pusey, Marc L.; Sportiello, Michael; Todd, Paul; Bellamy, Henry; Lovelace, Jeff; Cassanto, John M.

    2001-01-01

    Chicken egg-white lysozyme (CEWL) crystals were grown in microgravity and on the ground in the presence of various amounts of a naturally occurring lysozyme dimer impurity. No significant favorable differences in impurity incorporation between microgravity and ground crystal samples were observed. At low impurity concentration the microgravity crystals preferentially incorporated the dimer. The presence of the dimer in the crystallization solutions in microgravity reduced crystal size, increased mosaicity and reduced the signal to noise ratio of the X-ray data. Microgravity samples proved more sensitive to impurity. Accurate indexing of the reflections proved critical to the X-ray analysis. The largest crystals with the best X-ray diffraction properties were grown from pure solution in microgravity.

  12. Finite element model study of the effect of corner rounding on detectability of corner cracks using bolt hole eddy current

    NASA Astrophysics Data System (ADS)

    Underhill, P. R.; Krause, T. W.

    2017-02-01

    Recent work has shown that the detectability of corner cracks in bolt-holes is compromised when rounding of the corners arises, as might occur during bolt-hole rework. Probability of Detection (POD) studies normally require a large number of samples of both fatigue cracks and electric discharge machined notches. In the particular instance of rounding of bolt-hole corners, the generation of such a large set of samples representing the full spectrum of potential rounding would be prohibitive. In this paper, the application of Finite Element Method (FEM) modeling is used to supplement the study of the detection of cracks forming at the rounded corners of bolt-holes. FEM models show that rounding of the corner of the bolt-hole reduces the size of the response to a corner crack to a greater extent than can be accounted for by the loss of crack area. This reduced sensitivity can be ascribed to a lower concentration of eddy currents at the rounded corner surface and greater lift-off of pick-up coils relative to that of a straight-edge corner. A rounding with a radius of 0.4 mm (0.016 inch) showed a 20% reduction in the strength of the crack signal. Assuming linearity of the crack signal with crack size, this would suggest an increase in the minimum detectable size by 25%.

  13. Electrical conductivity enhancement in heterogeneously doped scandia-stabilized zirconia

    NASA Astrophysics Data System (ADS)

    Varanasi, Chakrapani; Juneja, Chetan; Chen, Christina; Kumar, Binod

    Composites of 6 mol% scandia-stabilized zirconia (6ScSZ) and nanosize Al2O3 powder (0-30 wt.%) were prepared and characterized for electrical conductivity by the ac impedance method at temperatures ranging from 300 to 950 °C. All the composites characterized showed improved conductivity at higher temperatures compared to the undoped ScSZ. An average conductivity of 0.12 S cm-1 was measured at 850 °C for the 6ScSZ + 30 wt.% Al2O3 composite samples, an increase in conductivity of up to 20% compared to the undoped 6ScSZ specimen at this temperature. Microstructural evaluation using scanning electron microscopy revealed that the ScSZ grain size was relatively unchanged up to 10 wt.% Al2O3 addition. However, the grain size was reduced in samples with higher (20 and 30 wt.%) additions of Al2O3. The small grain size, reduced quantity of the 6ScSZ material (only 70%), and improved conductivity make these ScSZ + 30 wt.% Al2O3 composites very attractive as electrolyte materials in view of their collective mechanical and electrical properties and cost requirements. The observed increase in conductivity with the addition of an insulating Al2O3 phase is explained in light of the space charge regions at the 6ScSZ-Al2O3 grain boundaries.

  14. Integrity of nuclear genomic deoxyribonucleic acid in cooked meat: Implications for food traceability.

    PubMed

    Aslan, O; Hamill, R M; Sweeney, T; Reardon, W; Mullen, A M

    2009-01-01

    It is essential to isolate high-quality DNA from muscle tissue for PCR-based applications in traceability of animal origin. We wished to examine the impact of cooking meat to a range of core temperatures on the quality and quantity of subsequently isolated genomic (specifically, nuclear) DNA. Triplicate steak samples were cooked in a water bath (100 degrees C) until their final internal temperature was 75, 80, 85, 90, 95, or 100 degrees C, and DNA was extracted. Deoxyribonucleic acid quantity was significantly reduced in cooked meat samples compared with raw (6.5 vs. 56.6 ng/microL; P < 0.001), but there was no relationship with cooking temperature. Quality (A(260)/A(280), i.e., absorbance at 260 and 280 nm) was also affected by cooking (P < 0.001). For all 3 genes, large PCR amplicons (product size >800 bp) were observed only when using DNA from raw meat and steak cooked to lower core temperatures. Small amplicons (<200 bp) were present for all core temperatures. Cooking meat to high temperatures thus resulted in a reduced overall yield and probable fragmentation of DNA to sizes less than 800 bp. Although nuclear DNA is preferable to mitochondrial DNA for food authentication, it is less abundant, and results suggest that analyses should be designed to use small amplicon sizes for meat cooked to high core temperatures.

  15. Feedback Augmented Sub-Ranging (FASR) Quantizer

    NASA Technical Reports Server (NTRS)

    Guilligan, Gerard

    2012-01-01

    This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratio of the feedback and input capacitors caused by manufacturing tolerances, called mismatch. The mismatch causes non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage, giving a higher closed-loop bandwidth compared to the prior art, and (3) a reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized relative to the sampling capacitors.
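
    To see why capacitor mismatch matters in a conventional charge-redistribution stage (the sketch illustrates the error source the text describes, not the FASR circuit itself, which is not detailed here), note that a nominal gain-of-two residue stage realizes gain 1 + C1/C2, so any relative mismatch between the two capacitors shifts the gain directly:

```python
import numpy as np

rng = np.random.default_rng(3)

def residue_stage(vin, d, c1, c2, vref=1.0):
    """Conventional charge-redistribution residue stage: gain = 1 + C1/C2."""
    return (1.0 + c1 / c2) * vin - d * (c1 / c2) * vref

# Nominal unit capacitors with a hypothetical 0.1% sigma manufacturing mismatch
eps = rng.normal(0.0, 1e-3, size=2)
c1, c2 = 1.0 + eps[0], 1.0 + eps[1]

vin, d = 0.3, 1                       # example residue computation for one code
ideal = residue_stage(vin, d, 1.0, 1.0)
actual = residue_stage(vin, d, c1, c2)

print(f"gain error from mismatch: {c1 / c2 - 1.0:+.2e}")
print(f"residue error: {actual - ideal:+.2e} V")
# A 16x (4-bit) reduction of this residue error, as claimed for the new
# approach, would bring it down by more than an order of magnitude.
```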

  16. Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.

    PubMed

    Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo

    2014-08-01

    Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of the sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10^0-10^1, 10^1-10^2, 10^2-10^3 CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used in a Monte Carlo simulation to estimate the lowest number of sample units that need to be tested in order to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 in order to reach this confidence level, while the minimum number of pools to be tested was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
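
    In its simplest form, the lot-acceptance logic behind such a simulation reduces to requiring (1 - p_detect)^n ≤ 0.05 for n tested pools; the sketch below checks that closed form against a small Monte Carlo using the per-pool detection rates reported above as fixed inputs. The study's own simulation additionally models contamination prevalence and levels, which is why its published pool counts differ from this bare-bones version.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def pools_needed(p_detect, confidence=0.95):
    """Smallest n with P(at least one positive pool) >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

def monte_carlo_miss_rate(p_detect, n_pools, trials=100_000):
    """Fraction of simulated contaminated lots with zero positive pools."""
    hits = rng.random((trials, n_pools)) < p_detect
    return 1.0 - hits.any(axis=1).mean()

for method, p in [("ISO 6579", 0.12), ("real-time PCR", 0.40)]:
    n = pools_needed(p)
    print(f"{method:14s}: p_detect={p:.2f} -> {n} pools "
          f"(simulated miss rate {monte_carlo_miss_rate(p, n):.3f})")
```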

  17. Effects of substrate microstructure on the formation of oriented oxide nanotube arrays on Ti and Ti alloys

    NASA Astrophysics Data System (ADS)

    Ferreira, C. P.; Gonçalves, M. C.; Caram, R.; Bertazzoli, R.; Rodrigues, C. A.

    2013-11-01

    The formation of nanotubular oxide layers on Ti and Ti alloys has been widely investigated for the photocatalytic degradation of organic compounds due to their excellent catalytic efficiency, chemical stability, and low cost and toxicity. Aiming to improve the photocatalytic efficiency of this nanostructured oxide, this work investigated the influence of substrate grain size on the growth of nanotubular oxide layers. Ti and Ti alloys (Ti-6Al, Ti-6Al-7Nb) were produced by arc melting with a non-consumable tungsten electrode and a water-cooled copper hearth under an argon atmosphere. Some of the ingots were heat treated at 1000 °C for 12 or 24 h in an argon atmosphere, followed by slow cooling, to reduce crystalline defects and increase the grain size of their microstructures. Three types of samples were anodized: commercial substrates, as-prepared samples and heat-treated samples. The anodization was performed using a fluoride solution and a cell potential of 20 V. The samples were characterized by optical microscopy, field-emission scanning electron microscopy and X-ray diffraction. The heat treatment preceding the anodization process increased the grain size of pure Ti and the Ti alloys and promoted the formation of Widmanstätten structures in Ti-6Al-7Nb. The nanotube layers grown on smaller-grain, thermally untreated samples were more regular and homogeneous. In the case of the Ti-6Al-7Nb alloy, which presents an α + β phase microstructure, the morphology of nanotubes nucleated on the α matrix was more regular than that of nanotubes nucleated on the β phase. After the annealing process, the Ti-6Al-7Nb alloy underwent a full diffusion process with growth of the equilibrium phases, resulting in regions containing higher concentrations of Nb, i.e., β phase. In those regions, the dissolution rate of Nb2O5 is lower than that of TiO2, resulting in a nanoporous layer. In general, heat treating reduces crystalline defects and promotes grain growth, which does not favor nanotube nucleation and growth on the metallic surface.

  18. Reduced Data Dualscale Entropy Analysis of HRV Signals for Improved Congestive Heart Failure Detection

    NASA Astrophysics Data System (ADS)

    Kuntamalla, Srinivas; Lekkala, Ram Gopal Reddy

    2014-10-01

    Heart rate variability (HRV) is an important dynamic variable of the cardiovascular system, which operates on multiple time scales. In this study, multiscale entropy (MSE) analysis is applied to HRV signals taken from PhysioBank to discriminate congestive heart failure (CHF) patients from healthy young and elderly subjects. The discrimination power of the MSE method decreases as the amount of data is reduced, and the smallest data length at which there is a clear discrimination between CHF and normal subjects is found to be 4000 samples. Further, the method fails to discriminate CHF from healthy elderly subjects. In view of this, the Reduced Data Dualscale Entropy Analysis method is proposed, which reduces the data size required (to as few as 500 samples) for clearly discriminating CHF patients from young and elderly subjects using only two scales. Further, an easy-to-interpret index is derived using this new approach for the diagnosis of CHF. This index shows 100% accuracy and correlates well with the pathophysiology of heart failure.
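
    The abstract does not give the algorithm's internals, so the sketch below shows only the standard building blocks such a method rests on: coarse-graining a synthetic RR-interval series and computing sample entropy at two scales. The parameters (m=2, r=0.2*SD) are conventional multiscale entropy defaults, not necessarily those of the authors' Reduced Data Dualscale procedure.

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Average consecutive, non-overlapping windows of length `scale`."""
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_factor=0.2):
        """SampEn = -ln(A/B) with tolerance r = r_factor * SD(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def matched_pairs(length):
            t = np.lib.stride_tricks.sliding_window_view(x, length)
            d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)  # Chebyshev
            return ((d <= r).sum() - len(t)) / 2  # exclude self-matches

        b, a = matched_pairs(m), matched_pairs(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rr = np.random.default_rng(0).normal(0.8, 0.05, 500)  # synthetic RR series
    for scale in (1, 2):  # two scales, as in a dual-scale analysis
        print(scale, round(sample_entropy(coarse_grain(rr, scale)), 3))
    ```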

  19. Thermoelectric technique to precisely control hyperthermic exposures of human whole blood.

    PubMed

    DuBose, D A; Langevin, R C; Morehouse, D H

    1996-12-01

    The need in military research to avoid exposing humans to harsh environments and to reduce animal use requires the development of in vitro models for the study of hyperthermic injury. A thermoelectric module (TEM) system was employed to heat human whole blood (HWB) in a manner similar to that experienced by heat-stroked rats. This system precisely and accurately replicated mild, moderate, and extreme heat-stress exposures. Temperature changes could be monitored without the introduction of a test-sample thermistor, which reduced contamination problems. HWB with hematocrits of 45 or 50% had similar heating curves, indicating that the system compensated for differences in sample character. The unit's size permitted its containment within a standard carbon dioxide incubator to further control the sample environment. These results indicate that the TEM system can precisely control temperature change in this in vitro heat-stress model employing HWB. Information obtained from such a model could contribute to military preparedness.

  20. Effective population sizes of a major vector of human diseases, Aedes aegypti.

    PubMed

    Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R

    2017-12-01

    The effective population size (Ne) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, Ne is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate Ne in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate Ne on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our Ne estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (Nc) estimates for this species range between one and five thousand, so the Ne/Nc ratio is about the same as for most animals. These Ne values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.
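
    The authors' estimation pipeline is not reproduced here; as a hedged illustration, the textbook moment-based temporal method (Nei and Tajima's standardized variance Fc, with Waples' correction for finite sample sizes) can be sketched with made-up allele frequencies:

    ```python
    import numpy as np

    def fc(x, y):
        """Nei-Tajima standardized variance in allele frequency between two
        temporal samples (x, y: per-locus allele-frequency arrays)."""
        z = (x + y) / 2
        return np.mean((x - y) ** 2 / (z - x * y))

    def temporal_ne(x, y, t_gen, s0, st):
        """Moment estimate of Ne from samples taken t_gen generations apart,
        correcting Fc for sampling noise (s0, st diploid individuals)."""
        f_hat = fc(x, y) - 1 / (2 * s0) - 1 / (2 * st)
        return t_gen / (2 * f_hat)

    # hypothetical biallelic SNP frequencies at two time points
    x = np.array([0.30, 0.55, 0.12, 0.80])
    y = np.array([0.35, 0.50, 0.18, 0.74])
    print(temporal_ne(x, y, t_gen=5, s0=100, st=100))  # ~340, a plausible Ne
    ```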

  1. Effect of bismuth substitution in strontium hexaferrite

    NASA Astrophysics Data System (ADS)

    Sahoo, M. R.; Kuila, S.; Sweta, K.; Barik, A.; Vishwakarma, P. N.

    2018-05-01

    Bismuth (Bi)-substituted M-type strontium hexaferrites (Sr1-xBixFe12O19, x=0 and 0.02) were synthesized by the sol-gel auto-combustion method. Powder X-ray diffraction (XRD) and field-emission scanning electron microscopy (FESEM) show an increase in lattice parameter and in particle size (500 nm to 3 micron), respectively, for the Bi-substituted sample. M-H magnetization measurements show a decrease in magnetic hardness for the Bi-substituted samples. M-T data for the parent (x=0) sample show an antiferromagnetic transition in the ZFC plot at 495 °C. This antiferromagnetic transition is replaced by a ferromagnetic transition in the FCW measurement. Similar behavior is displayed by the Bi-substituted sample, with the transition temperature reduced to 455 °C.

  2. Investigating the characteristic strength of flocs formed from crude and purified Hibiscus extracts in water treatment.

    PubMed

    Jones, Alfred Ndahi; Bridgeman, John

    2016-10-15

    The growth, breakage and re-growth of flocs formed using crude and purified seed extracts of Okra (OK), Sabdariffa (SB) and Kenaf (KE) as coagulants and coagulant aids were assessed. The results showed that floc size increased from 300 μm, when aluminium sulphate (AS) was used alone as a coagulant, to between 696 μm and 722 μm with the addition of 50 mg/l of crude OK, KE or SB samples as coagulant aids. Similarly, an increase in floc size was observed when each of the purified proteins was used as a coagulant aid at doses of between 0.123 and 0.74 mg/l. The largest floc sizes of 741 μm, 460 μm and 571 μm were obtained with a 0.123 mg/l dose of purified Okra protein (POP), purified Sabdariffa protein (PSP) and purified Kenaf protein (PKP), respectively. Further coagulant aid addition from 0.123 to 0.74 mg/l resulted in a decrease in floc size and strength for POP and PSP. However, an increase in floc strength and a reduced d50 size were observed for PKP at a dose of 0.74 mg/l. Flocs produced when using purified and crude extract samples as coagulant aids exhibited high recovery factors and strength. However, flocs exhibited greater recovery post-breakage when the extracts were used as a primary coagulant. It was observed that the combination of purified proteins and AS improved floc size, strength and recovery factors. Therefore, the application of Hibiscus seeds in either crude or purified form increases floc growth, strength and recoverability, and can also reduce the cost associated with the import of AS in developing countries. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  3. Advancements in the safe identification of explosives using a Raman handheld instrument (ACE-ID)

    NASA Astrophysics Data System (ADS)

    Arnó, Josep; Frunzi, Michael; Kittredge, Marina; Sparano, Brian

    2014-05-01

    Raman spectroscopy is the technology of choice for identifying bulk solid- and liquid-phase unknown samples without the need to contact the substance. Materials can be identified through transparent and semi-translucent containers such as plastic and glass. ConOps in emergency response and military field applications require the redesign of conventional laboratory units for: field portability; shock, thermal and chemical attack resistance; easy and intuitive use in restrictive gear; and reduced size, weight, and power. This article introduces a new handheld instrument (ACE-ID™) designed to take Raman technology to the next level in terms of size, safety, speed, and analytical performance. ACE-ID is ruggedized for use in severe climates and terrains. It is lightweight and can be operated with just one hand. An intuitive software interface guides users through the entire identification process, making it easy to use by personnel of different skill levels, including military explosive ordnance disposal technicians, civilian bomb squads and hazmat teams. Through the use of embedded advanced algorithms, the instrument is capable of providing fluorescence correction and analysis of binary mixtures. Instrument calibration is performed automatically upon startup without requiring user intervention. ACE-ID incorporates an optical rastering system that diffuses the laser energy over the sample. This important innovation significantly reduces the heat induced in dark samples and the probability of igniting susceptible explosive materials. In this article, the explosives identification performance of the instrument is presented, in addition to a quantitative evaluation of the safety improvements derived from the reduced ignition probabilities.

  4. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
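
    For context, the normal-approximation formula behind statements like "powered to detect a standardized mean difference of 0.5" is n per group = 2((z_(1-alpha/2) + z_(1-beta)) / d)^2; a short sketch using this standard formula (not code from the review):

    ```python
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        """Normal-approximation sample size per arm for a two-sample test
        detecting a standardized mean difference d."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z / d) ** 2

    for d in (0.3, 0.5, 0.8):
        print(d, round(n_per_group(d)))  # ~175, ~63, ~25 per group
    ```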

  5. Solution of reduced graphene oxide synthesized from coconut shells and its optical properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mas’udah, Kusuma Wardhani, E-mail: masudahkusuma@ymail.com; Faculty of Mathematics and Natural Sciences, Universitas Pesantren Tinggi Darul ‘Ulum, PP. Darul ‘Ulum Tromol Pos 10 Peterongan Jombang 61481; Nugraha, I Made Ananta, E-mail: anantanugraha25@gmail.com

    Reduced graphene oxide (rGO) powder has been prepared from coconut shells by a carbonization process at 400 °C for 3, 4 and 5 hours. The resulting sample mass was reduced to about 60% of that of the starting material. Longer heating durations also led to rGO with reduced crystallinity, according to the X-ray diffractometry and TEM data. The rGO solutions were prepared by adding 5, 10 and 15 grams of powder to 50 ml of distilled water and then centrifuging at 6000 rpm for 30 minutes. The resulting solutions varied from clear transparent, through light and dark yellow, to black. Measurements using a particle size analyser show that the individual rGO particles tend to agglomerate with each other to form larger clusters, manifested by the larger particle sizes observed for increasing amounts of dissolved rGO powder in water. The varying UV-visible spectra of these rGO solutions, together with their optical bandgaps, are also discussed in this study.

  6. Observation of the sweating in lipstick by scanning electron microscopy.

    PubMed

    Seo, S Y; Lee, I S; Shin, H Y; Choi, K Y; Kang, S H; Ahn, H J

    1999-06-01

    The relationship between the wax matrix in lipstick and sweating has been investigated by observing the change in size and shape of the wax matrix due to sweating by scanning electron microscopy (SEM). For observation by SEM, a lipstick sample was frozen in liquid nitrogen. The oil in the lipstick was then extracted in cold isopropanol (-70 degrees C) for 1-3 days. After the isopropanol was evaporated, the sample was sputtered with gold and examined by SEM. The change of the wax matrix underneath the surface from a fine, uniform structure to a coarse, nonuniform structure resulted from the caking of the surrounding wax matrix. The oil underneath the surface migrated to the surface of the lipstick with sweating; consequently, the wax matrix in that region was rearranged into a coarse matrix. In the case of flamed lipstick, sweating was delayed and the wax matrix was much coarser than that of the unflamed one. The larger wax matrix at the surface region was better at retaining oil. The effect of molding temperature on sweating was also studied. As the molding temperature rose, sweating was greatly reduced and the size of the wax matrix increased. It was found that sweating was influenced by the compatibility of wax and oil. A formula consisting of wax and oil that have good compatibility has a tendency to reduce sweating and increase the size of the wax matrix. When pigments were added to wax and oil, the size of the wax matrix was changed, but in all cases sweating was increased due to the weakening of the binding force between wax and oil. On observing the thick membrane of wax at the surface of lipstick a month after molding, it was also found that sweating was influenced by ageing. In conclusion, the structure of the wax matrix at the surface region of lipstick was changed by the process of flaming, the molding temperature, the compatibility of wax and oil, the addition of pigment, and ageing. In most cases, as the size of the wax matrix was increased, sweating was reduced and delayed.

  7. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs with effects that develop over time. In addition, the sample size will always increase, because a reduced time on placebo results in a decreased observable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.

  8. Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment

    PubMed Central

    Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.

    2016-01-01

    Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826

  9. Evolution of porous structure and texture in nanoporous SiO2/Al2O3 materials during calcination

    NASA Astrophysics Data System (ADS)

    Glazkova, Elena A.; Bakina, Olga V.

    2016-11-01

    The study focuses on the evolution of the porous structure and texture of silica/alumina xerogels during calcination in the temperature range from 500 to 1200°C. The xerogel was prepared via a sol-gel method using subcritical drying. The silica/alumina xerogels were examined using transmission electron microscopy-energy dispersive spectroscopy (TEM-EDS), Brunauer-Emmett-Teller and Barrett-Joyner-Halenda analysis (BET-BJH), differential scanning calorimetry (DSC), and Fourier transform infrared (FTIR) spectroscopy. SiO2 primary particles about 10 nm in size are connected with each other to form a porous xerogel structure. Alumina is uniformly distributed over the xerogel volume. The changes in textural characteristics under heat treatment are substantial; the specific surface area and pore size attain their maxima at 500-700°C. Heat treatment causes dehydroxylation of the xerogel surface, and at 1200°C the sample sinters, loses mesoporosity, and its specific surface area decreases considerably, down to 78 m²/g.

  10. 'Mitominis': multiplex PCR analysis of reduced size amplicons for compound sequence analysis of the entire mtDNA control region in highly degraded samples.

    PubMed

    Eichmann, Cordula; Parson, Walther

    2008-09-01

    The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding to homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have been successfully applied to ancient and forensic samples, such as bones and teeth, that showed a high degree of degradation.

  11. Structural and magnetic analysis of La0.67Ca0.33MnO3 nanoparticles thermally treated: Acoustic detection of the magnetocaloric effect

    NASA Astrophysics Data System (ADS)

    Pena, C. F.; Soffner, M. E.; Mansanares, A. M.; Sampaio, J. A.; Gandra, F. C. G.; da Silva, E. C.; Vargas, H.

    2017-10-01

    Nanoparticles of La0.67Ca0.33MnO3 were synthesized via the sol-gel method, thermally treated, and characterized using X-ray diffraction, magnetization, electron spin resonance and magnetoacoustic experiments. The formation of the desired perovskite structure was verified and the average size of the nanoparticles was determined. An increase in particle size with rising treatment temperature was observed. The Curie temperature and the isothermal entropy variation of the samples were obtained from the magnetization data. The isothermal entropy change produced under the application of an external magnetic field, which expresses the magnetocaloric effect, became significantly larger for the samples treated at higher temperatures. These results are in good agreement with those obtained by magnetoacoustics, based on the direct and contactless measurement of the temperature change, validating the ability of the technique to study the magnetocaloric effect in nanoparticle samples of reduced mass.

  12. Impact of minimum catch size on the population viability of Strombus gigas (Mesogastropoda: Strombidae) in Quintana Roo, Mexico.

    PubMed

    Peel, Joanne R; Mandujano, María del Carmen

    2014-12-01

    The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean, but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species in CITES Appendix II and IUCN's Red List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% of the adult biomass, on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss the biological implications. Demographic data were obtained through capture-mark-recapture sampling, collecting all animals encountered during three hours, by three divers, at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with a formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median 210 mm. Half of the sampled conch with a lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator of sexual maturity, it can be concluded that many animals may form their lip at shell lengths greater than 200 mm and ought to be considered immature. Estimates of relative adult abundance and density varied greatly depending on the criteria employed for adult classification. When using a minimum fishing size of 200 mm shell length, between 26.2% and up to 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the population. When conch extraction was simulated using a classification criterion based on lip thickness, it had a much smaller impact on the population density. We concluded that the best management strategy for S. gigas is a minimum fishing size based on lip thickness, since it has a lower impact on the population density, and given that selective fishing pressure based on size may lead to the appearance of small adult individuals with reduced fecundity. Furthermore, based on the reproductive biology and the results of the simulated fishing, we suggest a minimum lip thickness of ≥ 15 mm, which ensures the protection of reproductive stages and reduces the risk of overfishing and of reducing the density to non-viable levels.

  13. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  14. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
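
    A small simulation (our illustration, not the authors' analysis) shows how selective publication of significant results alone can manufacture a negative correlation between effect size and sample size, even when the true effect is zero:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    es, ns = [], []
    while len(es) < 1000:
        n = int(rng.integers(10, 200))       # per-group sample size
        a = rng.normal(0.0, 1.0, n)          # true effect is zero
        b = rng.normal(0.0, 1.0, n)
        t, p = stats.ttest_ind(a, b)
        if p < 0.05:                         # only "significant" studies published
            sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            es.append(abs(b.mean() - a.mean()) / sd)
            ns.append(n)
    print(stats.pearsonr(es, ns)[0])         # strongly negative
    ```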

  15. Strategies for informed sample size reduction in adaptive controlled clinical trials

    NASA Astrophysics Data System (ADS)

    Arandjelović, Ognjen

    2017-12-01

    Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, lies in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, using, respectively, (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with consistently superior performance exhibited by the pair-wise sample compatibilities-based method.
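
    As a minimal sketch of the first variant (repeated random draws), under our own assumption that "loss of information" is measured by how far the retained subsample's mean and variance drift from those of the full sample:

    ```python
    import numpy as np

    def reduce_sample(y, k, n_draws=10_000, seed=0):
        """Choose k participants to drop so that the retained subsample's
        mean and variance stay closest to the full sample's."""
        rng = np.random.default_rng(seed)
        mu, var = y.mean(), y.var(ddof=1)
        best_keep, best_loss = None, np.inf
        for _ in range(n_draws):
            keep = rng.choice(len(y), size=len(y) - k, replace=False)
            loss = (y[keep].mean() - mu) ** 2 + (y[keep].var(ddof=1) - var) ** 2
            if loss < best_loss:
                best_keep, best_loss = keep, loss
        return best_keep

    y = np.random.default_rng(2).normal(50, 10, 120)  # simulated outcomes
    kept = reduce_sample(y, k=20)
    print(y.mean(), y[kept].mean())  # nearly identical after removal
    ```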

  16. Influence of preservative and mounting media on the size and shape of monogenean sclerites.

    PubMed

    Fankoua, Severin-Oscar; Bitja Nyom, Arnold R; Bahanak, Dieu Ne Dort; Bilong Bilong, Charles F; Pariselle, Antoine

    2017-08-01

    Based on Cichlidogyrus sp. (Monogenea, Ancyrocephalidae) specimens from Hemichromis sp. hosts, we tested the influence of different methods of fixing/preserving samples and specimens [frozen material, alcohol- or formalin-preserved, museum process for fish preservation (fixed in formalin and preserved in alcohol)] and of different media used to mount the slides [tap water, glycerin ammonium picrate (GAP), Hoyer's medium (HM)] on the size and shape of the sclerotized parts of monogenean specimens. The results show that the use of HM significantly increases the size of the haptoral sclerites [marginal hooks I, II, IV, V, and VI; dorsal bar length, width, distance between auricles and auricle length; ventral bar length and width] and changes their shape [angle opening between shaft and guard (outer and inner roots) in both ventral and dorsal anchors; ventral bar much wider; dorsal bar less curved]. This influence seems to be reduced when specimens/samples are fixed in formalin. Because the systematics of the Monogenea is based on the size and shape of their sclerotized parts, to prevent misidentifications or descriptions of invalid new species we recommend the use of GAP as mounting medium; Hoyer's medium should be restricted to monogenean specimens fixed for a long time, which are more shrunken.

  17. Infraocclusion: Dental development and associated dental variations in singletons and twins.

    PubMed

    Odeh, Ruba; Townsend, Grant; Mihailidis, Suzanna; Lähdesmäki, Raija; Hughes, Toby; Brook, Alan

    2015-09-01

    The aim of this study was to investigate the prevalence of selected dental variations in association with infraocclusion, as well as determining the effects of infraocclusion on dental development and tooth size, in singletons and twins. Two samples were analysed. The first sample comprised 1454 panoramic radiographs of singleton boys and girls aged 8-11 years. The second sample comprised dental models of 202 pairs of monozygotic and dizygotic twins aged 8-11 years. Adobe Photoshop CS5 was used to construct reference lines and measure the extent of infraocclusion (in mm) of primary molars on the panoramic radiographs and on 2D images obtained from the dental models. The panoramic radiographs were examined for the presence of selected dental variations and to assess dental development following the Demirjian and Willems systems. The twins' dental models were measured to assess mesiodistal crown widths. In the singleton sample there was a significant association of canines in an altered position during eruption and the lateral incisor complex (agenesis and/or small tooth size) with infraocclusion (P<0.001), but there was no significant association between infraocclusion and agenesis of premolars. Dental age assessment revealed that dental development was delayed in individuals with infraocclusion compared to controls. The primary mandibular canines were significantly smaller in size in the infraoccluded group (P<0.05). The presence of other dental variations in association with infraocclusion, as well as delayed dental development and reduced tooth size, suggests the presence of a pleiotropic effect. The underlying aetiological factors may be genetic and/or epigenetic. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    PubMed

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.
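
    The abstract does not spell out the compensation formula; the sketch below uses the standard first-order SFO model from the OFDM literature, in which subcarrier k of the m-th symbol is rotated by a phase 2*pi*m*k*delta*(N+Ng)/N for a relative clock offset delta. It captures only the phase-rotation term, not the ISI component the paper also compensates, and the FFT size and cyclic-prefix length below are placeholders.

    ```python
    import numpy as np

    def sfo_phase(m, k, delta, n_fft=64, n_cp=16):
        """First-order phase rotation of subcarrier k in OFDM symbol m for a
        relative sampling-clock offset delta (e.g. 300e-6 = 300 ppm)."""
        return 2 * np.pi * m * k * delta * (n_fft + n_cp) / n_fft

    delta = 300e-6                        # 300 ppm offset
    m = np.arange(100)[:, None]           # OFDM symbol index
    k = np.arange(-32, 32)[None, :]       # subcarrier index
    R = np.exp(1j * sfo_phase(m, k, delta))            # rotated unit symbols
    R_comp = R * np.exp(-1j * sfo_phase(m, k, delta))  # centralized compensation
    print(np.allclose(R_comp, 1.0))       # True: rotation removed
    ```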

  19. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
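
    For the classical normal-theory analogue of this allocation problem, the cost-optimal allocation follows the square-root rule n1/n2 = (s1/s2) * sqrt(c2/c1); the trimmed-mean formulas presumably substitute Winsorized variances, but the structure of the rule is the same. A sketch under these assumptions:

    ```python
    import math

    def allocation_ratio(s1, s2, c1, c2):
        """Cost-optimal ratio n1/n2 for a two-group mean comparison, where
        s1, s2 are group SDs and c1, c2 are per-subject sampling costs."""
        return (s1 / s2) * math.sqrt(c2 / c1)

    # group 1 twice as variable, group 2 three times as costly to sample
    print(allocation_ratio(s1=2.0, s2=1.0, c1=1.0, c2=3.0))  # ~3.46
    ```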

  20. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
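
    For orientation, the best-known binomial benchmark behind "90/95 POD" is the zero-failure case: the smallest number of consecutive hits n such that a true POD of only 0.90 would produce n straight hits with probability at most 5%, i.e. 0.9^n <= 0.05:

    ```python
    def min_hits_for_pod(pod=0.90, confidence=0.95):
        """Smallest n with zero misses demonstrating POD >= pod at the
        given confidence level."""
        n = 1
        while pod ** n > 1 - confidence:
            n += 1
        return n

    print(min_hits_for_pod())  # 29: the classic 29-of-29 criterion
    ```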

  1. Structure and properties of carbon black particles

    NASA Astrophysics Data System (ADS)

    Xu, Wei

    The structure and properties of carbon black particles were investigated using atomic force microscopy, gas adsorption, Raman spectroscopy, and X-ray diffraction. Supplementary information was obtained using TEM and neutron scattering. The AFM imaging of carbon black aggregates provided qualitative visual information on their morphology, complementary to that obtained by 3-D modeling based on TEM images. Our studies showed that carbon black aggregates were relatively flat. The surface of all untreated carbon black particles was found to be rough, with a fractal dimension of 2.2. Heating reduced the roughness, and the fractal dimension of all samples heat treated above 1300 K dropped to 2.0. Once the samples were heat treated, rapid cooling did not affect the surface roughness. However, rapid cooling reduced crystallite sizes, and different Raman spectra were obtained for carbon blacks with different histories of heat treatment. By analyzing the Raman spectra we determined the crystallite sizes and identified amorphous carbon. The concentration of amorphous carbon depends on the hydrogen content. Once hydrogen was liberated at increased temperature, the concentration of amorphous carbon was reduced and the crystallites started to grow. The properties of carbon blacks at high pressure were also studied. Hydrostatic pressure did not affect the size of the crystallites in carbon black particles. The pressure-induced shift in the Raman frequency of the graphitic component was a result of increased intermolecular forces and not of smaller crystallites. Two methods of determining the fractal dimension, the FHH model and the yardstick technique based on the BET theory, have been used in the literature. Our study showed that the FHH model is sensitive to numerous assumptions and leads to wrong conclusions. On the other hand, the yardstick method gave correct results, which agreed with the AFM results.

  2. Is using multiple imputation better than complete case analysis for estimating a prevalence (risk) difference in randomized controlled trials when binary outcome observations are missing?

    PubMed

    Mukaka, Mavuto; White, Sarah A; Terlouw, Dianne J; Mwapasa, Victor; Kalilani-Phiri, Linda; Faragher, E Brian

    2016-07-22

    Missing outcomes can seriously impair the ability to make correct inferences from randomized controlled trials (RCTs). Complete case (CC) analysis is commonly used, but it reduces the sample size and is perceived to lead to reduced statistical efficiency of estimates while increasing the potential for bias. As multiple imputation (MI) methods preserve sample size, they are generally viewed as the preferred analytical approach. We examined this assumption, comparing the performance of CC and MI methods in determining risk difference (RD) estimates in the presence of missing binary outcomes. We conducted simulation studies of 5000 simulated data sets, with 50 imputations, of RCTs with one primary follow-up endpoint at different underlying levels of RD (3-25%) and of missing outcomes (5-30%). For outcomes missing at random (MAR) or missing completely at random (MCAR), CC estimates generally remained unbiased and achieved precision similar to or better than MI methods, with high statistical coverage. Missing not at random (MNAR) scenarios yielded invalid inferences with both methods. Bias in the effect size estimate was reduced in MI methods by always including group membership, even if this was unrelated to missingness. Surprisingly, under MAR and MCAR conditions in the assessed scenarios, MI offered no statistical advantage over CC methods. While MI must inherently accompany CC methods for intention-to-treat analyses, these findings endorse CC methods for per-protocol risk difference analyses in these conditions. These findings provide an argument for the use of the CC approach to always complement MI analyses, with the usual caveat that the validity of the mechanism for missingness be thoroughly discussed. More importantly, researchers should strive to collect as much data as possible.
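
    A compact simulation consistent with the abstract's MCAR finding (our sketch, not the authors' code): the complete-case risk difference stays essentially unbiased when binary outcomes are missing completely at random.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    true_rd, p_ctrl, miss = 0.10, 0.20, 0.30   # RD, control risk, missingness
    estimates = []
    for _ in range(5000):
        n = 200                                # participants per arm
        y0 = rng.random(n) < p_ctrl            # control outcomes
        y1 = rng.random(n) < p_ctrl + true_rd  # treatment outcomes
        keep0 = rng.random(n) > miss           # MCAR observation indicators
        keep1 = rng.random(n) > miss
        estimates.append(y1[keep1].mean() - y0[keep0].mean())
    print(np.mean(estimates))                  # ~0.10: unbiased under MCAR
    ```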

  3. Improvements for retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Husi, L.; Bréon, F. M.; Ma, R.; Chen, L.; Wang, Z.

    2017-12-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that sub-grid-scale variability in the cloud droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases into the CDR (about 1.5 µm) and effective variance (EV) estimates. Nevertheless, sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (>15 µm) and to reduce the uncertainties caused by cloud heterogeneity. An optimal resolution of 0.8° is determined by weighing the number of successful retrievals against cloud horizontal homogeneity. The improved algorithm is applied to POLDER measurements from 2008, and we further compare our retrievals with cloud effective radius estimates from the Moderate Resolution Imaging Spectroradiometer (MODIS). The results indicate that, on a global scale, the cloud effective radius and effective variance are larger over the central oceans than over inland and coastal areas. Over heavily polluted regions, cloud droplets have small effective radii and narrow distributions due to the influence of aerosol particles.

  4. Concentration comparison of selected constituents between groundwater samples collected within the Missouri River alluvial aquifer using purge and pump and grab-sampling methods, near the city of Independence, Missouri, 2013

    USGS Publications Warehouse

    Krempa, Heather M.

    2015-10-29

    Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump sample concentrations for these elements, which may be because a larger pore-size filter was used for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations of analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats and the application objectives need to be considered when determining acceptable amounts of variation.

  5. A laboratory assessment of the Waveband Integrated Bioaerosol Sensor (WIBS-4) using individual samples of pollen and fungal spore material

    NASA Astrophysics Data System (ADS)

    Healy, David A.; O'Connor, David J.; Burke, Aoife M.; Sodeau, John R.

    2012-12-01

    A bioaerosol sensing instrument referred to as WIBS-4, designed to continuously monitor ambient bioaerosols on-line, has been used to record a multiparameter “signature” from each of a number of Primary Biological Aerosol Particulate (PBAP) samples found in air. These signatures were obtained in a controlled laboratory environment and are based on the size, asymmetry (“shape”) and auto-fluorescence of the particles. Fifteen samples from two separate taxonomic ranks (kingdoms), Plantae (×8) and Fungi (×7), were individually introduced to the WIBS-4 for measurement, along with two non-fluorescing chemical solids, common salt and chalk. Over 2000 individual-particle measurements were recorded for each sample type, and the ability of the WIBS spectroscopic technique to distinguish between chemicals, pollen and fungal spore material was examined by identifying individual PBAP signatures. The results obtained show that WIBS-4 could potentially be a very useful analytical tool for distinguishing between natural airborne PBAP samples, such as the fungal spores, and may play an important role in detecting and discriminating the toxic fungal spore Aspergillus fumigatus from others in real time. If the sizing range of the commercial instrument were increased and the instrument permitted to operate simultaneously in its two sizing ranges, pollen and spores could potentially be discriminated. The data also suggest that the gain sensitivity of the detector would have to be reduced by a factor >5 to routinely record in-range fluorescence measurements for pollen samples.

  6. 3D data processing with advanced computer graphics tools

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott

    2012-09-01

    Often, the 3-D raw data coming from an optical profilometer contain spiky noise and an irregular grid, which make the data difficult to analyze and difficult to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data from an optical profilometer, and by rapidly re-sampling the raw data onto regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
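
    The paper's graphics-pipeline implementation is not described in the abstract; a hedged CPU-side sketch of the two steps, robust spike rejection followed by re-sampling of scattered points onto a regular grid, might look like this (the global MAD threshold assumes spikes are large relative to the real topography):

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    def despike_and_regrid(x, y, z, pixel=0.25, k=5.0):
        """Reject spiky outliers with a robust MAD test, then re-sample the
        scattered profilometer points onto a regular grid."""
        z = np.asarray(z, dtype=float)
        med = np.median(z)
        mad = 1.4826 * np.median(np.abs(z - med))  # Gaussian-consistent MAD
        keep = np.abs(z - med) < k * mad           # drop spikes
        xi = np.arange(x.min(), x.max(), pixel)
        yi = np.arange(y.min(), y.max(), pixel)
        grid_x, grid_y = np.meshgrid(xi, yi)
        return griddata((x[keep], y[keep]), z[keep], (grid_x, grid_y),
                        method='linear')

    rng = np.random.default_rng(4)
    x, y = rng.uniform(0, 10, 2000), rng.uniform(0, 10, 2000)
    z = np.sin(x) + 0.01 * rng.normal(size=2000)
    z[::100] += 50                                 # inject spikes
    print(despike_and_regrid(x, y, z).shape)
    ```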

  7. Very sensitive α-Al2O3:C polycrystals for thermoluminescent dosimetry.

    PubMed

    Fontainha, Críssia Carem Paiva; Alves, Neriene; Ferraz, Wilmar Barbosa; de Faria, Luiz Oliveira

    2018-05-07

    New materials have been widely investigated for ionizing radiation dosimetry in medical procedures. Carbon-doped alumina (α-Al2O3:C) has been reported to be an excellent thermoluminescent (TL) and optically stimulated luminescence (OSL) radiation dosimeter. In the present study, we have synthesized nano- and micro-sized α-Al2O3:C polycrystals doped with different percentages of carbon atoms, aiming to compare their efficiency as TL dosimeters. The dosimetric characteristics for X-ray and gamma fields were investigated. Samples doped with different amounts of carbon atoms were sintered under different atmosphere conditions, at temperatures ranging from 1300 °C to 1750 °C. Among the investigated samples, the micro-sized alumina doped with 0.01% carbon and sintered at 1700 °C under a reducing atmosphere presented a very high TL output. The main TL peak is centered at 250 °C and behaves linearly with photon dose in the range 0.02-5000 mGy, with a correlation coefficient very close to one (0.99991). Samples produced using nanosized alumina showed a much lower TL output compared to the samples with microsized alumina. The micro-sized alumina obtained by the methodology used in this work is a suitable candidate for application in X and gamma radiation dosimetry. Copyright © 2018. Published by Elsevier Ltd.

  8. Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.

    NASA Astrophysics Data System (ADS)

    Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat

    2017-04-01

    Electron backscatter diffraction (EBSD) has significantly extended our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade, and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis, as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature, and it is important to verify that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has grain sizes from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus, extending EBSD analysis to larger sample sizes, so as to include representative microstructures, is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion, and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system, which limits sample size. We transfer samples through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. A long optimal working distance for EBSD (around 30 mm for the Otago cryo-EBSD facility), achieved by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory, a sample up to 100 mm perpendicular to the tilt axis by 150 mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50 mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm they will start to sublime and the quality of the EBSD data will degrade. Large samples need to be relatively thin (about 5 mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility, samples of up to 40 mm by 40 mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60 mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.

  9. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  10. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.
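
    On entirely synthetic stand-in data (the study's sample came from a techno-economic optimization model, not this toy generator), the two-stage analysis described above, a model for economic viability plus a model for optimal size, can be sketched as:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 500
    demand_charge = rng.uniform(2, 40, n)     # $/kW-month (hypothetical)
    peak_to_mean = rng.uniform(1.2, 4.0, n)   # load-profile "peakiness"
    latent = (2.0 * demand_charge + 30 * (peak_to_mean - 1) - 60
              + rng.normal(0, 15, n))         # latent optimal battery size
    viable = (latent > 0).astype(int)         # economically viable sites

    X = sm.add_constant(np.column_stack([demand_charge, peak_to_mean]))
    print(sm.Logit(viable, X).fit(disp=0).params)        # drivers of viability
    print(sm.OLS(latent[viable == 1],
                 X[viable == 1]).fit().params)           # drivers of size
    ```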

  11. Effectiveness of three best management practices for highway-runoff quality along the Southeast Expressway, Boston, Massachusetts

    USGS Publications Warehouse

    Smith, Kirk P.

    2002-01-01

    Best management practices (BMPs) near highways are designed to reduce the amount of suspended sediment and associated constituents, including debris and litter, discharged from the roadway surface. The effectiveness of a deep-sumped hooded catch basin, three 2-chambered 1,500-gallon oil-grit separators, and mechanized street sweeping in reducing sediment and associated constituents was examined along the Southeast Expressway (Interstate Route 93) in Boston, Massachusetts. Repeated observations of the volume and distribution of bottom material in the oil-grit separators, including data on particle-size distributions, were compared to data from bottom material deposited during the initial 3 years of operation. The performance of catch-basin hoods and the oil-grit separators in reducing floating debris was assessed by examining the quantity of material retained by each structural BMP compared to the quantity of material retained by and discharged from the oil-grit separators, which received flow from the catch basins. The ability of each structural BMP to reduce suspended-sediment loads was assessed by examining (a) the difference in the concentrations of suspended sediment in samples collected simultaneously from the inlet and outlet of each BMP, and (b) the difference between inlet loads and outlet loads during a 14-month monitoring period for the catch basin and one separator, and a 10-month monitoring period for the second separator. The third separator was not monitored continuously; instead, samples were collected from it during three visits separated in time by several months. Suspended-sediment loads for the entire study area were estimated on the basis of the long-term average annual precipitation and the estimated inlet and outlet loads of two of the separators. The effects of mechanized street sweeping were assessed by evaluating the differences between suspended-sediment loads before and after street sweeping, relative to storm precipitation totals, and by comparing the particle-size distributions of sediment samples collected from the sweepers to bottom-material samples collected from the structural BMPs. A mass-balance calculation was used to quantify the accuracy of the estimated sediment-removal efficiency for each structural BMP. The ability of each structural BMP to reduce concentrations of inorganic and organic constituents was assessed by determining the differences in concentrations between the inlets and outlets of the BMPs for four storms. The inlet flows of the separators were sampled during five storms for analysis of fecal-indicator bacteria. The particle-size distribution of bottom material found in the first and second chambers of the separators was similar for all three separators. Consistent collection of floatable debris at the outlet of one separator during 12 storms suggests that floatable debris were not indefinitely retained. Concentrations of suspended sediment in discrete samples of runoff collected from the inlets of the two separators ranged from 8.5 to 7,110 mg/L. Concentrations of suspended sediment in discrete samples of runoff collected from the outlets of the separators ranged from 5 to 2,170 mg/L. The 14-month sediment-removal efficiency was 35 percent for one separator, and 28 percent for the second separator. In the combined-treatment system in this study, where catch basins provided primary suspended-sediment treatment, the separators reduced the mass of the suspended sediment from the pavement by about an additional 18 percent.
The concentrations of suspended sediment in discrete samples of runoff collected from the inlet of the catch basin ranged from 32 to 13,600 mg/L. Concentrations of suspended sediment in discrete samples of runoff collected from the outlet of the catch basin ranged from 25.7 to 7,030 mg/L. The sediment-removal efficiency for individual storms during the 14-month monitoring period for the deep-sumped hooded catch basin was 39 percent. The concentrations of 29 in

  12. Research and development of a luminol-carbon monoxide flow system

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1977-01-01

    Adaptation of the luminol-carbon monoxide injection system to a flowing-type system is reported. Analysis of actual wastewater samples was carried out and revealed that bacteria can be associated with particles greater than 10 microns in size in samples such as mixed liquor. Research into the luminol reactive oxidation state indicates that oxidized iron porphyrins, cytochrome-c in particular, produce more luminol chemiluminescence than the reduced form. Correlation exists between the extent of porphyrin oxidation and relative chemiluminescence. In addition, the porphyrin nucleus is apparently destroyed under the current chemiluminescent reaction conditions.

  13. Chemically stabilized reduced graphene oxide/zirconia nanocomposite: synthesis and characterization

    NASA Astrophysics Data System (ADS)

    Sagadevan, Suresh; Zaman Chowdhury, Zaira; Enamul Hoque, Md; Podder, Jiban

    2017-11-01

    In this research, a chemical method was used to fabricate reduced graphene oxide/zirconia (rGO/ZrO2) nanocomposite. X-ray diffraction (XRD) analysis was carried out to examine the crystalline structure of the nanocomposites. The nanocomposite prepared here has an average crystallite size of 14 nm. The surface morphology was observed using scanning electron microscopy (SEM) coupled with energy-dispersive spectroscopy (EDS) to detect the chemical elements over the surface of the nanocomposites. High-resolution transmission electron microscopy (HR-TEM) was carried out to determine the particle size and shape of the nanocomposites. The optical properties of the prepared samples were determined using the UV-visible absorption spectrum. The functional groups were identified using FTIR and Raman spectroscopic analysis. An efficient, cost-effective and properly optimized synthesis process for the rGO/ZrO2 nanocomposite can ensure the presence of an infiltrating graphene network inside the ZrO2 matrix, substantially enhancing the electrical properties of the hybrid composite. Thus the dielectric constant, dielectric loss and AC conductivity of the prepared sample were measured at various frequencies and temperatures. The analytical results obtained here confirmed the homogeneous dispersion of ZrO2 nanostructures over the surface of reduced graphene oxide nanosheets. Overall, the research demonstrated that the rGO/ZrO2 nano-hybrid structure fabricated here can be considered a promising candidate for applications in nanoelectronics and optoelectronics.

  14. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
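
    To make the design-effect arithmetic concrete, the sketch below computes a per-arm sample size for a binary-outcome cluster crossover trial. It uses a generic balanced-cluster design effect of the form 1 + (m - 1)*icc_wp - m*icc_bp, in the spirit of earlier published work on cluster crossover designs, rather than the unbalanced-cluster formula derived in this article; every parameter value is a hypothetical stand-in.

    ```python
    from scipy.stats import norm

    def cluster_crossover_n_per_arm(p0, p1, m, icc_wp, icc_bp, alpha=0.05, power=0.80):
        """Subjects per arm: individually randomised n times a design effect.

        icc_wp: within-cluster-within-period correlation
        icc_bp: within-cluster-between-period correlation
        """
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        p_bar = (p0 + p1) / 2
        n_independent = 2 * z**2 * p_bar * (1 - p_bar) / (p0 - p1) ** 2
        design_effect = 1 + (m - 1) * icc_wp - m * icc_bp
        return n_independent * design_effect

    # ICU-like scenario: large clusters, small correlations (values hypothetical)
    print(round(cluster_crossover_n_per_arm(0.10, 0.088, m=500, icc_wp=0.01, icc_bp=0.008)))
    ```

    Note how the between-period correlation offsets most of the within-period inflation, which is what makes the crossover attractive relative to a parallel cluster design.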

  15. Sample Design, Sample Augmentation, and Estimation for Wave 2 of the NSHAP

    PubMed Central

    English, Ned; Pedlow, Steven; Kwok, Peter K.

    2014-01-01

    Objectives. The sample for the second wave (2010) of the National Social Life, Health, and Aging Project (NSHAP) was designed to increase the scientific value of the Wave 1 (2005) data set by revisiting sample members 5 years after their initial interviews and augmenting this sample where possible. Method. There were 2 important innovations. First, the scope of the study was expanded by collecting data from coresident spouses or romantic partners. Second, to maximize the representativeness of the Wave 2 data, nonrespondents from Wave 1 were again approached for interview in the Wave 2 sample. Results. The overall unconditional response rate for the Wave 2 panel was 74%; the conditional response rate of Wave 1 respondents was 89%; the conditional response rate of partners was 84%; and the conversion rate for Wave 1 nonrespondents was 26%. Discussion. The inclusion of coresident partners enhanced the study by allowing the examination of how intimate, household relationships are related to health trajectories and by augmenting the size of the NSHAP sample for this and future waves. The uncommon strategy of returning to Wave 1 nonrespondents reduced potential bias by ensuring that, to the extent possible, the whole of the original sample forms the basis for the field effort. NSHAP Wave 2 achieved its field objectives of consolidating the panel, recruiting their resident spouses or romantic partners, and converting a significant proportion of Wave 1 nonrespondents. PMID:25360016

  16. Estimating the Latent Number of Types in Growing Corpora with Reduced Cost-Accuracy Trade-Off

    ERIC Educational Resources Information Center

    Hidaka, Shohei

    2016-01-01

    The number of unique words in children's speech is one of the most basic statistics indicating their language development. We may, however, face difficulties when trying to accurately evaluate the number of unique words in a child's growing corpus over time with a limited sample size. This study proposes a novel technique to estimate the latent number…

  17. A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming

    PubMed Central

    Chen, Chun-Chi; Lin, Shih-Hao

    2013-01-01

    This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm2 in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ± 0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403

  18. Searching for microbial protein over-expression in a complex matrix using automated high throughput MS-based proteomics tools.

    PubMed

    Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob

    2013-03-10

    In the discovery of new enzymes, genomic and cDNA expression libraries containing thousands of differential clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow for high throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix, based on mass spectrometry techniques. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS2 approach including MS2 protein identification was developed for microorganisms with a complex protein secretome, with a throughput of ∼2000 samples per week. TCA-induced protein precipitation, enhanced by addition of bovine serum albumin, is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detection of successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Efficient mitigation strategies for epidemics in rural regions.

    PubMed

    Scoglio, Caterina; Schumm, Walter; Schumm, Phillip; Easton, Todd; Roy Chowdhury, Sohini; Sydney, Ali; Youssef, Mina

    2010-07-13

    Containing an epidemic at its origin is the most desirable mitigation. Epidemics have often originated in rural areas, with rural communities among the first affected. Disease dynamics in rural regions have received limited attention, and results of general studies cannot be directly applied since population densities and human mobility factors are very different in rural regions from those in cities. We create a network model of a rural community in Kansas, USA, by collecting data on the contact patterns and computing rates of contact among a sampled population. We model the impact of different mitigation strategies by detecting closely connected groups of people and frequently visited locations. Within those groups and locations, we compare the effectiveness of random and targeted vaccinations using a Susceptible-Exposed-Infected-Recovered compartmental model on the contact network. Our simulations show that targeted vaccination of only 10% of the sampled population reduced the size of the epidemic by 34.5%. Additionally, if 10% of the population visiting one of the most popular locations is randomly vaccinated, the epidemic size is reduced by 19%. Our results suggest a new implementation of a highly effective strategy for targeted vaccinations through the use of popular locations in rural communities.
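
    The simulation itself is easy to prototype. The sketch below runs a discrete-time SEIR process on a synthetic small-world graph standing in for the surveyed contact network and compares epidemic sizes with and without vaccinating the best-connected 10% of nodes; the graph model, rates, and seed count are illustrative assumptions, not the study's calibrated values.

    ```python
    import random
    import networkx as nx

    def seir_outbreak_size(G, beta, sigma, gamma, vaccinated, seeds, steps=300):
        """Discrete-time SEIR on a contact network; returns the number ever infected."""
        state = {v: "S" for v in G}
        for v in vaccinated:
            state[v] = "V"                    # vaccinated: removed from transmission
        for v in seeds:
            if state[v] == "S":
                state[v] = "E"
        for _ in range(steps):
            nxt = dict(state)
            for v in G:
                if state[v] == "S":
                    k = sum(state[u] == "I" for u in G[v])
                    if k and random.random() < 1 - (1 - beta) ** k:
                        nxt[v] = "E"
                elif state[v] == "E" and random.random() < sigma:
                    nxt[v] = "I"
                elif state[v] == "I" and random.random() < gamma:
                    nxt[v] = "R"
            state = nxt
        return sum(s in ("E", "I", "R") for s in state.values())

    random.seed(1)
    G = nx.watts_strogatz_graph(1000, 8, 0.1)                # stand-in contact network
    hubs = set(sorted(G, key=G.degree, reverse=True)[:100])  # best-connected 10%
    seeds = random.sample(list(G), 5)
    print(seir_outbreak_size(G, 0.05, 0.3, 0.2, set(), seeds))   # no vaccination
    print(seir_outbreak_size(G, 0.05, 0.3, 0.2, hubs, seeds))    # targeted vaccination
    ```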

  20. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
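
    The article's spreadsheets implement closed-form interval calculations; as a rough stand-in for the same inference, the sketch below fits probit dose-response curves to a control group and a countermeasure group, forms the LD50 ratio (the DRF), and bootstraps a confidence interval for it. All doses and outcomes are simulated purely for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def ld50(dose, died):
        """Probit fit P(death) = Phi(b0 + b1*dose); LD50 = -b0/b1."""
        fit = sm.GLM(died, sm.add_constant(dose),
                     family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
        b0, b1 = fit.params
        return -b0 / b1

    rng = np.random.default_rng(0)
    # hypothetical per-animal radiation doses (Gy) and 30-day death indicators
    dose_c = np.repeat([7.0, 7.5, 8.0, 8.5], 10)
    dose_t = np.repeat([8.0, 8.5, 9.0, 9.5], 10)
    die_c = rng.binomial(1, np.repeat([0.1, 0.3, 0.6, 0.9], 10))
    die_t = rng.binomial(1, np.repeat([0.1, 0.3, 0.6, 0.9], 10))

    drf = ld50(dose_t, die_t) / ld50(dose_c, die_c)
    boot = []
    for _ in range(2000):
        i = rng.integers(0, len(dose_c), len(dose_c))
        j = rng.integers(0, len(dose_t), len(dose_t))
        try:
            boot.append(ld50(dose_t[j], die_t[j]) / ld50(dose_c[i], die_c[i]))
        except Exception:
            continue                          # skip resamples where the fit fails
    print(drf, np.percentile(boot, [2.5, 97.5]))
    ```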

  1. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
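
    As a toy version of the last point, the sketch below draws a random sample while enforcing a minimum separation between sites, one simple way to limit spatial autocorrelation among selected pixels; the grid, sample size, and distance threshold are arbitrary stand-ins rather than values from the study.

    ```python
    import math
    import random

    def min_distance_sample(points, n, d_min, seed=0):
        """Greedily draw up to n points that are pairwise at least d_min apart."""
        rng = random.Random(seed)
        pool = list(points)
        rng.shuffle(pool)
        chosen = []
        for p in pool:
            if all(math.dist(p, q) >= d_min for q in chosen):
                chosen.append(p)
                if len(chosen) == n:
                    break
        return chosen

    # candidate pixel centres on a 100 x 100 grid (arbitrary units)
    grid = [(x + 0.5, y + 0.5) for x in range(100) for y in range(100)]
    sample = min_distance_sample(grid, n=30, d_min=10.0)
    print(len(sample), sample[:3])
    ```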

  2. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Studies of cosmogenic in-situ 14CO and 14CO2 produced in terrestrial and extraterrestrial samples: experimental procedures and applications

    NASA Astrophysics Data System (ADS)

    Lal, D.; Jull, A. J. T.

    1994-06-01

    We have developed an experimental procedure for quantitative extraction of cosmogenic in-situ 14C produced in terrestrial and extraterrestrial samples, in the two chemical forms, 14CO and 14CO2, in which it is found to be present in these samples. The technique is based on wet digestion of the sample in vacuo with hydrofluoric acid at 60-80°C in a Kel-F® vessel. Kel-F is a homopolymer of chlorotrifluoroethylene. The procedures and the digestion vessel sizes used allow convenient extraction of 14C activity from samples of 50 mg to 50 g weight. Procedure blanks were reduced considerably by the experience gained with the system, and can be reduced further. We determined that most of the in-situ 14C activity was present in the CO phase (> 60%) in the case of both terrestrial quartz and bulk samples of meteorites, analogous to the case of in-situ production of 14C in ice. Some results of measurements of 14C activities in meteorites and in terrestrial samples are presented. The latter include several samples which have been studied earlier for in-situ 10Be (and 26Al) concentrations, and allow us to determine relative 14C and 10Be production rates in quartz.

  4. Using Sustainable Development as a Competitive Strategy

    NASA Astrophysics Data System (ADS)

    Spearman, Pat

    Sustainable development reduces construction waste by 43%, generating 50% cost savings. Residential construction executives lacking adequate knowledge regarding the benefits of sustainable development practices are at a competitive disadvantage. Drawing from the diffusion of innovation theory, the purpose of this qualitative case study was to explore knowledge acquisition within the bounds of sustainable residential construction. The purposive sample of 11 executive decision makers fulfilled the sample size requirements and enabled the extraction of meaningful data. Participants were members of the National Home Builders Association and had a minimum of 5 years of experience in residential construction. The research question addressed how to improve knowledge acquisition relating to the cost benefits of building green homes and increase the adoption rate of sustainable development among residential builders. Data were collected via semistructured telephone interviews, field observation, and document analysis. Transcribed data were validated via respondent validation, coded into 5 initial categories aligned to the focus of the research, then reduced to 3 interlocking themes of environment, competitive advantage, and marketing. Recommendations include developing comprehensive public policies, horizontal and vertical communications networks, and green banks to capitalize sustainable development programs, to improve the diffusion of green innovation as a competitive advantage strategy. Business leaders could benefit from these data by integrating sustainable development practices into their business processes. Sustainable development reduces operational costs, increases competitive advantage for builders, and reduces greenhouse gas emissions. Implications for social change include increased energy independence through conservation and a legislative policy template for comprehensive energy strategies. A comprehensive energy strategy promotes economic development and technological gains in all business sectors within the energy industry, and reduces energy costs for consumers.

  5. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  6. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  7. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  8. A Feedforward Adaptive Controller to Reduce the Imaging Time of Large-Sized Biological Samples with an SPM-Based Multiprobe Station

    PubMed Central

    Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel

    2012-01-01

    The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that going faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a 4× factor. PMID:22368491

  9. Heavy metal concentrations in particle size fractions from street dust of Murcia (Spain) as the basis for risk assessment.

    PubMed

    Acosta, Jose A; Faz, Ángel; Kalbitz, Karsten; Jansen, Boris; Martínez-Martínez, Silvia

    2011-11-01

    Street dust has been sampled from six different types of land use of the city of Murcia (Spain). The samples were fractionated into eleven particle size fractions (<2, 2-10, 10-20, 20-50, 50-75, 75-106, 106-150, 150-180, 180-425, 425-850 μm and 850-2000 μm) and analyzed for Pb, Cu, Zn and Cd. The concentrations of these four potentially toxic metals were assessed, as well as the effect of particle size on their distribution. A severe enrichment of all metals was observed for all land-uses (industrial, suburban, urban and highways), with the concentration of all metals affected by the type of land-use. Coarse and fine particles in all cases showed concentrations of metals higher than those found in undisturbed areas. However, the results indicated a preferential partitioning of metals in fine particle size fractions in all cases, following a logarithmic distribution. The accumulation in the fine fractions was higher when the metals had an anthropogenic origin. The strong overrepresentation of metals in particles <10 μm indicates that if the finest fractions are removed by a vacuum-assisted dry sweeper or a regenerative-air sweeper the risk of metal dispersion and its consequent risk for humans will be highly reduced. Therefore, we recommend that risk assessment programs include monitoring of metal concentrations in dust where each land-use is separately evaluated. The finest particle fractions should be examined explicitly in order to apply the most efficient measures for reducing the risk of inhalation and ingestion of dust for humans and risk for the environment.

  10. Moderating the Covariance Between Family Member’s Substance Use Behavior

    PubMed Central

    Eaves, Lindon J.; Neale, Michael C.

    2014-01-01

    Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox. PMID:24647834

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlgemuth, J.; Bokria, J.; Gu, X.

    Polymeric encapsulation materials may change size when processed at typical module lamination temperatures. The relief of residual strain, trapped during the manufacture of encapsulation sheet, can affect module performance and reliability. For example, displaced cells and interconnects can lead to cell fracture; broken interconnects (open circuits and ground faults); delamination at interfaces; and void formation. A standardized test for the characterization of change in linear dimensions of encapsulation sheet has been developed and verified. The IEC 62788-1-5 standard quantifies the maximum change in linear dimensions that may occur, to allow for process control of size change. Developments incorporated into the Committee Draft (CD) of the standard as well as the assessment of the repeatability and reproducibility of the test method are described here. No pass/fail criteria are given in the standard; rather, a repeatable protocol to quantify the change in dimension is provided to aid those working with encapsulation. The round-robin experiment described here identified that the repeatability and reproducibility of measurements are on the order of 1%. Recent refinements to the test procedure to improve repeatability and reproducibility include: the use of a convection oven to improve the thermal equilibration time constant and its uniformity; well-defined measurement locations that reduce the effects of sampling size and location relative to the specimen edges; a standardized sand substrate, readily obtained, to reduce friction that would otherwise complicate the results; defined specimen sampling, so that material is examined at known sites across the width and length of rolls; and examination of the encapsulation at the manufacturer's recommended processing temperature, except when a cross-linking reaction may limit the size change. EVA, for example, should be examined at 100 °C, between its melt transition (occurring up to 80 °C) and the onset of cross-linking (often at 100 °C).

  12. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Size and Shape of Solid Fuel Diffusion Flames in Very Low Speed Flows. M.S. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Foutch, David W.

    1987-01-01

    The effect of very low speed forced flows on the size and shape of a solid fuel diffusion flame is investigated experimentally. Flows due to natural convection are eliminated by performing the experiment in low gravity. The range of velocities tested is 1.5 cm/s to 6.3 cm/s and the mole fraction of oxygen in the O2/N2 atmosphere ranges from 0.15 to 0.19. The flames did not reach steady state in the 5.2 sec to which the experiment was limited. Despite limited data, trends in the transient flame temperature and, by means of extrapolation, the steady state flame size are deduced. As the flow velocity is reduced, the flames move farther from the fuel surface, and the transient flame temperature is lowered. As the oxygen concentration is reduced, the flames move closer to the fuel sample and the transient flame temperature is reduced. With standoff distances up to 8.5 ± 0.7 mm and thicknesses around 1 or 2 mm, these flames are much weaker than flames observed at normal gravity. Based on the performance of the equipment and several qualitative observations, suggestions for future work are made.

  14. The effect of ultrasound on particle size, color, viscosity and polyphenol oxidase activity of diluted avocado puree.

    PubMed

    Bi, Xiufang; Hemar, Yacine; Balaban, Murat O; Liao, Xiaojun

    2015-11-01

    The effect of ultrasound treatment on particle size, color, viscosity, polyphenol oxidase (PPO) activity and microstructure in diluted avocado puree was investigated. The treatments were carried out at 20 kHz (375 W/cm²) for 0-10 min. The surface mean diameter (D[3,2]) was reduced to 13.44 μm from an original value of 52.31 μm by ultrasound after 1 min. Higher L* and ΔE values and a lower a* value were observed in ultrasound-treated samples. The avocado puree dilution followed pseudoplastic flow behavior, and the viscosity of diluted avocado puree (at 100 s⁻¹) after ultrasound treatment for 1 min was 6.0 and 74.4 times higher than the control samples for dilution levels of 1:2 and 1:9, respectively. PPO activity greatly increased under all treatment conditions. A maximum increase of 25.1%, 36.9% and 187.8% in PPO activity was found in samples with dilution ratios of 1:2, 1:5 and 1:9, respectively. The increase in viscosity and measured PPO activity might be related to the decrease in particle size. The microscopy images further confirmed that ultrasound treatment induced disruption of the avocado puree structure. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
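
    The scale of the reported savings follows from a standard two-sample power calculation: if enrichment increases the expected on-trial decline while the outcome SD stays similar, the required sample size falls with the square of the effect-size ratio. The sketch below reproduces the arithmetic with hypothetical ADAS-Cog numbers, not the paper's estimates.

    ```python
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Normal-approximation sample size per arm for a two-sample comparison."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    # hypothetical 2-year ADAS-Cog treatment effects, all-comers vs. HCV-enriched
    all_comers = n_per_arm(delta=1.4, sd=6.0)
    enriched = n_per_arm(delta=2.2, sd=6.0)
    print(round(1 - enriched / all_comers, 2))   # ~0.6, i.e. a ~60% smaller trial
    ```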

  16. Moessbauer Characterization of Magnetite/Polyaniline Magnetic Nanocomposite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Anselmo F. R.; Faria, Fernando S. E. D. V.; Lopez, Jorge L.

    2010-12-02

    Aniline surface-coated Fe3O4 nanoparticles have been successfully synthesized by UV irradiation, varying the time and the acid medium (HCl, HNO3, or H2SO4). The synthesized material represents a promising platform for application in nerve regeneration. XRD patterns are consistent with the crystalline structure of magnetite. Nevertheless, for UV irradiation times longer than 2 h, extra XRD lines reveal the presence of goethite. The mean crystallite size of uncoated particles is estimated to be 25.4 nm, while that size is reduced to 19.9 nm for the sample UV-irradiated in HCl medium for 4 h. Moessbauer spectra of uncoated nanoparticles reveal the occurrence of thermal relaxation at room temperature, while the 77 K Moessbauer spectrum suggests the occurrence of electron localization effects similar to those expected in bulk magnetite. The Moessbauer spectra of the sample UV-irradiated in HCl medium for 4 h confirm the presence of the goethite phase. For this sample, the thermal relaxation is more evident, since the room temperature spectrum shows a larger spectral area for the nonmagnetic component due to the smaller crystallite size. Meanwhile, the 77 K Moessbauer spectrum suggests the absence of the electron localization effect above 77 K.

  17. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    PubMed Central

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

    Purpose Age at menopause, a major marker in reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making results applicable to women with hysterectomy, and reduces bias. PMID:21441037

  18. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
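
    The Cells Analyzer method itself is not described in detail here, but the idea of a customized sample size can be sketched with the usual normal-approximation bound on the relative error of a mean: RE ≈ z·CV/√n, so n ≥ (z·CV/RE)². A minimal sketch, with a hypothetical cell-level coefficient of variation:

    ```python
    import math

    def required_cells(cv, re_target=0.05, reliability=0.95):
        """Cells to count so the mean cell measure has relative error <= re_target."""
        z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[reliability]
        return math.ceil((z * cv / re_target) ** 2)

    # e.g. a coefficient of variation of 0.30 in cell area (hypothetical)
    print(required_cells(0.30))   # -> 139 cells
    ```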

  19. Niobium-titanium superconductors produced by powder metallurgy having artificial flux pinning centers

    DOEpatents

    Jablonski, Paul D.; Larbalestier, David C.

    1993-01-01

    Superconductors formed by powder metallurgy have a matrix of niobium-titanium alloy with discrete pinning centers distributed therein which are formed of a compatible metal. The artificial pinning centers in the Nb-Ti matrix are reduced in size by processing steps to sizes on the order of the coherence length, typically in the range of 1 to 10 nm. To produce the superconductor, powders of body-centered cubic Nb-Ti alloy and the second-phase flux pinning material, such as Nb, are mixed in the desired percentages. The mixture is then isostatically pressed, sintered at a selected temperature and for a selected time to produce a cohesive structure having desired characteristics without undue chemical reaction; the sintered billet is reduced in size by deformation, such as by swaging; the swaged sample receives heat treatment, recrystallization and additional swaging, if necessary, and is then sheathed in a normal conducting sheath; and the sheathed material is drawn into a wire. The resulting superconducting wire has second-phase flux pinning centers distributed therein which provide an enhanced critical current density (Jc) due to the flux pinning effects.

  20. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
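
    A minimal version of such a calculator is sketched below. It assumes the simple design-effect approximation 1 + ICC·p for a mix of independent infants and twin pairs, where p is the proportion of infants who are twins; the published tool may weight singletons and pairs more exactly, and all inputs here are hypothetical.

    ```python
    import math
    from scipy.stats import norm

    def n_infants_per_arm(delta, sd, icc, p_twin, alpha=0.05, power=0.80):
        """Per-arm sample size for a continuous outcome, inflated for twin clustering."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_independent = 2 * (z * sd / delta) ** 2
        design_effect = 1 + icc * p_twin      # approximation; cluster size 2 for twins
        return math.ceil(n_independent * design_effect)

    # detect a 0.3 SD difference when 20% of infants are twins with ICC = 0.5
    print(n_infants_per_arm(delta=0.3, sd=1.0, icc=0.5, p_twin=0.2))
    ```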

  1. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, by simulation, under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution-of-the-product and bootstrap methods outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for product-method sample size determination in longitudinal mediation designs.
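
    Of the three tests compared, Sobel's is the simplest to state: the mediated effect a·b is divided by its first-order delta-method standard error and referred to a standard normal. A minimal sketch with hypothetical path estimates:

    ```python
    import math
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """Sobel z and two-sided p for the mediated effect a*b."""
        se_ab = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
        z = a * b / se_ab
        return z, 2 * norm.sf(abs(z))

    # hypothetical paths: treatment -> mediator (a), mediator -> outcome (b)
    print(sobel_test(a=0.35, se_a=0.10, b=0.40, se_b=0.12))
    ```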

  2. Exploring the effect of sleep and reduced interference on different forms of declarative memory.

    PubMed

    Schönauer, Monika; Pawlizki, Annedore; Köck, Corinna; Gais, Steffen

    2014-12-01

    Many studies have found that sleep benefits declarative memory consolidation. However, fundamental questions on the specifics of this effect remain topics of discussion. It is not clear which forms of memory are affected by sleep and whether this beneficial effect is partly mediated by passive protection against interference. Moreover, a putative correlation between the structure of sleep and its memory-enhancing effects is still being discussed. In three experiments, we tested whether sleep differentially affects various forms of declarative memory. We varied verbal content (verbal/nonverbal), item type (single/associate), and recall mode (recall/recognition, cued/free recall) to examine the effect of sleep on specific memory subtypes. We compared within-subject differences in memory consolidation between intervals including sleep, active wakefulness, or quiet meditation, which reduced external as well as internal interference and rehearsal. Forty healthy adults aged 18-30 y, and 17 healthy adults aged 24-55 y with extensive meditation experience participated in the experiments. All types of memory were enhanced by sleep if the sample size provided sufficient statistical power. Smaller sample sizes showed an effect of sleep if a combined measure of different declarative memory scales was used. In a condition with reduced external and internal interference, performance was equal to one with high interference. Here, memory consolidation was significantly lower than in a sleep condition. We found no correlation between sleep structure and memory consolidation. Sleep does not preferentially consolidate a specific kind of declarative memory, but consistently promotes overall declarative memory formation. This effect is not mediated by reduced interference. © 2014 Associated Professional Sleep Societies, LLC.

  3. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
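
    The point is easy to verify numerically: the margin of error of a sample proportion is driven by the absolute sample size n, and the population size N enters only through a finite population correction that is negligible unless the sample is a sizeable fraction of the pot. A quick sketch:

    ```python
    import math

    def margin_of_error(n, p=0.5, N=None, z=1.96):
        """95% margin of error for a proportion; N=None means an 'infinite' population."""
        se = math.sqrt(p * (1 - p) / n)
        if N is not None:
            se *= math.sqrt((N - n) / (N - 1))   # finite population correction
        return z * se

    for N in (10_000, 1_000_000, 300_000_000):   # three very different 'pots'
        print(N, round(margin_of_error(1000, N=N), 4))   # ~0.03 every time
    ```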

  4. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reported sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
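
    For context, one common a priori calculation for diagnostic accuracy studies is Buderer's formula, which sizes the study so that sensitivity (or specificity) is estimated to a desired precision at the expected prevalence; the inputs below are hypothetical.

    ```python
    import math

    def n_for_sensitivity(sens, prevalence, precision=0.05, z=1.96):
        """Buderer's formula: total subjects to estimate sensitivity +/- precision."""
        n_diseased = z**2 * sens * (1 - sens) / precision**2
        return math.ceil(n_diseased / prevalence)

    # anticipated sensitivity 0.85 at the survey's median prevalence of 50.5%
    print(n_for_sensitivity(0.85, prevalence=0.505))   # -> 388 subjects
    ```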

  5. Production and Characterization of Bulk MgB2 Material made by the Combination of Crystalline and Carbon Coated Amorphous Boron Powders

    NASA Astrophysics Data System (ADS)

    Hiroki, K.; Muralidhar, M.; Koblischka, M. R.; Murakami, M.

    2017-07-01

    The object of this investigation is to reduce the cost of bulk production and at the same time to increase the critical current performance of bulk MgB2 material. High-purity commercial powders of Mg metal (99.9% purity) and two types of boron, crystalline (99% purity) and 16.5 wt% carbon-coated, nanometer-sized amorphous (98.5% purity), were mixed in a nominal composition of MgB2 to reduce the boron cost and to see the effect on the superconducting and magnetic properties. Several samples were produced by mixing the crystalline boron and the carbon-coated, nanometer-sized amorphous boron powders in varying ratios (50:50, 60:40, 70:30, 80:20, 90:10) and synthesized in a single step by solid-state reaction at around 800 °C for 3 h in a pure argon atmosphere. The magnetization measurements exhibited a sharp superconducting transition, with Tc,onset between 37.2 K and 38.6 K, for the bulk samples prepared using the mixture of crystalline boron and 16.5% carbon-coated amorphous boron. The critical current density at higher magnetic fields was improved by the addition of carbon-coated boron to crystalline boron in a ratio of 80:20. The highest Jc values, around 215,000 A/cm2 in self-field and 37,000 A/cm2 at 2 T, were recorded at 20 K for the sample with a ratio of 80:20. The present results clearly demonstrate that the bulk MgB2 performance can be improved by adding carbon-coated nano boron to crystalline boron, which is attractive for reducing the cost of bulk MgB2 material for several industrial applications.

  6. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
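
    The percentile recommendations translate directly into code. The sketch below uses a chi-square-based one-sided upper confidence limit for the SD from a single pilot sample, one way to implement the "60% UCL of SD" rule; the pilot values are hypothetical.

    ```python
    import math
    from scipy.stats import chi2, norm

    def sd_ucl(s, n, level=0.60):
        """One-sided upper confidence limit for the population SD."""
        return s * math.sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sd / delta) ** 2)

    s_pilot, n_pilot = 44.0, 20                        # hypothetical single pilot sample
    print(n_per_group(22, s_pilot))                    # using the sample SD directly
    print(n_per_group(22, sd_ucl(s_pilot, n_pilot)))   # using its 60% UCL
    ```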

  7. The effect of plasma pre-treatment on NaHCO3 desizing of blended sizes on cotton fabrics

    NASA Astrophysics Data System (ADS)

    Li, Xuming; Qiu, Yiping

    2012-03-01

    The influence of the He/O2 atmospheric pressure plasma jet pre-treatment on subsequent NaHCO3 desizing of blends of starch phosphate and poly(vinyl alcohol) on cotton fabrics is investigated. Atomic force microscopy and scanning electron microscopy analysis indicate that the surface topography of the samples has significantly changed and the surface roughness increases with an increase in plasma exposure time. X-ray photoelectron spectroscopy analysis shows that a larger number of oxygen-containing polar groups are formed on the sized fabric surface after the plasma treatment. The percent desizing ratio (PDR) results indicate that the plasma pretreatment facilitated removal of the blended sizes from the cotton fabrics in the subsequent NaHCO3 treatment, and that the PDR increases with prolonged plasma treatment time. Plasma treatment is a promising pretreatment for desizing of blended sizes due to the dramatically reduced desizing time.

  8. Effect of annealing on particle size, microstructure and gas sensing properties of Mn substituted CoFe2O4 nanoparticles

    NASA Astrophysics Data System (ADS)

    Kumar, E. Ranjith; Kamzin, A. S.; Janani, K.

    2016-11-01

    We report microstructure, morphology and gas-sensor studies of Mn-substituted cobalt ferrite nanoparticles synthesized by a simple evaporation method and an auto-combustion method. The influence of heat treatment on the phase and particle size of the spinel ferrite nanoparticles was determined by X-ray diffraction and Mossbauer spectroscopy. The XRD study reveals that the lattice constant and crystallite size of the samples increase with increasing annealing temperature; the latter was confirmed by the Mossbauer data. The smallest MnCoFe2O4 particle size (~3 nm) was obtained by the auto-combustion method. Spherical nanoparticles were observed by TEM. Furthermore, the conductance response of the Mn-Co ferrite nanomaterial was measured by exposing it to a reducing gas, liquefied petroleum gas (LPG); the material showed a sensor response of ~0.19 at an optimum operating temperature of 250 °C.

  9. Escherichia coli growth under modeled reduced gravity

    NASA Technical Reports Server (NTRS)

    Baker, Paul W.; Meyer, Michelle L.; Leff, Laura G.

    2004-01-01

    Bacteria exhibit varying responses to modeled reduced gravity that can be simulated by clino-rotation. When Escherichia coli was subjected to different rotation speeds during clino-rotation, significant differences between modeled reduced gravity and normal gravity controls were observed only at higher speeds (30-50 rpm). There was no apparent effect of removing samples on the results obtained. When E. coli was grown in minimal medium (at 40 rpm), cell size was not affected by modeled reduced gravity and there were few differences in cell numbers. However, in higher nutrient conditions (i.e., dilute nutrient broth), total cell numbers were higher and cells were smaller under reduced gravity compared to normal gravity controls. Overall, the responses to modeled reduced gravity varied with nutrient conditions; larger surface to volume ratios may help compensate for the zone of nutrient depletion around the cells under modeled reduced gravity.

  10. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    PubMed Central

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra

    2015-01-01

    The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818

  11. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    DOE PAGES

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...

    2015-08-19

    We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.

  12. Virtual reality gaming in the rehabilitation of the upper extremities post-stroke.

    PubMed

    Yates, Michael; Kelemen, Arpad; Sik Lanyi, Cecilia

    2016-01-01

    Occurrences of strokes often result in unilateral upper limb dysfunction. Dysfunctions of this nature frequently persist and can present chronic limitations to activities of daily living. Research into applying virtual reality gaming systems to provide rehabilitation therapy has seen a resurgence. Themes explored in stroke rehabilitation for paretic limbs are action observation and imitation, versatility, intensity and repetition, and preservation of gains. The purpose of this literature review is to compare the various virtual reality gaming modalities in the current literature and ascertain their efficacy; fifteen articles were ultimately selected for review. The literature supports the use of virtual reality gaming rehabilitation therapy as equivalent to traditional therapies or as a successful augmentation to those therapies. While some degree of rigor was displayed in the literature, small sample sizes, variation in study lengths and therapy durations, and unequal controls reduce generalizability and comparability. Future studies should incorporate larger sample sizes and post-intervention follow-up measures.

  13. Composite outcomes in randomized clinical trials: arguments for and against.

    PubMed

    Ross, Sue

    2007-02-01

    Composite outcomes that combine a number of individual outcomes (such as types of morbidity) are frequently used as primary outcomes in obstetrical trials. The main argument for their use is to ensure that trials can answer important clinical questions in a timely fashion, without needing huge sample sizes. Arguments against their use are that composite outcomes may be difficult to use and interpret, leading to errors in sample size estimation, possible contradictory trial results, and difficulty in interpreting findings. Such problems may reduce the credibility of the research, and may impact on the implementation of findings. Composite outcomes are an attractive solution to help to overcome the problem of limited available resources for clinical trials. However, future studies should carefully consider both the advantages and disadvantages before using composite outcomes. Rigorous development and reporting of composite outcomes is essential if the research is to be useful.

  14. Small renal size in newborns with spina bifida: possible causes.

    PubMed

    Montaldo, Paolo; Montaldo, Luisa; Iossa, Azzurra Concetta; Cennamo, Marina; Caredda, Elisabetta; Del Gado, Roberto

    2014-02-01

    Previous studies reported that children with neural tube defects, but without any history of intrinsic renal disease, have small kidneys when compared with age-matched standards of renal growth. The aim of this study was to investigate the possible causes of small renal size in children with spina bifida by examining growth hormone deficiency, physical limitations and hyperhomocysteinemia. The sample included 187 newborns with spina bifida. Renal size in the patients was assessed using the maximum measurement of renal length, and the measurements were compared using the Sutherland nomogram. According to the results, the sample was divided into two groups: a group of 120 patients with small kidneys (under the third percentile) and a control group of 67 newborns with normal kidney size. Plasma total homocysteine was investigated in mothers and in their children. Serum insulin-like growth factor-1 (IGF-1) levels were measured and were normal in both groups. Children and mothers with homocysteine levels >10 μmol/l were more than twice as likely to have small kidneys and to give birth to children with small kidneys, respectively, compared with newborns and mothers with homocysteine levels <10 μmol/l. An inverse correlation was also found between the homocysteine levels of mothers and the kidney sizes of children (r = -0.6109, P ≤ 0.01). It is highly important for mothers with hyperhomocysteinemia to be educated about the benefits of folate supplementation in order to reduce the risk of small renal size and lower renal function in children.

  15. Frail or hale: Skeletal frailty indices in Medieval London skeletons

    PubMed Central

    Crews, Douglas E.

    2017-01-01

    To broaden bioarchaeological applicability of skeletal frailty indices (SFIs) and increase sample size, we propose indices with fewer biomarkers (2–11 non-metric biomarkers) and compare these reduced biomarker SFIs to the original metric/non-metric 13-biomarker SFI. From the 2-11-biomarker SFIs, we choose the index with the fewest biomarkers (6-biomarker SFI), which still maintains the statistical robusticity of a 13-biomarker SFI, and apply this index to the same Medieval monastic and nonmonastic populations, albeit with an increased sample size. For this increased monastic and nonmonastic sample, we also propose and implement a 4-biomarker SFI, comprised of biomarkers from each of four stressor categories, and compare these SFI distributions with those of the non-metric biomarker SFIs. From the Museum of London WORD database, we tabulate multiple SFIs (2- to 13-biomarkers) for Medieval monastic and nonmonastic samples (N = 134). We evaluate associations between these ten non-metric SFIs and the 13-biomarker SFI using Spearman’s correlation coefficients. Subsequently, we test non-metric 6-biomarker and 4-biomarker SFI distributions for associations with cemetery, age, and sex using Analysis of Variance/Covariance (ANOVA/ANCOVA) on larger samples from the monastic and nonmonastic cemeteries (N = 517). For Medieval samples, Spearman’s correlation coefficients show a significant association between the 13-biomarker SFI and all non-metric SFIs. Utilizing a 6-biomarker and parsimonious 4-biomarker SFI, we increase the nonmonastic and monastic samples and demonstrate significant lifestyle and sex differences in frailty that were not observed in the original, smaller sample. Results from the 6-biomarker and parsimonious 4-biomarker SFIs generally indicate similarities in means, explained variation (R2), and associated P-values (ANOVA/ANCOVA) within and between nonmonastic and monastic samples. We show that non-metric reduced biomarker SFIs provide alternative indices for application to other bioarchaeological collections. These findings suggest that a SFI, comprised of six or more non-metric biomarkers available for the specific sample, may have greater applicability than, but comparable statistical characteristics to, the originally proposed 13-biomarker SFI. PMID:28467438

  16. Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sim, Alex; Lee, Dongeun; Wu, K. John

    2016-03-04

    Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue: reducing the size of large streaming data while still obtaining accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data grow so rapidly that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of every 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on recent work on relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records at large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
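
    The abstract does not disclose IDEALEM's actual algorithm, so the sketch below is not it; it only illustrates the baseline idea of replacing static 1-in-N sampling with a sampler whose storage bound is fixed regardless of stream rate, using classic reservoir sampling (Algorithm R).

    ```python
    import random

    def reservoir_sample(stream, k, rng=random.Random(0)):
        """Keep a uniform random sample of k items from a stream of unknown
        length (Vitter's Algorithm R). Unlike static 1-in-N sampling, the
        storage bound is fixed no matter how fast the stream arrives."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                j = rng.randint(0, i)   # item survives with probability k/(i+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    print(reservoir_sample(range(10**6), 5))
    ```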

  17. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
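
    As an illustration of the fixed-precision approach, the sketch below applies the standard Karandinos-style sample size formula derived from Taylor's power law, s^2 = a*m^b; the coefficients a and b here are hypothetical placeholders, not the values fitted in the paper.

    ```python
    # Fixed-precision sample size from Taylor's power law, s^2 = a * m^b.
    # The a, b values below are hypothetical, not the paper's fitted coefficients.
    from scipy.stats import norm

    def n_taylor(m, a, b, D=0.25, alpha=0.05):
        """Quadrats needed so that SE/mean <= D at mean density m per quadrat."""
        z = norm.ppf(1 - alpha / 2)
        return (z / D) ** 2 * a * m ** (b - 2)

    for m in (0.02, 0.04, 0.1, 0.5):   # ticks per 10 m^2 quadrat
        print(m, round(n_taylor(m, a=2.0, b=1.3)))
    ```

    As in the paper, required sample size grows sharply at low densities, since b < 2 makes the exponent on m negative.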

  18. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
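
    Under the linear cost model mentioned in the summary (total cost f + c*n, with fixed cost f and per-subject cost c), the "total cost divided by the square root of sample size" rule has a closed-form minimizer, n* = f/c; the brief numeric check below uses hypothetical costs.

    ```python
    # Check the cost-efficiency rule: minimize (f + c*n)/sqrt(n) over n.
    # Calculus gives n* = f/c under a linear cost model (an assumption here).
    f, c = 200_000.0, 500.0          # hypothetical fixed and per-subject costs
    cost_per_root_n = lambda n: (f + c * n) / n ** 0.5
    n_star = f / c                   # = 400 subjects
    assert all(cost_per_root_n(n_star) <= cost_per_root_n(n)
               for n in range(1, 5000))
    print(n_star, cost_per_root_n(n_star))
    ```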

  19. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
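
    A crude, simulation-based stand-in for this kind of calculation (not the package's analytic method): simulate negative binomial counts for one gene at a given mean, dispersion, and fold change, and estimate power with a t-test on log counts, using a stringent alpha as a rough stand-in for multiple-testing control. All parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)

    def nb(mean, disp, n):
        """Negative binomial draws with variance = mean + disp * mean**2."""
        r = 1.0 / disp
        return rng.negative_binomial(r, r / (r + mean), n)

    def sim_power(n, mean=50, disp=0.2, fold=2.0, alpha=1e-3, sims=2000):
        """Per-gene power: t-test on log counts; stringent alpha stands in
        for multiple-testing control (a deliberate simplification)."""
        hits = sum(
            ttest_ind(np.log2(nb(mean, disp, n) + 1),
                      np.log2(nb(fold * mean, disp, n) + 1)).pvalue < alpha
            for _ in range(sims))
        return hits / sims

    for n in (3, 5, 8, 12):   # samples per group
        print(n, sim_power(n))
    ```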

  20. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate sample size based on GEE. We solved for the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
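
    For reference, a sketch of the asymptotic unconditional McNemar sample size (number of pairs) from hypothesized discordant cell proportions; this is a reconstruction of the textbook formula, not the authors' code, and the example proportions are hypothetical.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
        """Pairs required for the asymptotic unconditional McNemar test,
        given hypothesized discordant cell proportions p10 and p01."""
        psi, delta = p10 + p01, p10 - p01        # discordance and its asymmetry
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return ceil((za * sqrt(psi) + zb * sqrt(psi - delta ** 2)) ** 2
                    / delta ** 2)

    print(mcnemar_pairs(0.25, 0.10))   # hypothetical 2x2 discordant proportions
    ```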

  1. A better understanding of POLDER's cloud droplet size retrieval: impact of cloud horizontal inhomogeneity and directional sampling

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-07-01

    The principle of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval requires that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (>15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We applied the improved method to the POLDER global L1B data for June 2008 and compared the new CDR results with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets, because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.

  2. Heavy metal speciation in various grain sizes of industrially contaminated street dust using multivariate statistical analysis.

    PubMed

    Yıldırım, Gülşen; Tokalıoğlu, Şerife

    2016-02-01

    A total of 36 street dust samples were collected from the streets of the Organised Industrial District in Kayseri, Turkey. This region includes a total of 818 workplaces in various industrial areas. The modified BCR (European Community Bureau of Reference) sequential extraction procedure was applied to evaluate the mobility and bioavailability of trace elements (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn) in street dusts of the study area. The BCR procedure comprises three steps: the water/acid-soluble, reducible, and oxidisable fractions. The remaining residue was dissolved using aqua regia. The concentrations of the metals in the street dust samples were determined by flame atomic absorption spectrometry. The effect of different grain sizes (<38 µm, 38-53 µm and 53-74 µm) of the 36 street dust samples on the mobility of the metals was also investigated using the modified BCR procedure. The mobility sequence based on the sum of the first three phases (for the <74 µm grain size) was: Cd (71.3)>Cu (48.9)>Pb (42.8)=Cr (42.1)>Ni (41.4)>Zn (40.9)>Co (36.6)=Mn (36.3)>Fe (3.1). No significant difference was observed among metal partitioning for the three particle sizes. Correlation, principal component and cluster analyses were applied to identify probable natural and anthropogenic sources in the region. The principal component analysis results showed that this industrial district was influenced by traffic, industrial activities, air-borne emissions and natural sources. The accuracy of the results was checked by analysis of the BCR-701 certified reference material and by recovery studies in street dust samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Improved population estimates through the use of auxiliary information

    USGS Publications Warehouse

    Johnson, D.H.; Ralph, C.J.; Scott, J.M.

    1981-01-01

    When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one of several auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems to be appropriate to these situations. The potential that empirical Bayes methodology has for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.
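
    The paper's empirical Bayes machinery is not specified in this abstract; the minimal normal-normal shrinkage sketch below only conveys the general idea of pulling a noisy direct estimate toward an auxiliary (e.g., habitat-based) prediction. All numbers are hypothetical illustrations, not the Mallard data.

    ```python
    # Minimal empirical-Bayes (normal-normal) shrinkage: combine a direct
    # survey estimate with an auxiliary prediction, weighting by precision.
    def eb_estimate(y, se_y, prior_mean, prior_sd):
        """Posterior mean: precision-weighted average of direct and auxiliary info."""
        w = prior_sd**2 / (prior_sd**2 + se_y**2)   # weight on the direct estimate
        return w * y + (1 - w) * prior_mean

    # Hypothetical: direct count of 1200 birds (SE 300); habitat model says 900 (SD 200).
    print(eb_estimate(1200, 300, 900, 200))   # shrunk toward the auxiliary prediction
    ```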

  4. Zinc Nucleation and Growth in Microgravity

    NASA Technical Reports Server (NTRS)

    Michael, B. Patrick; Nuth, J. A., III; Lilleleht, L. U.; Vondrak, Richard R. (Technical Monitor)

    2000-01-01

    We report our experiences with zinc nucleation in a microgravity environment aboard NASA's Reduced Gravity Research Facility. Zinc vapor is produced by a heater in a vacuum chamber containing argon gas. Nucleation is induced by cooling and its onset is easily detected visually by the appearance of a cloud of solid, at least partially crystalline zinc particles. Size distribution of these particles is monitored in situ by photon correlation spectroscopy. Samples of particles are also extracted for later analysis by SEM. The initially rapid increase in particle size is followed by a slower period of growth. We apply Scaled Nucleation Theory to our data and find that the derived critical temperature of zinc, the critical cluster size at nucleation, and the surface tension values are all in reasonably good agreement with their accepted literature values.

  5. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    PubMed

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size. t(δ) is defined as the time when the rate of the wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicates that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
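
    A sketch of the fixed-effect core of the model and the two healing-time outcomes defined above; the per-patient random effects are omitted and the data are synthetic, so this only illustrates the functional form w(t) = w0*exp(-b*t), not the authors' fitted model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fixed-effect core of the pMEE model: wound size decays exponentially.
    model = lambda t, w0, b: w0 * np.exp(-b * t)

    t = np.array([0, 7, 14, 28, 42])            # days since baseline (synthetic)
    w = np.array([10.0, 8.1, 6.4, 4.2, 2.8])    # wound area, cm^2 (synthetic)
    (w0, b), _ = curve_fit(model, t, w, p0=(10, 0.05))

    r = 2
    t_rfold = np.log(r) / b            # time to shrink to 1/r of initial size
    delta = -0.05                      # threshold healing rate, cm^2/day
    # rate w'(t) = -b*w0*exp(-b*t) reaches delta when:
    t_delta = np.log(-b * w0 / delta) / b
    print(round(t_rfold, 1), round(t_delta, 1))
    ```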

  6. Sexual Functioning and Behavior of Men with Body Dysmorphic Disorder Concerning Penis Size Compared with Men Anxious about Penis Size and with Controls: A Cohort Study

    PubMed Central

    Veale, David; Miles, Sarah; Read, Julie; Troglia, Andrea; Wylie, Kevan; Muir, Gordon

    2015-01-01

    Introduction Little is known about the sexual functioning and behavior of men anxious about the size of their penis and the means that they might use to try to alter the size of their penis. Aim To compare sexual functioning and behavior in men with body dysmorphic disorder (BDD) concerning penis size, in men with small penis anxiety (SPA without BDD), and in a control group of men who do not have any concerns. Methods An opportunistic sample of 90 men from the community was recruited and divided into three groups: BDD (n = 26); SPA (n = 31) and controls (n = 33). Main Outcome Measures The Index of Erectile Function (IEF); sexual identity and history; and interventions to alter the size of their penis. Results Men with BDD compared with controls had reduced erectile function, orgasmic function, intercourse satisfaction and overall satisfaction on the IEF. Men with SPA compared with controls had reduced intercourse satisfaction. There were no differences in sexual desire or in the frequency of intercourse or masturbation across the three groups. Men with BDD and SPA were more likely than the controls to attempt to alter the shape or size of their penis (for example jelqing, vacuum pumps or stretching devices), with poor reported success. Conclusion Men with BDD are more likely to have erectile dysfunction and less satisfaction with intercourse than controls but maintain their libido. Further research is required to develop and evaluate a psychological intervention for such men with adequate outcome measures. PMID:26468378

  7. Sexual Functioning and Behavior of Men with Body Dysmorphic Disorder Concerning Penis Size Compared with Men Anxious about Penis Size and with Controls: A Cohort Study.

    PubMed

    Veale, David; Miles, Sarah; Read, Julie; Troglia, Andrea; Wylie, Kevan; Muir, Gordon

    2015-09-01

    Little is known about the sexual functioning and behavior of men anxious about the size of their penis and the means that they might use to try to alter the size of their penis. To compare sexual functioning and behavior in men with body dysmorphic disorder (BDD) concerning penis size, in men with small penis anxiety (SPA without BDD), and in a control group of men who do not have any concerns. An opportunistic sample of 90 men from the community was recruited and divided into three groups: BDD (n = 26); SPA (n = 31) and controls (n = 33). The Index of Erectile Function (IEF); sexual identity and history; and interventions to alter the size of their penis. Men with BDD compared with controls had reduced erectile function, orgasmic function, intercourse satisfaction and overall satisfaction on the IEF. Men with SPA compared with controls had reduced intercourse satisfaction. There were no differences in sexual desire or in the frequency of intercourse or masturbation across the three groups. Men with BDD and SPA were more likely than the controls to attempt to alter the shape or size of their penis (for example jelqing, vacuum pumps or stretching devices), with poor reported success. Men with BDD are more likely to have erectile dysfunction and less satisfaction with intercourse than controls but maintain their libido. Further research is required to develop and evaluate a psychological intervention for such men with adequate outcome measures.

  8. Multichannel Nephelometer.

    DTIC Science & Technology

    1986-10-01

    consists of two near-hemispheric shells bolted to a mounting plate housing the sampling system. Its walls are anodized black to decrease surface... a distinct advantage when the test aerosol contains a significant quantity of liquid. Membrane filters tend to load and reduce throughput while... Table 6. As indicated, these filters can be extremely efficient. Should absolute retention be required, the AAO grade filter with a pore size of 0.3

  9. Drying regimes in homogeneous porous media from macro- to nanoscale

    NASA Astrophysics Data System (ADS)

    Thiery, J.; Rodts, S.; Weitz, D. A.; Coussot, P.

    2017-07-01

    Magnetic resonance imaging visualization down to nanometric liquid films in model porous media with pore sizes from micro- to nanometers enables one to fully characterize the physical mechanisms of drying. For pore size larger than a few tens of nanometers, we identify an initial constant drying rate period, probing homogeneous desaturation, followed by a falling drying rate period. This second period is associated with the development of a gradient in saturation underneath the sample free surface that initiates the inward recession of the contact line. During this latter stage, the drying rate varies in accordance with vapor diffusion through the dry porous region, possibly affected by the Knudsen effect for small pore size. However, we show that for sufficiently small pore size and/or saturation the drying rate is increasingly reduced by the Kelvin effect. Subsequently, we demonstrate that this effect governs the kinetics of evaporation in nanopores as a homogeneous desaturation occurs. Eventually, under our experimental conditions, we show that the saturation unceasingly decreases in a homogeneous manner throughout the wet regions of the medium regardless of pore size or drying regime considered. This finding suggests the existence of continuous liquid flow towards the interface of higher evaporation, down to very low saturation or very small pore size. Paradoxically, even if this net flow is unidirectional and capillary driven, it corresponds to a series of diffused local capillary equilibrations over the full height of the sample, which might explain that a simple Darcy's law model does not predict the effect of scaling of the net flow rate on the pore size observed in our tests.

  10. Analysis of the typical small watershed of warping dams in the sand properties

    NASA Astrophysics Data System (ADS)

    Li, Li; Yang, Ji Shan; Sun, Wei Ying; Shen, Sha Sha

    2018-06-01

    Coarse sediment with a particle size greater than 0.05 mm is the main riverbed deposit in the lower Yellow River, and the Loess Plateau is one of the concentrated sources of this coarse sediment; warping dams are one of the important engineering measures for gully control. The Jiuyuangou basin is a typical small basin in the first sub-region of the hilly-gullied loess region. Twenty warping dams in the Jiuyuangou basin were selected as the research object. Samples of sediment along the main line of each dam, from the upper and middle to the lower reaches of the dam fields, and samples of undisturbed soil from slopes of the dam-controlled basin were taken for particle gradation analysis, in order to clarify from the experimental data the coarse-sediment reduction capacity of different types of warping dams. The results show that the undisturbed slope soil of the dam-controlled basin has the characteristics of standard loess, with particle sizes mainly distributed in the range 0.025-0.05 mm; a particle size of 0.05 mm is an obvious boundary for the loess of the Jiuyuangou basin. Particle sizes of sediment in 15 warping dams of the Jiuyuangou basin are mainly distributed in the range 0.031-0.05 mm, with sizes at the dam tail generally greater than at the dam front. For particles larger than 0.05 mm, the separation effect of horizontal pipe drainage is better than that of shaft drainage; notch dams sort particles between 0.025 and 0.1 mm, and fill dams sort particles between 0.016 and 0.1 mm. All of these structures perform a certain sediment-sorting function.

  11. The moon illusion: a different view through the legs.

    PubMed

    Coren, S

    1992-12-01

    The fact that the overestimation of the horizon moon is reduced when individuals bend over and view it through their legs has been used as support for theories of the moon illusion based upon angle of regard and vestibular inputs. Inversion of the visual scene, however, can also reduce the salience of depth cues, so illusion reduction might be consistent with size constancy explanations. A sample of 70 subjects viewed normal and inverted pictorial arrays. The moon illusion was reduced in the inverted arrays, suggesting that the "through the legs" reduction of the moon illusion may reflect the alteration in perceived depth associated with scene inversion rather than angle of regard or vestibular effects.

  12. Sample size considerations for studies of intervention efficacy in the occupational setting.

    PubMed

    Lazovich, Deann; Murray, David M; Brosseau, Lisa M; Parker, David L; Milton, F Thomas; Dugan, Siobhan K

    2002-03-01

    Due to a shared environment and similarities among workers within a worksite, the strongest analytical design to evaluate the efficacy of an intervention to reduce occupational health or safety hazards is to randomly assign worksites, not workers, to the intervention and comparison conditions. Statistical methods are well described for estimating the sample size when the unit of assignment is a group but these methods have not been applied in the evaluation of occupational health and safety interventions. We review and apply the statistical methods for group-randomized trials in planning a study to evaluate the effectiveness of technical/behavioral interventions to reduce wood dust levels among small woodworking businesses. We conducted a pilot study in five small woodworking businesses to estimate variance components between and within worksites and between and within workers. In each worksite, 8 h time-weighted dust concentrations were obtained for each production employee on between two and five occasions. With these data, we estimated the parameters necessary to calculate the percent change in dust concentrations that we could detect (alpha = 0.05, power = 80%) for a range of worksites per condition, workers per worksite and repeat measurements per worker. The mean wood dust concentration across woodworking businesses was 4.53 mg/m3. The measure of similarity among workers within a woodworking business was large (intraclass correlation = 0.5086). Repeated measurements within a worker were weakly correlated (r = 0.1927) while repeated measurements within a worksite were strongly correlated (r = 0.8925). The dominant factor in the sample size calculation was the number of worksites per condition, with the number of workers per worksite playing a lesser role. We also observed that increasing the number of repeat measurements per person had little benefit given the low within-worker correlation in our data. We found that 30 worksites per condition and 10 workers per worksite would give us 80% power to detect a reduction of approximately 30% in wood dust levels (alpha = 0.05). Our results demonstrate the application of the group-randomized trials methodology to evaluate interventions to reduce occupational hazards. The methodology is widely applicable and not limited to the context of wood dust reduction.
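
    The clustering that dominates their calculation can be summarized with the standard design effect, 1 + (m - 1)*ICC, for m workers per worksite. The sketch below is a simplified normal-approximation version that ignores the repeated-measures layer of their variance model; the SD value is an assumed placeholder, not a reported estimate.

    ```python
    # Simplified group-randomized power sketch via the design effect
    # DEFF = 1 + (m - 1) * ICC; the repeated-measures variance layer is omitted.
    from math import sqrt
    from scipy.stats import norm

    def detectable_difference(g, m, icc, sd, alpha=0.05, power=0.80):
        """Smallest mean difference detectable with g worksites per condition
        and m workers per worksite (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        deff = 1 + (m - 1) * icc
        return z * sd * sqrt(2 * deff / (g * m))

    # ICC echoes the abstract (~0.51); the SD of 2.5 mg/m^3 is a guess.
    print(detectable_difference(g=30, m=10, icc=0.5086, sd=2.5))
    ```

    Against the reported mean of 4.53 mg/m3, a detectable difference of roughly 1.35 mg/m3 corresponds to the ~30% reduction the authors cite, which is why adding worksites helps far more than adding workers or repeat measurements when the ICC is this large.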

  13. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  14. Maintenance of phenotypic variation: Repeatability, heritability and size-dependent processes in a wild brook trout population

    USGS Publications Warehouse

    Letcher, B.H.; Coombs, J.A.; Nislow, K.H.

    2011-01-01

    Phenotypic variation in body size can result from within-cohort variation in birth dates, among-individual growth variation and size-selective processes. We explore the relative effects of these processes on the maintenance of wide observed body size variation in stream-dwelling brook trout (Salvelinus fontinalis). Based on the analyses of multiple recaptures of individual fish, it appears that size distributions are largely determined by the maintenance of early size variation. We found no evidence for size-dependent compensatory growth (which would reduce size variation) and found no indication that size-dependent survival substantially influenced body size distributions. Depensatory growth (faster growth by larger individuals) reinforced early size variation, but was relatively strong only during the first sampling interval (age-0, fall). Maternal decisions on the timing and location of spawning could have a major influence on early, and as our results suggest, later (>age-0) size distributions. If this is the case, our estimates of heritability of body size (body length=0.25) will be dominated by processes that generate and maintain early size differences. As a result, evolutionary responses to environmental change that are mediated by body size may be largely expressed via changes in the timing and location of reproduction. Published 2011. This article is a US Government work and is in the public domain in the USA.

  15. Development of an X-ray fluorescence holographic measurement system for protein crystals

    NASA Astrophysics Data System (ADS)

    Sato-Tomita, Ayana; Shibayama, Naoya; Happo, Naohisa; Kimura, Koji; Okabe, Takahiro; Matsushita, Tomohiro; Park, Sam-Yong; Sasaki, Yuji C.; Hayashi, Kouichi

    2016-06-01

    Experimental procedure and setup for obtaining X-ray fluorescence holograms of crystalline metalloprotein samples are described. Human hemoglobin, an α2β2 tetrameric metalloprotein containing the Fe(II) heme active-site in each chain, was chosen for this study because of its wealth of crystallographic data. A cold gas flow system was introduced to reduce X-ray radiation damage of protein crystals, which are usually fragile and susceptible to damage. A χ-stage was installed to rotate the sample while avoiding intersection between the X-ray beam and the sample loop or holder, which is needed for supporting fragile protein crystals. Huge hemoglobin crystals (with a maximum size of 8 × 6 × 3 mm3) were prepared and used to keep the footprint of the incident X-ray beam smaller than the sample size during the entire course of the measurement, with incident angles of 0°-70°. Under these experimental and data acquisition conditions, we achieved the first observation of the X-ray fluorescence hologram pattern from protein crystals with minimal radiation damage, opening up a promising new method for investigating the stereochemistry of the metal active-sites in biomacromolecules.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato-Tomita, Ayana; Shibayama, Naoya; Okabe, Takahiro

    Experimental procedure and setup for obtaining X-ray fluorescence holograms of crystalline metalloprotein samples are described. Human hemoglobin, an α2β2 tetrameric metalloprotein containing the Fe(II) heme active-site in each chain, was chosen for this study because of its wealth of crystallographic data. A cold gas flow system was introduced to reduce X-ray radiation damage of protein crystals, which are usually fragile and susceptible to damage. A χ-stage was installed to rotate the sample while avoiding intersection between the X-ray beam and the sample loop or holder, which is needed for supporting fragile protein crystals. Huge hemoglobin crystals (with a maximum size of 8 × 6 × 3 mm3) were prepared and used to keep the footprint of the incident X-ray beam smaller than the sample size during the entire course of the measurement, with incident angles of 0°-70°. Under these experimental and data acquisition conditions, we achieved the first observation of the X-ray fluorescence hologram pattern from protein crystals with minimal radiation damage, opening up a promising new method for investigating the stereochemistry of the metal active-sites in biomacromolecules.

  17. A transmission imaging spectrograph and microfabricated channel system for DNA analysis.

    PubMed

    Simpson, J W; Ruiz-Martinez, M C; Mulhern, G T; Berka, J; Latimer, D R; Ball, J A; Rothberg, J M; Went, G T

    2000-01-01

    In this paper we present the development of a DNA analysis system using a microfabricated channel device and a novel transmission imaging spectrograph which can be efficiently incorporated into a high-throughput genomics facility for both sizing and sequencing of DNA fragments. The device contains 48 channels etched on a glass substrate. The channels are sealed with a flat glass plate which also provides a series of apertures for sample loading and contact with buffer reservoirs. Samples can be easily loaded in volumes up to 640 nL without band broadening because of efficient electrokinetic stacking at the electrophoresis channel entrance. The system uses a dual laser excitation source and a highly sensitive charge-coupled device (CCD) detector allowing for simultaneous detection of many fluorescent dyes. The sieving matrices for the separation of single-stranded DNA fragments are polymerized in situ in denaturing buffer systems. Examples of separation of single-stranded DNA fragments up to 500 bases in length are shown, including accurate sizing of GeneCalling fragments, and sequencing samples prepared with a reduced amount of dye terminators. An increase in sample throughput has been achieved by color multiplexing.

  18. The Effect of Traditional Treatments on Heavy Metal Toxicity of Armenian Bole

    PubMed Central

    Hosamo, Ammar; Zarshenas, Mohammad Mehdi; Mehdizadeh, Alireza; Zomorodian, Kamiar; Khani, Ayda Hossein

    2016-01-01

    Background: Clay has been used for its nutrition, cosmetic, and antibacterial properties for thousands of years. Its small particle size, large surface area, and high concentration of ions have made it an interesting subject for pharmaceutical research. There have been studies on scavenging foreign substances and antibacterial properties of clay minerals. The main problem with the medical use of these agents, today, is their heavy metal toxicity. This includes arsenic, cadmium, lead, nickel, zinc, and iron. Iranian traditional medicine (ITM) introduces different clays as medicaments. In this system, there are specific processes for these agents, which might reduce the chance of heavy metal toxicity. Armenian bole is a type of clay that has been used to treat a wound. Before in vivo studies of this clay, its safety should be confirmed. Methods: In this work, we investigated the effect of the washing process as mentioned in ITM books regarding the presence of Pb, As, and Cd in 5 samples using atomic absorption spectrometry. We washed each sample (50 g) with 500 cc of distilled water. The samples were filtered and dried at room temperature for 24 hours. Results: In all studied samples, the amount of Pb and Cd was reduced after the ITM washing process. The amount of As was reduced in 3 samples and increased in 2 other samples. Conclusion: In ITM books, there are general considerations for the use of medicinal clay. These agents should not be used before special treatments such as the washing process. In this study, we observed the effect of the washing process on reducing the amount of heavy metals in Armenian bole samples. In two samples, washing caused an increase in the amount of As. As these heavy metals sediment according to their density in different layers, the sample layer on which the spectrometry is performed could have an effect on the results. PMID:27840531

  19. The Effect of Traditional Treatments on Heavy Metal Toxicity of Armenian Bole

    PubMed Central

    Hosamo, Ammar; Zarshenas, Mohammad Mehdi; Mehdizadeh, Alireza; Zomorodian, Kamiar; Khani, Ayda Hossein

    2016-01-01

    Background: Clay has been used for its nutrition, cosmetic, and antibacterial properties for thousands of years. Its small particle size, large surface area, and high concentration of ions have made it an interesting subject for pharmaceutical research. There have been studies on scavenging foreign substances and antibacterial properties of clay minerals. The main problem with the medical use of these agents, today, is their heavy metal toxicity. This includes arsenic, cadmium, lead, nickel, zinc, and iron. Iranian traditional medicine (ITM) introduces different clays as medicaments. In this system, there are specific processes for these agents, which might reduce the chance of heavy metal toxicity. Armenian bole is a type of clay that has been used to treat a wound. Before in vivo studies of this clay, its safety should be confirmed. Methods: In this work, we investigated the effect of the washing process as mentioned in ITM books regarding the presence of Pb, As, and Cd in 5 samples using atomic absorption spectrometry. We washed each sample (50 g) with 500 cc of distilled water. The samples were filtered and dried at room temperature for 24 hours. Results: In all studied samples, the amount of Pb and Cd was reduced after the ITM washing process. The amount of As was reduced in 3 samples and increased in 2 other samples. Conclusion: In ITM books, there are general considerations for the use of medicinal clay. These agents should not be used before special treatments such as the washing process. In this study, we observed the effect of the washing process on reducing the amount of heavy metals in Armenian bole samples. In two samples, washing caused an increase in the amount of As. As these heavy metals sediment according to their density in different layers, the sample layer on which the spectrometry is performed could have an effect on the results. PMID:27516695

  20. The Effect of Traditional Treatments on Heavy Metal Toxicity of Armenian Bole.

    PubMed

    Hosamo, Ammar; Zarshenas, Mohammad Mehdi; Mehdizadeh, Alireza; Zomorodian, Kamiar; Khani, Ayda Hossein

    2016-05-01

    Clay has been used for its nutrition, cosmetic, and antibacterial properties for thousands of years. Its small particle size, large surface area, and high concentration of ions have made it an interesting subject for pharmaceutical research. There have been studies on scavenging foreign substances and antibacterial properties of clay minerals. The main problem with the medical use of these agents, today, is their heavy metal toxicity. This includes arsenic, cadmium, lead, nickel, zinc, and iron. Iranian traditional medicine (ITM) introduces different clays as medicaments. In this system, there are specific processes for these agents, which might reduce the chance of heavy metal toxicity. Armenian bole is a type of clay that has been used to treat a wound. Before in vivo studies of this clay, its safety should be confirmed. In this work, we investigated the effect of the washing process as mentioned in ITM books regarding the presence of Pb, As, and Cd in 5 samples using atomic absorption spectrometry. We washed each sample (50 g) with 500 cc of distilled water. The samples were filtered and dried at room temperature for 24 hours. In all studied samples, the amount of Pb and Cd was reduced after the ITM washing process. The amount of As was reduced in 3 samples and increased in 2 other samples. In ITM books, there are general considerations for the use of medicinal clay. These agents should not be used before special treatments such as the washing process. In this study, we observed the effect of the washing process on reducing the amount of heavy metals in Armenian bole samples. In two samples, washing caused an increase in the amount of As. As these heavy metals sediment according to their density in different layers, the sample layer on which the spectrometry is performed could have an effect on the results.

  1. Polyol-mediated thermolysis process for the synthesis of MgO nanoparticles and nanowires

    NASA Astrophysics Data System (ADS)

    Subramania, A.; Vijaya Kumar, G.; Sathiya Priya, A. R.; Vasudevan, T.

    2007-06-01

    The main aim of this work is to prepare MgO nanoparticles and nanowires by a novel polyol-mediated thermolysis (PMT) process. The influence of different molar concentrations of magnesium acetate, polyvinyl pyrrolidone (PVP; capping agent) and ethylene glycol (EG; solvent as well as reducing agent) on the formation of nanoparticles and nanowires, and the effect of calcination on the crystallite size of the samples, were also examined. The resultant oxide structure, thermal behaviour, and size and shape have been studied using X-ray diffraction (XRD), thermal (TG/DTA) analysis, and scanning electron microscopy (SEM)/transmission electron microscopy (TEM), respectively.

  2. Use of Friction Stir Processing for Improving Heat-Affected Zone Liquation Cracking Resistance of a Cast Magnesium Alloy AZ91D

    NASA Astrophysics Data System (ADS)

    Karthik, G. M.; Janaki Ram, G. D.; Kottada, Ravi Sankar

    2017-12-01

    In this work, a cast magnesium alloy AZ91D was friction stir processed. Detailed microstructural studies and Gleeble hot ductility tests were conducted on the as-cast and the FSPed samples to comparatively assess their heat-affected zone liquation cracking behavior. The results show that the use of FSP as a pretreatment to fusion welding can strikingly improve the heat-affected zone liquation cracking resistance of alloy AZ91D by reducing the amount and size of the low-melting eutectic β (Mg17Al12) as well as by refining the matrix grain size.

  3. The Slope of Change: An Environmental Management Approach to Reduce Drinking on a Day of Celebration at a U.S. College

    PubMed Central

    Marchell, Timothy C.; Lewis, Deborah D.; Croom, Katherine; Lesser, Martin L.; Murphy, Susan H.; Reyna, Valerie F.; Frank, Jeremy; Staiano-Coico, Lisa

    2013-01-01

    OBJECTIVE This research extends the literature on event-specific environmental management with a case study evaluation of an intervention designed to reduce student drinking at a university's year-end celebration. PARTICIPANTS Cornell University undergraduates were surveyed each May from 2001 through 2009. Sample sizes ranged from 322 to 1,973. METHODS Randomly sampled surveys were conducted after a large, annual spring campus celebration. An environmental management plan was initiated in 2003 that included increased enforcement of the minimum age drinking law (MADL). RESULTS In the short-term, drinking at the campus celebration decreased while drinking before the event increased. Over time, the intervention significantly reduced high-risk drinking on the day of the event, especially among those under the age of 21. CONCLUSION These findings are contrary to the argument that enforcement of MADLs simply lead to increased high-risk drinking, and therefore have implications for how colleges approach the challenge of student alcohol misuse. PMID:23930747

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
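    To make the O(N^1/2) behaviour concrete, below is a minimal numerical sketch, not the authors' derivation, of a decision-theoretic sample size choice: a simplified Bayesian value-of-information setup with normally distributed outcomes, a normal prior on the treatment effect, and all remaining N - 2n patients receiving the apparently better arm. The prior scale, sigma, and grid resolution are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
DELTAS = rng.normal(0.0, 0.2, 20_000)   # prior draws of the treatment effect

def expected_gain(n, N, sigma=1.0):
    """Total expected gain when n patients per arm go into the trial and
    the remaining N - 2n patients all receive the apparently better arm."""
    p_correct = norm.cdf(np.abs(DELTAS) * np.sqrt(n / 2.0) / sigma)
    per_patient = np.abs(DELTAS) * (2.0 * p_correct - 1.0)
    return (N - 2 * n) * per_patient.mean()

def optimal_n(N):
    grid = np.arange(1, N // 2, max(1, N // 2000))
    return grid[np.argmax([expected_gain(n, N) for n in grid])]

for N in (2_000, 20_000, 200_000):
    n = optimal_n(N)
    print(f"N={N:>7}: optimal n per arm = {n:>5}, n / sqrt(N) = {n / N**0.5:.2f}")
```

    Under these simplified assumptions the ratio n / sqrt(N) stays roughly constant as N grows, echoing the paper's O(N^1/2) asymptotics.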

  5. Influence of Yb2O3 on electrical and microstructural characteristics of CaCu3Ti4O12 ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Kai; Luo, Yun

    2015-09-15

    Graphical abstract: Some Yb atoms entered the lattice of CCTO and substituted at the Ca sites; the rest of the Yb atoms concentrated at grain boundaries and decreased the grain size. The dielectric constant was decreased by Yb doping. The dielectric loss of the CCTO could be greatly reduced at low frequency. - Highlights: • Yb atoms may take the place of Ca sites and concentrate at grain boundaries. • A tiny second phase corresponding to Yb may decrease the grain size. • The decrease of the grain size leads to the decrease of the dielectric constant. • Yb doping could decrease the dielectric loss of CCTO. - Abstract: This paper focuses on the remarkable effects of Yb2O3 doping on the microstructure and dielectric characteristics of CaCu3Ti4O12 (CCTO). Samples were prepared by the solid-phase reaction method and sintered in air at 1030 °C for 12 h. X-ray diffraction and X-ray photoelectron spectroscopy studies confirm that the primary phase is CCTO. Some Yb3+ ions may substitute into the Ca site at the center or zenith sites of the CCTO lattice hexahedron, while the rest of the Yb atoms may concentrate at grain boundaries. The grain size of Yb2O3-doped CCTO ceramics was examined by scanning electron microscopy and demonstrates a sharp grain size reduction with Yb2O3 doping. From dielectric property measurements, the Yb2O3 doping reduces the dielectric constant of CCTO, and the dielectric loss is also reduced.

  6. Autonomous microfluidic sample preparation system for protein profile-based detection of aerosolized bacterial cells and spores.

    PubMed

    Stachowiak, Jeanne C; Shugard, Erin E; Mosier, Bruce P; Renzi, Ronald F; Caton, Pamela F; Ferko, Scott M; Van de Vreugde, James L; Yee, Daniel D; Haroldsen, Brent L; VanderNoot, Victoria A

    2007-08-01

    For domestic and military security, an autonomous system capable of continuously monitoring for airborne biothreat agents is necessary. At present, no system meets the requirements for size, speed, sensitivity, and selectivity to warn against and lead to the prevention of infection in field settings. We present a fully automated system for the detection of aerosolized bacterial biothreat agents such as Bacillus subtilis (surrogate for Bacillus anthracis) based on protein profiling by chip gel electrophoresis coupled with a microfluidic sample preparation system. Protein profiling has previously been demonstrated to differentiate between bacterial organisms. With the goal of reducing response time, multiple microfluidic component modules, including aerosol collection via a commercially available collector, concentration, thermochemical lysis, size exclusion chromatography, fluorescent labeling, and chip gel electrophoresis, were integrated to create an autonomous collection/sample preparation/analysis system. The cycle time for sample preparation was approximately 5 min, while the total cycle time, including chip gel electrophoresis, was approximately 10 min. The sensitivity of the coupled system for the detection of B. subtilis spores was 16 agent-containing particles per liter of air, based on samples prepared to simulate those collected by a wetted cyclone aerosol collector of approximately 80% efficiency operating for 7 min.

  7. Evaluation of sampling frequency, window size and sensor position for classification of sheep behaviour.

    PubMed

    Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A; Yan, Juan; Dottorini, Tania; Ellis, Keith A; Winterlich, Anthony; Kaler, Jasmeet

    2018-02-01

    Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, the sampling frequency and the window size of the segmented signal data have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor; yet there are no studies in precision livestock farming that have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor, and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%-97%) was obtained using the combinations of 32 Hz with a 7 s window and 32 Hz with a 5 s window, for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%-93% and F-score 88%-95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs.
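    A minimal sketch of the pipeline the abstract describes (windowed accelerometer signals, summary features, random forest) is shown below, assuming scikit-learn and synthetic stand-in data; the feature set and the behaviour simulation are illustrative, not the study's 44 features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS, WIN_S = 16, 7                 # 16 Hz, 7 s windows (the recommended setting)
WIN = FS * WIN_S

def window_features(acc):
    """A few summary features per window of triaxial data shaped
    (n_windows, WIN, 3); a stand-in for the study's 44 features."""
    parts = [acc.mean(axis=1), acc.std(axis=1), acc.min(axis=1), acc.max(axis=1)]
    sma = np.abs(acc).mean(axis=(1, 2))[:, None]     # signal magnitude area
    return np.hstack(parts + [sma])

# Synthetic stand-in data: lying/standing/walking differ in dynamism.
rng = np.random.default_rng(1)
y = rng.integers(0, 3, 300)
scale = np.array([0.05, 0.15, 0.6])[y][:, None, None]
acc = rng.normal(0.0, 1.0, (300, WIN, 3)) * scale

X = window_features(acc)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```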

  8. Estimating individual glomerular volume in the human kidney: clinical perspectives.

    PubMed

    Puelles, Victor G; Zimanyi, Monika A; Samuel, Terence; Hughson, Michael D; Douglas-Denton, Rebecca N; Bertram, John F; Armitage, James A

    2012-05-01

    Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin's concordance coefficient (Rc), coefficient of variation (CV) and coefficient of error (CE) measured reliability. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (Rc > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution.
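    The stability claim (nine or more glomeruli suffice) can be explored with a simple Monte-Carlo subsampling sketch in the spirit of the paper's Jackknife analysis; the data below are synthetic, and the 5% criterion is applied only to the CV of the subsampled mean.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_stability(igv, k, reps=1000):
    """CV (%) of the mean IGV when only k glomeruli are sampled per subject."""
    means = [rng.choice(igv, size=k, replace=False).mean() for _ in range(reps)]
    return 100 * np.std(means) / np.mean(means)

# Hypothetical subject: 30 measured glomerular volumes (x 10^6 um^3),
# right-skewed as is typical for IGV data.
igv = rng.lognormal(mean=0.8, sigma=0.4, size=30)
for k in (3, 6, 9, 12, 15):
    print(f"k={k:>2}: CV of mean = {mean_stability(igv, k):.1f}%")
```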

  9. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. The tables were derived from the sensitivity and specificity test formulations using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches for using the tables are also discussed. PMID:27891446
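    For readers who prefer a formula to a lookup table, a commonly used approach is sketched below; this is the standard Buderer-style calculation (an assumption here, since the review derives its tables with PASS): size the diseased or healthy subsample to estimate sensitivity or specificity within a chosen margin, then scale by prevalence.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    """Total sample size so that enough diseased subjects are enrolled
    to estimate sensitivity se within +/- d at confidence 1 - alpha."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = (z ** 2) * se * (1 - se) / d ** 2
    return ceil(n_diseased / prevalence)

def n_for_specificity(sp, d, prevalence, alpha=0.05):
    """Same idea for specificity, scaled by the healthy fraction."""
    z = norm.ppf(1 - alpha / 2)
    n_healthy = (z ** 2) * sp * (1 - sp) / d ** 2
    return ceil(n_healthy / (1 - prevalence))

# e.g. sensitivity 0.90 estimated within +/- 0.05 at 10% prevalence:
print(n_for_sensitivity(0.90, 0.05, 0.10))   # ~1383 subjects in total
```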

  10. Size exclusion chromatography with superficially porous particles.

    PubMed

    Schure, Mark R; Moran, Robert E

    2017-01-13

    A comparison is made using size-exclusion chromatography (SEC) of synthetic polymers between fully porous particles (FPPs) and superficially porous particles (SPPs) with similar particle diameters and pore sizes, at equal flow rates. Polystyrene molecular weight standards with a mobile phase of tetrahydrofuran are utilized for all measurements conducted with standard HPLC equipment. Although it is traditionally thought that larger pore volume is thermodynamically advantageous in SEC for better separations, SPPs have kinetic advantages, and these will be shown to compensate for the loss in pore volume compared to FPPs. The comparison metrics include the elution range (smaller with SPPs), the plate count (larger for SPPs), the production rate of theoretical plates (larger for SPPs) and the specific resolution (larger with FPPs). Advantages of using SPPs for SEC are discussed, such that similar separations can be conducted faster using SPPs. SEC using SPPs offers similar peak capacities to SEC using FPPs but with faster operation. This also suggests that SEC conducted in the second dimension of a two-dimensional liquid chromatograph may benefit from reduced run time and equivalently reduced peak width, making SPPs advantageous for sampling the first dimension by the second-dimension separator. Additional advantages are discussed for biomolecules, along with a discussion of optimization criteria for size-based separations. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Microwave Heating of Synthetic Skin Samples for Potential Treatment of Gout Using the Metal-Assisted and Microwave-Accelerated Decrystallization Technique

    PubMed Central

    2016-01-01

    The physical stability of synthetic skin samples during their exposure to microwave heating was investigated to demonstrate the use of the metal-assisted and microwave-accelerated decrystallization (MAMAD) technique for potential biomedical applications. In this regard, optical microscopy and temperature measurements were employed for the qualitative and quantitative assessment of damage to synthetic skin samples during intermittent microwave heating (20 s exposures, up to 120 s in total) using a monomode microwave source (8 GHz, 2-20 W). The extent of damage to synthetic skin samples, assessed by the change in the surface area of the skin samples, was negligible for microwave powers of ≤7 W; more extensive damage (>50%) occurred when samples were exposed to >7 W at an initial temperature range of 20-39 °C. The initial temperature of the synthetic skin samples significantly affected the extent of the change in their temperature during exposure to microwave heating. The proof-of-principle use of the MAMAD technique was demonstrated for the decrystallization of a model biological crystal (L-alanine) placed under synthetic skin samples in the presence of gold nanoparticles. Our results showed that the size (initially ∼850 μm) of L-alanine crystals can be reduced by up to 60% in 120 s without damage to the synthetic skin samples using the MAMAD technique. Finite-difference time-domain-based simulations of the electric field distribution of an 8 GHz monomode microwave radiation showed that the synthetic skin samples are predicted to absorb ∼92.2% of the microwave radiation. PMID:27917407

  12. Spatio-temporal population structuring and genetic diversity retention in depleted Atlantic Bluefin tuna of the Mediterranean Sea

    PubMed Central

    Riccioni, Giulia; Landi, Monica; Ferrara, Giorgia; Milano, Ilaria; Cariani, Alessia; Zane, Lorenzo; Sella, Massimo; Barbujani, Guido; Tinti, Fausto

    2010-01-01

    Fishery genetics have greatly changed our understanding of population dynamics and structuring in marine fish. In this study, we show that the Atlantic Bluefin tuna (ABFT, Thunnus thynnus), an oceanic predatory species exhibiting highly migratory behavior, large population size, and high potential for dispersal during early life stages, displays significant genetic differences over space and time, both at the fine and large scales of variation. We compared microsatellite variation of contemporary (n = 256) and historical (n = 99) biological samples of ABFTs of the central-western Mediterranean Sea, the latter dating back to the early 20th century. Measures of genetic differentiation and a general heterozygote deficit suggest that differences exist among population samples, both now and 80-96 years ago. Thus, ABFTs do not represent a single panmictic population in the Mediterranean Sea. Statistics designed to infer changes in population size, both from current and past genetic variation, suggest that some Mediterranean ABFT populations, although still not severely reduced in their genetic potential, might have suffered from demographic declines. The short-term estimates of effective population size straddle the minimum threshold (effective population size = 500) indicated to maintain genetic diversity and evolutionary potential across several generations in natural populations. PMID:20080643

  13. Strategies for Improving Power in School-Randomized Studies of Professional Development.

    PubMed

    Kelcey, Ben; Phelps, Geoffrey

    2013-12-01

    Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
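    The design trade-off the authors describe can be made concrete with the standard minimum-detectable-effect-size (MDES) approximation for a two-level cluster-randomized design. The sketch below is a generic textbook formula (with M ≈ 2.8 for α = .05, two-tailed, and 80% power), not the authors' multilevel estimates, and the ICC and R² inputs are illustrative.

```python
import math

def mdes(J, n, icc, r2_between=0.0, r2_within=0.0, M=2.8):
    """Minimum detectable effect size for a two-arm school-randomized
    trial: J schools split 1:1, n teachers per school, outcome ICC icc,
    and covariates explaining r2_between / r2_within of the school- and
    teacher-level variance."""
    var = (4.0 / J) * (icc * (1 - r2_between) + (1 - icc) * (1 - r2_within) / n)
    return M * math.sqrt(var)

# Without adjustment vs with a pretest explaining half the variance:
print(mdes(J=40, n=5, icc=0.20))                                  # ~0.53
print(mdes(J=40, n=5, icc=0.20, r2_between=0.5, r2_within=0.5))   # ~0.38
```

    With these inputs, covariate adjustment that explains half the variance at each level shrinks the MDES from about 0.53 to about 0.38 without adding schools, which is the kind of gain the paper reports for pretest adjustment.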

  14. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
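    As context for the review's findings, the standard two-arm, parallel-group sample size formula for a continuous outcome is sketched below; this is a textbook calculation, not the review's replication procedure, and the effect size and SD are illustrative. It shows how an optimistic assumed effect shrinks the required n while leaving the trial underpowered for the true, smaller effect.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Standard two-arm, parallel-group sample size for a continuous
    outcome: delta is the smallest effect worth detecting, sd the
    common standard deviation."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

# Assumed 10-point difference, SD 25: 99 per arm. If the true effect is
# only 5 points, 393 per arm would have been needed, so the trial built
# on the optimistic assumption is badly underpowered.
print(n_per_arm(10, 25))   # 99
print(n_per_arm(5, 25))    # 393
```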

  15. Technical advances in flow cytometry-based diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria

    PubMed Central

    Correia, Rodolfo Patussi; Bento, Laiz Cameirão; Bortolucci, Ana Carolina Apelle; Alexandre, Anderson Marega; Vaz, Andressa da Costa; Schimidell, Daniela; Pedro, Eduardo de Carvalho; Perin, Fabricio Simões; Nozawa, Sonia Tsukasa; Mendes, Cláudio Ernesto Albers; Barroso, Rodrigo de Souza; Bacal, Nydia Strachman

    2016-01-01

    Objective: To discuss the implementation of technical advances in laboratory diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria for validation of high-sensitivity flow cytometry protocols. Methods: A retrospective study based on analysis of laboratory data from 745 patient samples submitted to flow cytometry for diagnosis and/or monitoring of paroxysmal nocturnal hemoglobinuria. Results: Implementation of technical advances reduced test costs and improved flow cytometry resolution for paroxysmal nocturnal hemoglobinuria clone detection. Conclusion: High-sensitivity flow cytometry allowed more sensitive determination of paroxysmal nocturnal hemoglobinuria clone type and size, particularly in samples with small clones. PMID:27759825

  16. Multilaser Herriott Cell for Planetary Tunable Laser Spectrometers

    NASA Technical Reports Server (NTRS)

    Tarsitano, Christopher G.; Webster, Christopher R.

    2007-01-01

    Geometric optics and matrix methods are used to mathematically model multilaser Herriott cells for tunable laser absorption spectrometers for planetary missions. The Herriott cells presented accommodate several laser sources that follow independent optical paths but probe a single gas cell. Strategically placed output holes located in the far mirrors of the Herriott cells reduce the size of the spectrometers. A four-channel Herriott cell configuration is presented for the specific application as the sample cell of the tunable laser spectrometer instrument selected for the Sample Analysis at Mars analytical suite on the 2009 Mars Science Laboratory mission.
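    The matrix method referred to here is the standard ABCD ray-transfer formalism. A minimal sketch is given below, with an illustrative mirror spacing and curvature rather than the instrument's actual geometry; it tracks the transverse ray coordinate pass by pass and recovers the classic re-entrant condition cos θ = 1 - d/R.

```python
import numpy as np

def herriott_spots(d, R, x0=1.0, xp0=0.0, passes=30):
    """Track the transverse ray coordinate (x, x') through a Herriott
    cell using ABCD matrices: free-space propagation over d followed
    by reflection from a mirror of radius R (focal length R/2)."""
    prop = np.array([[1.0, d], [0.0, 1.0]])
    mirror = np.array([[1.0, 0.0], [-2.0 / R, 1.0]])
    ray = np.array([x0, xp0])
    spots = []
    for _ in range(passes):
        ray = mirror @ (prop @ ray)
        spots.append(ray[0])
    return np.array(spots)

d, R = 0.5, 1.0                      # illustrative spacing and curvature (m)
theta = np.arccos(1 - d / R)         # advance angle per pass
print(f"re-entrant after N ~ {2 * np.pi / theta:.1f} passes")
print(np.round(herriott_spots(d, R), 3))   # period-6 spot pattern for d/R = 0.5
```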

  17. Pooling sheep faecal samples for the assessment of anthelmintic drug efficacy using McMaster and Mini-FLOTAC in gastrointestinal strongyle and Nematodirus infection.

    PubMed

    Kenyon, Fiona; Rinaldi, Laura; McBean, Dave; Pepe, Paola; Bosco, Antonio; Melville, Lynsey; Devin, Leigh; Mitchell, Gillian; Ianniello, Davide; Charlier, Johannes; Vercruysse, Jozef; Cringoli, Giuseppe; Levecke, Bruno

    2016-07-30

    In small ruminants, faecal egg counts (FECs) and the reduction in FECs (FECR) are the most common methods for assessing the intensity of gastrointestinal (GI) nematode infections and anthelmintic drug efficacy, respectively. The main limitation of these methods is the time and cost of conducting FECs on a representative number of individual animals. A cost-saving alternative would be to examine pooled faecal samples; however, little is known regarding whether pooling can give representative results. In the present study, we compared the FECR results obtained by both an individual and a pooled examination strategy across different pool sizes and analytical sensitivities of the FEC techniques. A survey was conducted on 5 sheep farms in Scotland, where anthelmintic resistance is known to be widespread. Lambs were treated with fenbendazole (4 groups), levamisole (3 groups), ivermectin (3 groups) or moxidectin (1 group). For each group, individual faecal samples were collected from 20 animals at baseline (D0) and 14 days after (D14) anthelmintic administration. Faecal samples were analyzed as pools of 3-5, 6-10, and 14-20 individual samples. Both individual and pooled samples were screened for GI strongyle and Nematodirus eggs using two FEC techniques with three different levels of analytical sensitivity: Mini-FLOTAC (analytical sensitivity of 10 eggs per gram of faeces (EPG)) and McMaster (analytical sensitivity of 15 or 50 EPG). For both Mini-FLOTAC and McMaster (analytical sensitivity of 15 EPG), there was perfect agreement in classifying the efficacy of the anthelmintic as 'normal', 'doubtful' or 'reduced' regardless of pool size. When using the McMaster method (analytical sensitivity of 50 EPG), anthelmintic efficacy was often falsely classified as 'normal', or assessment was not possible due to zero FECs at D0, and this became more pronounced as the pool size increased. In conclusion, pooling ovine faecal samples holds promise as a cost-saving and efficient strategy for assessing GI nematode FECR. However, for the assessment of FECR one will need to consider the baseline FEC, the pool size and the analytical sensitivity of the method. Copyright © 2016. Published by Elsevier B.V.
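    The core FECR arithmetic is simple; the sketch below computes a group-level reduction from mean pre- and post-treatment EPG and applies one common classification rule of thumb (the exact thresholds used in the paper are an assumption here, and the counts are invented).

```python
def fecr(pre_epg, post_epg):
    """Faecal egg count reduction (%) from group mean EPG at baseline
    (D0) and 14 days after treatment (D14)."""
    pre, post = sum(pre_epg) / len(pre_epg), sum(post_epg) / len(post_epg)
    return 100.0 * (1.0 - post / pre)

def classify(r):
    # One common rule of thumb: >= 95% normal, 90-95% doubtful, < 90% reduced
    return "normal" if r >= 95 else "doubtful" if r >= 90 else "reduced"

# Hypothetical pool of 5 animals counted with Mini-FLOTAC (10 EPG sensitivity):
pre = [450, 620, 380, 510, 290]
post = [30, 10, 50, 20, 40]
r = fecr(pre, post)
print(f"FECR = {r:.1f}% -> {classify(r)}")   # FECR = 93.3% -> doubtful
```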

  18. Poly(vinyl alcohol) gels as photoacoustic breast phantoms revisited.

    PubMed

    Xia, Wenfeng; Piras, Daniele; Heijblom, Michelle; Steenbergen, Wiendelt; van Leeuwen, Ton G; Manohar, Srirang

    2011-07-01

    A popular phantom in photoacoustic imaging is poly(vinyl alcohol) (PVA) hydrogel fabricated by freezing and thawing (F-T) aqueous solutions of PVA. The material possesses acoustic and optical properties similar to those of tissue. Earlier work characterized PVA gels in small test specimens, where temperature distributions during F-T are relatively homogeneous. In this work, we observed substantial temperature differences in breast-sized samples between the shallow regions and the interior during the F-T procedure. We investigated whether spatial variations were also present in the acoustic and optical properties. The speed of sound, acoustic attenuation, and optical reduced scattering coefficients were measured on specimens sampled at various locations in a large phantom. In general, the properties matched values quoted for breast tissue. But while the acoustic properties were relatively homogeneous, the reduced scattering was substantially different at the surface compared with the interior. We correlated these variations with the gel microstructure inspected using scanning electron microscopy. Interestingly, the phantom's reduced scattering spatial distribution matches the optical properties of the standard two-layer breast model used in x-ray dosimetry. We conclude that large PVA samples prepared using the standard recipe make excellent breast tissue phantoms.

  19. Effects of ammonium hydroxide on the structure and gas adsorption of nanosized Zr-MOFs (UiO-66).

    PubMed

    Abid, Hussein Rasool; Ang, Ha Ming; Wang, Shaobin

    2012-05-21

    Several zirconium-based metal-organic frameworks (Zr-MOFs) have been synthesized using ammonium hydroxide as an additive in the synthesis process. Their physicochemical properties have been characterized by N2 adsorption/desorption, XRD, SEM, FTIR, and TGA, and their application in CO2 adsorption was evaluated. It was found that the addition of ammonium hydroxide produced some effects on the structure and adsorption behavior of the Zr-MOFs. The pore size and pore volume of the Zr-MOFs were enhanced with the additive; however, the specific surface area of the Zr-MOFs was reduced. The crystal size of the Zr-MOFs decreased with increasing amount of the ammonium hydroxide additive. All the samples presented strong thermal stability. Adsorption tests showed that the capacity of CO2 adsorption on the Zr-MOFs under standard conditions was reduced due to decreased micropore fractions. However, the modified Zr-MOFs had significantly lower adsorption heat. The adsorption capacity of carbon dioxide was increased at high pressure, reaching 8.63 mmol g^-1 at 987 kPa for Zr-MOF-NH4-2.

  1. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. The explanation for this observation is the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that the use of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite sample problems in moderate-dimensional PR tasks.

  2. Fast wettability transition from hydrophilic to superhydrophobic laser-textured stainless steel surfaces under low-temperature annealing

    NASA Astrophysics Data System (ADS)

    Ngo, Chi-Vinh; Chun, Doo-Man

    2017-07-01

    Recently, the fabrication of superhydrophobic metallic surfaces by means of pulsed laser texturing has been developed. After laser texturing, samples are typically chemically coated or aged in ambient air for a relatively long time of several weeks to achieve superhydrophobicity. To accelerate the wettability transition from hydrophilicity to superhydrophobicity without additional chemical treatment, a simple annealing post-process has been developed. In the present work, grid patterns were first fabricated on stainless steel by a nanosecond pulsed laser, and then an additional low-temperature annealing post-process at 100 °C was applied. The effect of the textured grid step size (100-500 μm) on the wettability transition time was also investigated. The proposed post-process reduced the transition time from a couple of months to within several hours. All samples except those with a 500 μm step size showed superhydrophobicity, with contact angles greater than 160° and sliding angles smaller than 10°, and could serve in several potential applications such as self-cleaning and control of water adhesion.

  3. Problems in determining the surface density of the Galactic disk

    NASA Technical Reports Server (NTRS)

    Statler, Thomas S.

    1989-01-01

    A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.

  4. Reducing uncertainty in dust monitoring to detect aeolian sediment transport responses to land cover change

    NASA Astrophysics Data System (ADS)

    Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.

    2017-12-01

    Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
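    A back-of-the-envelope version of the minimum-detectable-change argument can be sketched with a standard two-sample power calculation; the coefficient of variation below is an illustrative assumption for a highly variable site, not a Network estimate.

```python
import math
from scipy.stats import norm

def min_detectable_change(cv, n, alpha=0.05, power=0.80):
    """Smallest relative change in mean sediment mass flux (as a % of
    the mean) detectable between two independent groups of n samplers,
    given a between-sampler coefficient of variation cv."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 100 * z * cv * math.sqrt(2.0 / n)

for n in (3, 5, 10, 20):
    # cv = 1.5: strongly variable transport, plausible on rangeland sites
    print(f"n={n:>2}: detectable change ~ {min_detectable_change(1.5, n):.0f}%")
```

    At high spatial variability and small n, only multi-hundred-percent changes clear the detection threshold, which is consistent with the 200% to 800% range quoted above.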

  5. The more the heavier? Family size and childhood obesity in the U.S.

    PubMed

    Datar, Ashlesha

    2017-05-01

    Childhood obesity remains a top public health concern, and understanding its drivers is important for combating this epidemic. Contemporaneous trends of declining family size and increasing childhood obesity in the U.S. suggest that family size may be a potential contributor, but limited evidence exists. Using data from a national sample of children in the U.S., this study examines whether family size, measured by the number of siblings a child has, is associated with child BMI and obesity, and the possible mechanisms at work. The potential endogeneity of family size is addressed by using several complementary approaches, including the sequential introduction of a rich set of controls, subgroup analyses, and estimation of school fixed-effects and child fixed-effects models. Results suggest that having more siblings is associated with significantly lower BMI and lower likelihood of obesity. Children with siblings have healthier diets and watch less television. Family mealtimes, less eating out, reduced maternal work, and increased adult supervision of children are potential mechanisms through which family size is protective against childhood obesity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Optical extinction dependence on wavelength and size distribution of airborne dust

    NASA Astrophysics Data System (ADS)

    Pangle, Garrett E.; Hook, D. A.; Long, Brandon J. N.; Philbrick, C. R.; Hallen, Hans D.

    2013-05-01

    The optical scattering from laser beams propagating through atmospheric aerosols has been shown to be very useful in describing air pollution aerosol properties. This research explores and extends that capability to particulate matter. The optical properties of Arizona Road Dust (ARD) samples are measured in a chamber that simulates the particle dispersal of dust aerosols in the atmospheric environment. Visible, near infrared, and long wave infrared lasers are used. Optical scattering measurements show the expected dependence of laser wavelength and particle size on the extinction of laser beams. The extinction at long wavelengths demonstrates reduced scattering, but chemical absorption of dust species must be considered. The extinction and depolarization of laser wavelengths interacting with several size cuts of ARD are examined. The measurements include studies of different size distributions, and their evolution over time is recorded by an Aerodynamic Particle Sizer. We analyze the size-dependent extinction and depolarization of ARD. We present a method of predicting extinction for an arbitrary ARD size distribution. These studies provide new insights for understanding the optical propagation of laser beams through airborne particulate matter.
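    As a rough illustration of how extinction can be predicted for an arbitrary size distribution, the sketch below sums per-size extinction cross-sections over a discretized distribution, using the van de Hulst anomalous-diffraction approximation in place of full Mie theory (an assumption; the refractive index and the distribution itself are also illustrative, not measured ARD values).

```python
import numpy as np

def q_ext_ada(r_um, wavelength_um, m=1.55):
    """van de Hulst anomalous-diffraction approximation to the extinction
    efficiency of a sphere with real refractive index m."""
    x = 2.0 * np.pi * r_um / wavelength_um       # size parameter
    rho = 2.0 * x * (m - 1.0)                    # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def extinction_coeff(radii_um, number_per_cm3, wavelength_um):
    """Extinction coefficient (1/km) of a discretized size distribution:
    sum over bins of number density x geometric cross-section x Q_ext."""
    q = q_ext_ada(radii_um, wavelength_um)
    sigma_um2 = np.pi * radii_um**2 * q          # per-particle cross-section
    return np.sum(number_per_cm3 * sigma_um2) * 1e-3   # um^2/cm^3 -> 1/km

# Coarse ARD-like distribution: many small particles, few large ones.
radii = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # um
counts = np.array([50.0, 20.0, 5.0, 1.0, 0.1])   # per cm^3
for wl in (0.532, 1.064, 10.6):                  # visible, NIR, LWIR (um)
    print(f"lambda = {wl:>6} um: beta_ext ~ {extinction_coeff(radii, counts, wl):.3f} 1/km")
```

    The sketch reproduces the qualitative behaviour reported above: extinction falls off at the long-wave infrared wavelength because small particles interact weakly with radiation much larger than themselves, although chemical absorption bands of the dust would modify this in practice.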

  7. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (RUMM) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. The results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
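    Simulating Rasch-fitting data of the kind analysed here is straightforward; the sketch below generates dichotomous responses from the Rasch model P(X = 1) = exp(θ - b) / (1 + exp(θ - b)) for 25 well-targeted items. RUMM itself is proprietary, so the fit testing is not reproduced; only the data-generating step is shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_rasch(n_persons, item_difficulties):
    """Dichotomous responses under the Rasch model:
    P(X = 1) = exp(theta - b) / (1 + exp(theta - b))."""
    theta = rng.normal(0.0, 1.0, size=(n_persons, 1))   # person abilities
    b = np.asarray(item_difficulties)[None, :]          # item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return (rng.random(p.shape) < p).astype(int)

items = np.linspace(-2, 2, 25)        # 25 items targeted to the sample
for n in (50, 250, 500, 2500):        # the sample sizes studied in the paper
    data = simulate_rasch(n, items)
    print(n, data.shape, data.mean()) # overall endorsement rate ~0.5
```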

  8. Instability improvement of the subgrade soils by lime addition at Borg El-Arab, Alexandria, Egypt

    NASA Astrophysics Data System (ADS)

    El Shinawi, A.

    2017-06-01

    Subgrade soils can affect the stability of any construction above them; instability problems were found at Borg El-Arab, Alexandria, Egypt. This paper investigates the geoengineering properties of lime-treated subgrade soils at Borg El-Arab. Basic laboratory tests, such as water content, wet and dry density, grain size, specific gravity and Atterberg limits, were performed on twenty-five samples. Moisture-density (compaction), California Bearing Ratio (CBR) and Unconfined Compression Strength (UCS) tests were conducted on treated and natural soils. The measured geotechnical parameters of the treated soil show that 6% lime is sufficient to stabilize the subgrade soils. It was found that by adding lime, the grain-size distribution of the samples shifted to the coarser side and the Atterberg limit values of the treated soil samples decreased, which improves the stability of the soil. In addition, the subgrade soils improved as a result of the bonding of fine particles, which cement together to form larger sizes and reduce the plasticity index, thereby increasing soil strength. Environmental scanning electron microscopy (ESEM) points to the presence of newly formed aggregated cementitious materials, which reduce the porosity and increase the strength with long-term curing. Consequently, the soil-lime mixture has acceptable mechanical characteristics, forming a high-strength base or sub-base material, and this mixture is considered suitable as a subgrade soil for stabilization and for mitigating the instability problems found at Borg El-Arab, Egypt.

  9. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
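    The kind of simulation the authors describe is easy to reproduce in outline; the sketch below estimates Type I and Type II error rates of a two-sample t-test by Monte Carlo, with a deliberately strong (2 SD) effect as an illustrative stand-in for the paper's full parameter grid.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

def error_rates(n, effect, reps=5_000, alpha=0.05):
    """Monte-Carlo Type I / Type II error of a two-sample t-test with
    n per group; `effect` is the true shift in SD units."""
    fp = fn = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        fp += ttest_ind(control, rng.normal(0.0, 1.0, n)).pvalue < alpha   # null true
        fn += ttest_ind(control, rng.normal(effect, 1.0, n)).pvalue >= alpha  # effect real
    return fp / reps, fn / reps

for n in (3, 5, 9):
    t1, t2 = error_rates(n, effect=2.0)       # a strong, 2 SD effect
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```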

  10. An unbiased adaptive sampling algorithm for the exploration of RNA mutational landscapes under evolutionary pressure.

    PubMed

    Waldispühl, Jérôme; Ponty, Yann

    2011-11-01

    The analysis of the relationship between sequences and structures (i.e., how mutations affect structures and, reciprocally, how structures influence mutations) is essential to decipher the principles driving molecular evolution, to infer the origins of genetic diseases, and to develop bioengineering applications such as the design of artificial molecules. Because their structures can be predicted from sequence data alone, RNA molecules provide a good framework for studying this sequence-structure relationship. We recently introduced a suite of algorithms called RNAmutants which allows a complete exploration of RNA sequence-structure maps in polynomial time and space. Formally, RNAmutants takes an input sequence (or seed) to compute the Boltzmann-weighted ensembles of mutants with exactly k mutations, and samples mutations from these ensembles. However, this approach suffers from major limitations. Indeed, since the Boltzmann probabilities of the mutations depend on the free energy of the structures, RNAmutants has difficulty sampling mutant sequences with low G+C-contents. In this article, we introduce an unbiased adaptive sampling algorithm that enables RNAmutants to sample regions of the mutational landscape poorly covered by classical algorithms. We applied these methods to sample mutations with low G+C-contents. These adaptive sampling techniques can be easily adapted to explore other regions of the sequence and structural landscapes which are difficult to sample. Importantly, these algorithms come at a minimal computational cost. We demonstrate the insights offered by these techniques on studies of complete RNA sequence-structure maps of sizes up to 40 nucleotides. Our results indicate that the G+C-content has a strong influence on the size and shape of the evolutionarily accessible sequence and structural spaces. In particular, we show that low G+C-contents favor the appearance of internal loops and thus possibly the synthesis of tertiary structure motifs. On the other hand, high G+C-contents significantly reduce the size of the evolutionarily accessible mutational landscapes.

  11. Lunar resources: Oxygen from rocks and soil

    NASA Technical Reports Server (NTRS)

    Allen, C. C.; Gibson, M. A.; Knudsen, C. W.; Kanamori, H.; Morris, R. V.; Keller, L. P.; Mckay, D. S.

    1992-01-01

    The first set of hydrogen reduction experiments to use actual lunar material was recently completed. The sample, 70035, is a coarse-grained vesicular basalt containing 18.46 wt. percent FeO and 12.97 wt. percent TiO2. The mineralogy includes pyroxene, ilmenite, plagioclase, and minor olivine. The sample was crushed to a grain size of less than 500 microns. The crushed basalt was reduced with hydrogen in seven tests at temperatures of 900-1050 C and pressures of 1-10 atm for 30-60 minutes. A capacitance probe, measuring the dew point of the gas stream, was used to follow reaction progress. Experiments were also conducted using a terrestrial basalt similar to some lunar mare samples. Minnesota Lunar Simulant (MLS-1) contains 13.29 wt. percent FeO, 2.96 wt. percent Fe2O3, and 6.56 wt. percent TiO2. The major minerals include plagioclase, pyroxene, olivine, ilmenite, and magnetite. The rock was ground and sieved, and experiments were run on the less than 74- and 500-1168-micron fractions. Experiments were also conducted on less than 74-micron powders of olivine, pyroxene, synthetic ilmenite, and TiO2. The terrestrial rock and mineral samples were reduced with flowing hydrogen at 1100 C in a microbalance furnace, with reaction progress monitored by weight loss. Experiments were run at atmospheric pressure for durations of 3-4 hr. Solid samples from both sets of experiments were analyzed by Mössbauer spectroscopy, petrographic microscopy, scanning electron microscopy, tunneling electron microscopy, and x-ray diffraction. Apollo 17 soil 78221 was examined for evidence of natural reduction in the lunar environment. This sample was chosen based on its high maturity level (Is/FeO = 93.0). The FeO content is 11.68 wt. percent and the TiO2 content is 3.84 wt. percent. A polished thin section of the 90-150 micron size fraction was analyzed by petrographic microscopy and scanning electron microscopy.

  12. The efficacy of respondent-driven sampling for the health assessment of minority populations.

    PubMed

    Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao

    2017-10-01

    Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511), and population estimates were compared with 2012 BRFSS data (n=2031) and 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.

  13. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low-transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference over parasite rates rather than over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low-transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
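    The SP-to-SCR transformation underlying the first calculator is usually written via the reversible catalytic model; the sketch below implements that conversion (the model form is standard in this literature, but the specific numbers and the use of scipy's root finder are illustrative assumptions).

```python
import numpy as np
from scipy.optimize import brentq

def seroprevalence(age, scr, srr):
    """Reversible catalytic model: expected seroprevalence at a given
    age for seroconversion rate scr and seroreversion rate srr."""
    k = scr + srr
    return (scr / k) * (1.0 - np.exp(-k * age))

def scr_from_sp(sp, age, srr):
    """Invert the model: the SCR consistent with seroprevalence sp
    observed at a given age, for a known SRR."""
    return brentq(lambda lam: seroprevalence(age, lam, srr) - sp, 1e-8, 10.0)

# e.g. 40% seropositivity at age 10 with SRR fixed at 0.01/yr:
print(f"SCR = {scr_from_sp(0.40, 10.0, 0.01):.4f} per year")
```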

  14. Evaluation of respondent-driven sampling.

    PubMed

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.

  15. Compact photoacoustic tomography system

    NASA Astrophysics Data System (ADS)

    Kalva, Sandeep Kumar; Pramanik, Manojit

    2017-03-01

    Photoacoustic tomography (PAT) is a non-ionizing biomedical imaging modality which finds applications in brain imaging, tumor angiogenesis, monitoring of vascularization, breast cancer imaging, monitoring of oxygen saturation levels, etc. Typical PAT systems use Q-switched Nd:YAG laser light illumination and a single-element large ultrasound transducer (UST) as the detector. By holding the UST in the horizontal plane and moving it in a circular motion around the sample through full 2π radians, photoacoustic data are collected and images are reconstructed. The horizontal positioning of the UST makes the scanning radius large, leading to a larger water tank and also increasing the load on the motor that rotates the UST. To overcome this limitation, we present a compact photoacoustic tomography (ComPAT) system. In this ComPAT system, instead of holding the UST in the horizontal plane, it is held in the vertical plane, and the photoacoustic waves generated at the sample are detected by the UST after reflection at 45° by an acoustic reflector attached to the transducer body. With this arrangement the water tank size and the load on the motor can be reduced, so the overall PAT system size can be reduced. Here we show that with the ComPAT system nearly identical PA images (phantom and in vivo data) can be obtained as with existing PAT systems, using both flat and cylindrically focused transducers.

  16. Effect of Zn addition on bulk microstructure of lead-free solder SN100C

    NASA Astrophysics Data System (ADS)

    Nur Nadirah M., K.; Nurulakmal M., S.

    2017-12-01

    This paper reports the effect of adding Zn (0.5 wt%, 1.0 wt%) on the bulk microstructure and intermetallic compound (IMC) formation of the commercial SN100C (Sn-0.7Cu-0.05Ni+Ge) lead-free solder alloy. Solder alloys were prepared by melting SN100C ingot with Zn shot and subsequently casting into a steel mold. Samples were ground and polished for XRF, and polished samples were then etched for microstructure analysis. The microstructure of the bulk solder and the IMCs were observed using SEM equipped with EDX. SEM results showed that the addition of 0.5 wt% Zn increased the grain size of the β-Sn matrix, but further addition of Zn (1 wt%) reduced the size of the β-Sn dendrites in the bulk solder. Several IMCs were observed distributed in the Sn matrix: Cu-Zn, Ni-Zn and Cu-Zn-Ni, with Cu-Zn-Ni present in a relatively small percentage compared with Cu-Zn and Ni-Zn. These particles could act as effective nucleating agents leading to finer β-Sn grains. It is expected that the finer β-Sn will contribute to higher solder strength, and the various IMCs present could act as a suppressant of Sn diffusion, which will then tend to reduce IMC growth during thermal aging.

  17. Effect of ticagrelor with clopidogrel on high on-treatment platelet reactivity in acute stroke or transient ischemic attack (PRINCE) trial: Rationale and design.

    PubMed

    Wang, Yilong; Lin, Yi; Meng, Xia; Chen, Weiqi; Chen, Guohua; Wang, Zhimin; Wu, Jialing; Wang, Dali; Li, Jianhua; Cao, Yibin; Xu, Yuming; Zhang, Guohua; Li, Xiaobo; Pan, Yuesong; Li, Hao; Liu, Liping; Zhao, Xingquan; Wang, Yongjun

    2017-04-01

    Rationale and aim Little is known about the safety and efficacy of the combination of ticagrelor and aspirin in acute ischemic stroke. This study aimed to evaluate whether the combination of ticagrelor and aspirin was superior to that of clopidogrel and aspirin in reducing 90-day high on-treatment platelet reactivity after acute minor stroke or transient ischemic attack, especially for carriers of the cytochrome P450 2C19 loss-of-function allele. Sample size and design This study was designed as a prospective, multicenter, randomized, open-label, active-controlled, blinded-endpoint, phase IIb trial. The required sample size was 952 patients. It was registered with ClinicalTrials.gov (NCT02506140). Study outcomes The primary outcome was the proportion of patients with high on-treatment platelet reactivity at 90 days. High on-treatment platelet reactivity is defined as a P2Y12 reaction unit >208 measured using the VerifyNow P2Y12 assay. Conclusion The Platelet Reactivity in Acute Non-disabling Cerebrovascular Events study explored whether ticagrelor combined with aspirin could further reduce the proportion of patients with high on-treatment platelet reactivity at 90 days after acute minor stroke or transient ischemic attack compared with clopidogrel and aspirin.

  18. Association of β-defensin copy number and psoriasis in three cohorts of European origin

    PubMed Central

    Stuart, Philip E; Hüffmeier, Ulrike; Nair, Rajan P; Palla, Raquel; Tejasvi, Trilokraj; Schalkwijk, Joost; Elder, James T; Reis, Andre; Armour, John AL

    2012-01-01

    A single previous study has demonstrated significant association of psoriasis with copy number of beta-defensin genes, using DNA from psoriasis cases and controls from Nijmegen and Erlangen. In this study we attempted to replicate that finding in larger new cohorts from Erlangen (N = 2017) and Michigan (N = 5412), using improved methods for beta-defensin copy number determination based on the paralog ratio test (PRT), and enhanced methods of analysis and association testing implemented in the CNVtools resource. We demonstrate that the association with psoriasis found in the discovery sample is maintained after applying improved typing and analysis methods (p = 5.5 × 10−4, OR = 1.25). We also find that the association is replicated in 2616 cases and 2526 controls from Michigan, although at reduced significance (p = 0.014), but not in new samples from Erlangen (1396 cases and 621 controls, p = 0.38). Meta-analysis across all cohorts suggests a nominally significant association (p = 6.6 × 10−3/2 × 10−4) with an effect size (OR = 1.081) much lower than found in the discovery study (OR = 1.32). This reduced effect size and significance on replication is consistent with a genuine but weak association. PMID:22739795

  19. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret, and it connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirement with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of the mean cluster size alone or simultaneous adjustment of the mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparisons indicated that the relative efficiency defined here can be either greater or less than the measure in the literature, depending on conditions. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus underpins a sample size approach that flexibly complements existing methods.
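
The noncentrality-based relative efficiency described above is not reproduced here, but a minimal sketch of the standard design-effect calculation it generalizes can make the idea concrete. In the sketch below (a Python illustration under textbook assumptions, not the authors' derivation), each cluster of size m contributes m / (1 + (m - 1)·icc) effective observations per arm; all function names and numbers are illustrative.

```python
import numpy as np

def effective_n(cluster_sizes, icc):
    """Effective number of independent observations in one arm,
    using the design-effect correction 1 + (m - 1) * icc."""
    m = np.asarray(cluster_sizes, dtype=float)
    return float(np.sum(m / (1.0 + (m - 1.0) * icc)))

def relative_efficiency(cluster_sizes, icc):
    """Efficiency of the given (unequal) cluster sizes relative to equal
    cluster sizes with the same number of clusters and total sample size."""
    m = np.asarray(cluster_sizes, dtype=float)
    equal = effective_n(np.full(m.size, m.mean()), icc)
    return effective_n(m, icc) / equal

# Illustrative arm: 10 clusters with highly variable sizes, ICC = 0.05
sizes = [10, 12, 15, 20, 25, 30, 40, 55, 70, 90]
print(round(relative_efficiency(sizes, icc=0.05), 3))  # < 1: efficiency loss
```

Because m / (1 + (m - 1)·icc) is concave in m, variable cluster sizes always give a ratio below 1; the article's approach compensates for this loss by raising the required mean cluster size while holding the number of clusters fixed.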

  20. Weighing the potential effectiveness of various treatments for sleep bruxism.

    PubMed

    Huynh, Nelly; Manzini, Christiane; Rompré, Pierre H; Lavigne, Gilles J

    2007-10-01

    Sleep bruxism may lead to a variety of problems, but its pathophysiology has not been completely elucidated. As such, there is no definitive treatment, but certain preventive measures and/or drugs may be used in acute cases, particularly those involving pain. This article is intended to guide clinician scientists to the treatment most appropriate for future clinical studies. To determine the best current treatment, 2 measures were used to compare the results of 10 clinical studies on sleep bruxism, 3 involving oral devices and 7 involving pharmacologic therapy. The first measure, the number needed to treat (NNT), allows several randomized clinical studies to be compared and a general conclusion to be drawn. The second measure, effect size, allows evaluation of the impact of treatment relative to a placebo using different studies of similar design. Taking into account the NNT, the effect size and the power of each study, it can be concluded that the following treatments reduce sleep bruxism: mandibular advancement device, clonidine and occlusal splint. However, the first 2 of these have been linked to adverse effects. The occlusal splint is therefore the treatment of choice, as it reduces grinding noise and protects the teeth from premature wear with no reported adverse effects. The NNT could not be calculated for an alternative pharmacologic treatment, short-term clonazepam therapy, which had a large effect size and reduced the average bruxism index. However, the risk of dependency limits its use over long periods. Assessment of efficacy and safety of the most promising treatments will require studies with larger sample sizes over longer periods.

  1. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  2. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected values of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number, and it provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, utilize the Binomial or Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes, primarily because of the difficulty of calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations give an upper bound on the Hypergeometric sample size, and the difference can be significant. For example, a lot of 400 devices with a nonconformance rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a Binomial sampling plan but only 273 under a Hypergeometric sampling plan, a savings of 127 units. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero); since it is only a demonstration program, sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. The program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
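
The lot-of-400 example in this record can be reproduced in a few lines. The sketch below is an independent Python illustration of a zero-acceptance hypergeometric plan, not the HYPERSAMP code: it searches for the smallest sample size at which observing zero nonconforming units rules out the assumed number of defectives at the stated confidence.

```python
from math import comb

def min_sample_size(lot_size, defectives, confidence):
    """Smallest n such that, if the lot really contained `defectives`
    nonconforming units, a sample of n with zero observed defectives
    would occur with probability at most 1 - confidence.
    P(zero defectives in sample) = C(N - D, n) / C(N, n)."""
    for n in range(1, lot_size + 1):
        p_zero = comb(lot_size - defectives, n) / comb(lot_size, n)
        if p_zero <= 1.0 - confidence:
            return n
    return lot_size

# The record's example: N = 400, 1% nonconforming (D = 4), 99% confidence
print(min_sample_size(400, 4, 0.99))  # -> 273
```

For comparison, the binomial approximation requires 0.99^n <= 0.01, i.e. n >= 459, which exceeds the lot size and therefore forces 100% inspection of the 400 units, matching the figures quoted above.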

  3. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    PubMed

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  4. Size effects on magnetic actuation in Ni-Mn-Ga shape-memory alloys.

    PubMed

    Dunand, David C; Müllner, Peter

    2011-01-11

    The off-stoichiometric Ni(2)MnGa Heusler alloy is a magnetic shape-memory alloy capable of reversible magnetic-field-induced strains (MFIS). These are generated by twin boundaries moving under the influence of an internal stress produced by a magnetic field through the magnetocrystalline anisotropy. While MFIS are very large (up to 10%) for monocrystalline Ni-Mn-Ga, they are near zero (<0.01%) in fine-grained polycrystals due to incompatibilities during twinning of neighboring grains and the resulting internal geometrical constraints. By growing the grains and/or shrinking the sample, the grain size becomes comparable to one or more characteristic sample sizes (film thickness, wire or strut diameter, ribbon width, particle diameter, etc), and the grains become surrounded by free space. This reduces the incompatibilities between neighboring grains and can favor twinning and thus increase the MFIS. This approach was validated recently with very large MFIS (0.2-8%) measured in Ni-Mn-Ga fibers and foams with bamboo grains with dimensions similar to the fiber or strut diameters and in thin plates where grain diameters are comparable to plate thickness. Here, we review processing, micro- and macrostructure, and magneto-mechanical properties of (i) Ni-Mn-Ga powders, fibers, ribbons and films with one or more small dimension, which are amenable to the growth of bamboo grains leading to large MFIS, and (ii) "constructs" from these structural elements (e.g., mats, laminates, textiles, foams and composites). Various strategies are proposed to accentuate this geometric effect which enables large MFIS in polycrystalline Ni-Mn-Ga by matching grain and sample sizes.

  5. HIV prevention interventions to reduce sexual risk for African Americans: the influence of community-level stigma and psychological processes.

    PubMed

    Reid, Allecia E; Dovidio, John F; Ballester, Estrellita; Johnson, Blair T

    2014-02-01

    Interventions to improve public health may benefit from consideration of how environmental contexts can facilitate or hinder their success. We examined the extent to which efficacy of interventions to improve African Americans' condom use practices was moderated by two indicators of structural stigma: Whites' attitudes toward African Americans and residential segregation in the communities where interventions occurred. A previously published meta-analytic database was re-analyzed to examine the interplay of community-level stigma with the psychological processes implied by intervention content in influencing intervention efficacy. All studies were conducted in the United States and included samples that were at least 50% African American. Whites' attitudes were drawn from the American National Election Studies, which collects data from nationally representative samples. Residential segregation was drawn from published reports. Results showed independent effects of Whites' attitudes and residential segregation on condom use effect sizes. Interventions were most successful when Whites' attitudes were more positive or when residential segregation was low. These two structural factors interacted: Interventions improved condom use only when communities had both relatively positive attitudes toward African Americans and lower levels of segregation. The effect of Whites' attitudes was more pronounced at longer follow-up intervals and for younger samples and those samples with more African Americans. Tailoring content to participants' values and needs, which may reduce African Americans' mistrust of intervention providers, buffered against the negative influence of Whites' attitudes on condom use. The structural factors uniquely accounted for variance in condom use effect sizes over and above intervention-level features and community-level education and poverty. Results highlight the interplay of social identity and environment in perpetuating intergroup disparities. Potential mechanisms for these effects are discussed along with public health implications. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
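
One way to see why the reported median retest sample of 64 is too small is to compute an approximate confidence interval for a correlation-type reliability coefficient via the Fisher z transformation. This is a generic sketch, not the author's analysis; the reliability value of .80 is an assumed illustration.

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, conf=0.95):
    """Approximate CI for a correlation-type reliability coefficient
    (e.g., retest reliability) via the Fisher z transformation."""
    z = np.arctanh(r)
    half_width = norm.ppf(0.5 + conf / 2.0) / np.sqrt(n - 3)
    return float(np.tanh(z - half_width)), float(np.tanh(z + half_width))

print(fisher_ci(0.80, 64))    # median retest N: roughly (0.69, 0.87), wide
print(fisher_ci(0.80, 260))   # the overall mean N gives a tighter interval
```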

  8. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  9. Reduction and characterization of bioaerosols in a wastewater treatment station via ventilation.

    PubMed

    Guo, Xuesong; Wu, Pianpian; Ding, Wenjie; Zhang, Weiyi; Li, Lin

    2014-08-01

    Bioaerosols from wastewater treatment processes are a significant subgroup of atmospheric aerosols. In the present study, airborne microorganisms generated from a wastewater treatment station (WWTS) that uses an oxidation ditch process were diminished by ventilation. Conventional sampling and detection methods combined with cloning/sequencing techniques were applied to determine the groups, concentrations, size distributions, and species diversity of airborne microorganisms before and after ventilation. There were 3021 ± 537 CFU/m³ of airborne bacteria and 926 ± 132 CFU/m³ of airborne fungi present in the WWTS bioaerosol. Results showed that ventilation significantly reduced airborne microorganisms relative to the untreated WWTS air. Over 60% of airborne bacteria and fungi could be removed after 4 hr of air exchange. The highest removal (92.1% for airborne bacteria and 89.1% for fungi) was achieved for 0.65-1.1 μm sized particles. Bioaerosol particles over 4.7 μm were also reduced effectively. Large particles tended to be lost by gravitational settling and small particles were generally carried away, which led to the relatively easy reduction of bioaerosol particles 0.65-1.1 μm and over 4.7 μm in size. An obvious variation occurred in the structure of the bacterial communities when ventilation was applied to control the airborne microorganisms in enclosed spaces. Copyright © 2014. Published by Elsevier B.V.

  10. Reduction of the capillary water absorption of foamed concrete by using the porous aggregate

    NASA Astrophysics Data System (ADS)

    Namsone, E.; Sahmenko, G.; Namsone, E.; Korjakins, A.

    2017-10-01

    The article reports on research into reducing the capillary water absorption of foamed concrete (FC) by using porous aggregates such as expanded glass (EG) granules and cenospheres (CS). The EG granular aggregate is produced from recycled glass and blowing agents melted down at high temperature, yielding a unique structure in which air is kept sealed inside the pellet. Using the porous aggregate in the preparation of the FC samples provides an opportunity to improve some physical and mechanical properties of the FC, classifying it as a high-performance product. In this research the FC samples were produced by adding the EG granules and the CS, and the capillary water absorption of the hardened samples was verified. The pore size distribution was determined by microscope. This is a very important characteristic, particularly in cold-climate regions where the temperature often falls below zero degrees. It is necessary to prevent the formation of micro-sized pores in the final structure of the material, as this reduces its water absorption capacity; in addition, at sub-zero temperatures, water inside these micro-sized pores can enlarge them by exerting stress on the pore walls during freezing. Research on capillary water absorption kinetics can therefore be useful for predicting FC durability.

  11. Application of Diffusion Tensor Imaging Parameters to Detect Change in Longitudinal Studies in Cerebral Small Vessel Disease.

    PubMed

    Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen

    2016-01-01

    Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies, and sensitive surrogate markers are therefore needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard, but the sensitivity of the various parameters that can be derived from DTI is unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate the sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We investigated 99 patients with symptomatic SVD, defined as a clinical lacunar syndrome with MRI confirmation of a corresponding infarct together with confluent white matter hyperintensities, over a 3-year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over the three-year follow-up we observed a decline in fractional anisotropy and an increase in diffusivity in white matter tissue, and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker of SVD progression, as it had the smallest sample size estimate. This suggests disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI's potential as a surrogate marker for SVD.
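
The mixed-model sample size estimates themselves cannot be recomputed from the abstract, but the generic two-arm calculation that such estimates are typically built on can be sketched as follows; the annualized change, its standard deviation, and the assumed 30% treatment effect are placeholders, not values from this study.

```python
from scipy.stats import norm

def n_per_arm(annual_change, sd_change, reduction=0.30, alpha=0.05, power=0.80):
    """Patients per arm to detect a `reduction` in the mean annualized
    change of a marker, assuming normally distributed change scores."""
    delta = reduction * abs(annual_change)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(2 * (sd_change * z / delta) ** 2) + 1

# Hypothetical marker declining by 0.02 per year with change SD 0.05:
print(n_per_arm(0.02, 0.05))
```

A marker whose change SD is small relative to its mean change yields a small required n, which is the sense in which mean diffusivity peak height was the "most sensitive" marker above.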

  12. Sample size estimation for alternating logistic regressions analysis of multilevel randomized community trials of under-age drinking.

    PubMed

    Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark

    2012-07-01

    Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than that of others residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure.

  13. Aesthetic phenomena as supernormal stimuli: the case of eye, lip, and lower-face size and roundness in artistic portraits.

    PubMed

    Costa, Marco; Corazza, Leonardo

    2006-01-01

    In the first study, eye and lip size and roundness, and lower-face roundness were compared between a control sample of 289 photographic portraits and an experimental sample of 776 artistic portraits covering the whole period of the history of art. Results showed that eye roundness, lip roundness, eye height, eye width, and lip height were significantly enhanced in artistic portraits compared to photographic ones. Lip width and lower-face roundness, on the contrary, were less prominent in artistic than in photographic portraits. In a second study, forty-two art academy students were requested to draw two self-portraits, one with a mirror and one without (from memory). Eye, lip, and lower-face roundness in artistic self-portraits was compared to the same features derived from photographic portraits of the participants. The results obtained confirmed those found in the first study. Eye and lip size and roundness were greater in artistic self-portraits, while lower-face roundness was significantly reduced. The same degree of modification was found also when a mirror was available to the subjects. In a third study the effect of lower-face roundness on the perception of attractiveness was assessed: fifty-three participants had to adjust the face width of 24 photographic portraits in order to achieve the highest level of attractiveness. Participants contracted the face width by a mean value of 5.26%, showing a preference for a reduced lower-face roundness. All results are discussed in terms of the importance of the 'supernormalisation' process as a means of assigning aesthetic value to perceptual stimuli.

  14. Evolution of eye size and shape in primates.

    PubMed

    Ross, Callum F; Kirk, E Christopher

    2007-03-01

    Strepsirrhine and haplorhine primates exhibit highly derived features of the visual system that distinguish them from most other mammals. Comparative data link the evolution of these visual specializations to the sequential acquisition of nocturnal visual predation in the primate stem lineage and diurnal visual predation in the anthropoid stem lineage. However, it is unclear to what extent these shifts in primate visual ecology were accompanied by changes in eye size and shape. Here we investigate the evolution of primate eye morphology using a comparative study of a large sample of mammalian eyes. Our analysis shows that primates differ from other mammals in having large eyes relative to body size and that anthropoids exhibit unusually small corneas relative to eye size and body size. The large eyes of basal primates probably evolved to improve visual acuity while maintaining high sensitivity in a nocturnal context. The reduced corneal sizes of anthropoids reflect reductions in the size of the dioptric apparatus as a means of increasing posterior nodal distance to improve visual acuity. These data support the conclusion that the origin of anthropoids was associated with a change in eye shape to improve visual acuity in the context of a diurnal predatory habitus.

  15. Fenton-treated functionalized diamond nanoparticles as gene delivery system.

    PubMed

    Martín, Roberto; Alvaro, Mercedes; Herance, José Raúl; García, Hermenegildo

    2010-01-26

    When raw diamond nanoparticles (Dnp, 7 nm average particle size) obtained from detonation are submitted to a harsh Fenton treatment, the resulting material becomes free of amorphous soot matter, and the process maintains the crystallinity, reduces the particle size (4 nm average), increases the surface OH population, and increases water solubility. All these changes are beneficial for subsequent covalent functionalization of Dnp and for the ability of Dnp to cross cell membranes. Fenton-treated Dnps have been functionalized with thionine, and the resulting sample has been observed in HeLa cell nuclei. A triethylammonium-functionalized Dnp pairs electrostatically with a plasmid carrying the green fluorescent protein gene and acts as a gene delivery system, permitting the plasmid to cross the HeLa cell membrane, something that does not occur for the plasmid alone without the assistance of the polycationic Dnp.

  16. Gravitational Effects on Closed-Cellular-Foam Microstructure

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas

    1996-01-01

    Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.

  18. High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.

    PubMed

    Ren, Juan; Zou, Qingze

    2014-07-01

    In this paper, an adaptive contact-mode imaging approach is proposed to replace traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of traditional contact-mode imaging is largely limited by the need to maintain precise tracking of the sample topography over the entire imaged surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed to maintain stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated by imaging a calibration sample of square pitches at both high speeds (e.g., scan rates of 75 Hz and 130 Hz) and large sizes (e.g., scan sizes of 30 μm and 80 μm). The experimental results show that, compared to traditional constant-force contact-mode imaging, the imaging speed can be increased by more than 30-fold (with a scanning speed of 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.

  19. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  20. Fragmentation and flow of grazed coastal Bermudagrass through the digestive tract of cattle.

    PubMed

    Pond, K R; Ellis, W C; Lascano, C E; Akin, D E

    1987-08-01

    Samples of forage fragments were obtained from the upper (RUS) and lower (RLS) strata of the reticulorumen and feces (F) of four Brahman X Jersey steers grazing Coastal bermudagrass (CB) of two maturities with dry matter digestibilities (DMD) of 54.8 and 64.3%. Forage fragments were separated by particle size and evaluated histochemically for tissue type and fragmentation pattern. Fragmentation pattern was similar to that previously observed due to ingestive mastication. There was longitudinal separation of vascular bundles (VB) and severing at VB ends. Microscopically, similar size fragments from RUS were indistinguishable from those of RLS. The major difference between RUS and RLS was the distribution of different size particles. Larger particles were associated with the RUS in cattle consuming immature and mature CB. More large particles were associated with mature compared with the immature CB in the RUS and RLS. The distribution of different size particles in the F was similar for both maturities, suggesting that similar particle size reduction was required regardless of maturity. Smaller particles in the rumen and F appeared to contain more lignin (determined histochemically) and were composed of indigestible fragments of cuticle and lignified vascular tissue. Cattle grazing mature CB had higher ruminal fills (2.40 vs 2.02 kg dry matter/100 kg body weight), reduced rates of passage and lower voluntary intake (2.50 vs 3.14 kg DM/100 kg body weight). Lower intake of mature CB may have resulted from a reduced rate of particle size reduction. Similarities in fragmentation patterns due to ingestive and ruminative mastication were interpreted to indicate that mastication was responsible for most of the particle size reduction of CB and that mastication facilitated digestion of potentially digestible tissues.

  1. Splenic release of platelets contributes to increased circulating platelet size and inflammation after myocardial infarction.

    PubMed

    Gao, Xiao-Ming; Moore, Xiao-Lei; Liu, Yang; Wang, Xin-Yu; Han, Li-Ping; Su, Yidan; Tsai, Alan; Xu, Qi; Zhang, Ming; Lambert, Gavin W; Kiriazis, Helen; Gao, Wei; Dart, Anthony M; Du, Xiao-Jun

    2016-07-01

    Acute myocardial infarction (AMI) is characterized by a rapid increase in circulating platelet size, but the mechanism for this is unclear. Large platelets are hyperactive and associated with adverse clinical outcomes. We determined mean platelet volume (MPV) and platelet-monocyte conjugation (PMC) using blood samples from patients, and blood and the spleen from mice with AMI. We further measured changes in platelet size, PMC, cardiac and splenic contents of platelets, and leucocyte infiltration into the mouse heart. In AMI patients, circulating MPV and PMC increased at 1-3 h post-MI, and MPV returned to reference levels within 24 h after admission. In mice with MI, increases in platelet size and PMC became evident within 12 h and were sustained up to 72 h. Splenic platelets are bigger than circulating platelets in normal or infarct mice. At 24 h post-MI, splenic platelet storage was halved whereas cardiac platelets increased 4-fold. Splenectomy attenuated all changes observed in the blood, reduced leucocyte and platelet accumulation in the infarct myocardium, limited infarct size, and alleviated cardiac dilatation and dysfunction. AMI induced elevated circulating levels of adenosine diphosphate and catecholamines in both humans and mice, which may trigger splenic platelet release. Pharmacological inhibition of the angiotensin-converting enzyme, β1-adrenergic receptor, or platelet P2Y12 receptor reduced platelet abundance in the murine infarct myocardium, albeit with diverse effects on platelet size and PMC. In conclusion, AMI evokes release of splenic platelets, which contributes to the increase in platelet size and PMC and facilitates myocardial accumulation of platelets and leucocytes, thereby promoting post-infarct inflammation. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine the elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).

  3. Macrophage Migration Inhibitory Factor for the Early Prediction of Infarct Size

    PubMed Central

    Chan, William; White, David A.; Wang, Xin‐Yu; Bai, Ru‐Feng; Liu, Yang; Yu, Hai‐Yi; Zhang, You‐Yi; Fan, Fenling; Schneider, Hans G.; Duffy, Stephen J.; Taylor, Andrew J.; Du, Xiao‐Jun; Gao, Wei; Gao, Xiao‐Ming; Dart, Anthony M.

    2013-01-01

    Background Early diagnosis and knowledge of infarct size are critical for the management of acute myocardial infarction (MI). We evaluated whether an early elevated plasma level of macrophage migration inhibitory factor (MIF) is useful for these purposes in patients with ST-elevation MI (STEMI). Methods and Results We first studied MIF levels in plasma and the myocardium in mice and determined infarct size. MI for 15 or 60 minutes resulted in a 2.5-fold increase over control values in plasma MIF levels, while MIF content in the ischemic myocardium was reduced by 50%, and plasma MIF levels correlated with myocardium-at-risk and infarct size at both time-points (P<0.01). In patients with STEMI, we obtained admission plasma samples and measured MIF, conventional troponins (TnI, TnT), high sensitive TnI (hsTnI), creatine kinase (CK), CK-MB, and myoglobin. Infarct size was assessed by cardiac magnetic resonance (CMR) imaging. Patients with chronic stable angina and healthy volunteers were studied as controls. Of 374 STEMI patients, 68% had admission MIF levels above the highest value in healthy controls (>41.6 ng/mL), a proportion similar to hsTnI (75%) and TnI (50%), but greater than the other biomarkers studied (20% to 31%, all P<0.05 versus MIF). Only admission MIF levels correlated with CMR-derived infarct size, ventricular volumes, and ejection fraction (n=42, r=0.46 to 0.77, all P<0.01) at 3 days and 3 months post-MI. Conclusion Plasma MIF levels are elevated in a high proportion of STEMI patients at the first obtainable sample, and these levels are predictive of final infarct size and the extent of cardiac remodeling. PMID:24096574

  4. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and the recalculated sample size was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and for journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
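
The replication check described here can be scripted directly. The sketch below uses the standard normal-approximation formula for a two-group comparison of means; it is one of several formulas the reviewed trials may have used, and the example numbers are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means with standardized effect size d."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / effect_size) ** 2)

def pct_difference(reported_n, d, alpha=0.05, power=0.80):
    """Percentage difference between a reported and a recalculated n."""
    calculated = n_per_group(d, alpha, power)
    return 100.0 * (reported_n - calculated) / calculated

# e.g., a paper reporting n = 64 per group for d = 0.5, alpha = .05, power = .80
print(n_per_group(0.5), round(pct_difference(64, 0.5), 1))  # 63, 1.6
```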

  5. Cell-free DNA fragment-size distribution analysis for non-invasive prenatal CNV prediction.

    PubMed

    Arbabi, Aryan; Rampášek, Ladislav; Brudno, Michael

    2016-06-01

    Non-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and the paucity of SNPs in a genome. We have developed an alternative framework for identifying sub-chromosomal copy number variations (CNVs) in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with a fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X, and simulated samples with CNVs based on it. Our method had perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%. Our source code is available on GitHub at https://github.com/compbio-UofT/FSDA Contact: brudno@cs.toronto.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
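
The published method is distributed at the GitHub link above. Purely as an illustration of the signal it exploits (not the authors' statistical model): fetal-origin fragments are shorter, so a fetal duplication raises, and a deletion lowers, the fraction of short fragments in the affected windows. A toy scan over that fraction might look like the following, where the 150 bp cutoff and all names are assumptions.

```python
import numpy as np

def short_fraction_zscores(fragment_lengths, window_ids, cutoff=150):
    """Per-window z-scores of the fraction of short (< cutoff bp) fragments;
    windows whose fetal contribution deviates from the genome-wide baseline
    shift this fraction up (duplication) or down (deletion)."""
    lengths = np.asarray(fragment_lengths)
    windows = np.asarray(window_ids)
    is_short = lengths < cutoff
    ids = np.unique(windows)
    fractions = np.array([is_short[windows == w].mean() for w in ids])
    z = (fractions - fractions.mean()) / fractions.std(ddof=1)
    return dict(zip(ids.tolist(), z.tolist()))
```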

  6. Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.

    PubMed

    de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff

    2016-09-01

    The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
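
The roughly 20% variability difference quoted above is easy to check by simulation. This sketch covers only the bivariate-normal case; the sample sizes, population correlation, and replication count are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sd_of_r(n, rho=0.5, reps=2000):
    """Empirical standard deviations of Pearson r and Spearman r over
    repeated bivariate-normal samples of size n with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    r_p, r_s = [], []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        r_p.append(stats.pearsonr(x, y)[0])
        r_s.append(stats.spearmanr(x, y)[0])
    return np.std(r_p, ddof=1), np.std(r_s, ddof=1)

for n in (10, 50, 200):
    print(n, sd_of_r(n))  # under normality r_s comes out somewhat more variable
```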

  7. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumptions that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, in which the optimal usage of the storage is determined simultaneously with the optimal generation outputs and the size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to make optimal use of the available storage.
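
The analytical form of the probabilistic constraints is not given in this record, but the standard reformulation such derivations typically rely on is short enough to sketch. Assuming (as an illustration, not a claim about this paper) a Gaussian forecast error, a chance constraint tightens to a deterministic one with an explicit margin:

```python
from scipy.stats import norm

def deterministic_margin(sigma, epsilon):
    """For Gaussian forecast error e ~ N(0, sigma^2), the chance constraint
    P(supply + e >= demand) >= 1 - epsilon is equivalent to the deterministic
    constraint supply >= demand + sigma * z_{1-epsilon}."""
    return sigma * norm.ppf(1.0 - epsilon)

# e.g., a 10 MW forecast standard deviation and a 5% violation probability
print(deterministic_margin(10.0, 0.05))  # about 16.4 MW of extra margin
```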

  8. Primary and Aggregate Size Distributions of PM in Tail Pipe Emissions from Diesel Engines

    NASA Astrophysics Data System (ADS)

    Arai, Masataka; Amagai, Kenji; Nakaji, Takayuki; Hayashi, Shinji

    Particulate matter (PM) emissions exhausted from diesel engines should be reduced to maintain a clean air environment. PM emissions were considered to consist of coarse and aggregate particles and of nuclei-mode particles with diameters less than 50 nm. However, the detailed characteristics of these particles were still unknown, and they are needed for physically accurate measurement and more effective reduction of exhaust PM emissions. In this study, the size distributions of solid particles in PM emissions are reported. PM in the tail-pipe emissions was sampled from three types of diesel engines. The sampled PM was chemically treated to separate the solid carbon fraction from other fractions such as the soluble organic fraction (SOF). Electron microscopic and optical-manual size measurement procedures were used to determine the size distribution of primary particles, which were formed through a coagulation process from nuclei-mode particles and make up the aggregate particles. The centrifugal sedimentation method was applied to measure the Stokes diameter of dry soot. Aerodynamic diameters of nano and aggregate particles were measured with a scanning mobility particle sizer (SMPS). The peak aggregate diameters detected by SMPS fell in the same size regime as the Stokes diameter of dry soot. Both the primary and Stokes diameters of dry soot decreased with increasing engine speed and excess air ratio. The effects of fuel properties and engine type on primary and aggregate particle diameters are also discussed.

  9. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
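
A minimal version of the simulation algorithm described here, assuming normally distributed outcomes, a blinded one-sample variance estimate at the interim, and the usual normal-approximation re-estimation formula; the pilot size, effect size, and variance below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def blinded_ssr_trial(true_delta, planned_delta, sigma,
                      n_pilot=40, alpha=0.05, power=0.90):
    """One simulated trial: blinded variance re-estimation at the interim,
    then a standard two-sample t-test at the final analysis."""
    half = n_pilot // 2
    x = rng.normal(0.0, sigma, half)
    y = rng.normal(true_delta, sigma, half)
    # Blinded (one-sample) variance estimator: pool both arms, ignore labels
    s2 = np.var(np.concatenate([x, y]), ddof=1)
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_arm = max(half, int(np.ceil(2 * s2 * (z / planned_delta) ** 2)))
    # Recruit the remainder and run the final two-sample t-test
    x = np.concatenate([x, rng.normal(0.0, sigma, n_arm - half)])
    y = np.concatenate([y, rng.normal(true_delta, sigma, n_arm - half)])
    return stats.ttest_ind(x, y).pvalue < alpha

# Empirical power; set true_delta=0.0 to estimate the empirical type I error
print(np.mean([blinded_ssr_trial(0.5, 0.5, 1.0) for _ in range(2000)]))
```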

  10. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on the basis of theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only and those attending a lecture combined with a smartphone application for calculating sample sizes, explored factors affecting the post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given to the control group, while lectures using a smartphone application were given to the intervention group. Participants in the intervention group learned sample size calculation better (2.7 points out of a maximum of 10, 95% CI: 2.4 to 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 to 1.8). Participants conducting research projects had a higher post-test score than those without plans to conduct research (0.9 points, 95% CI: 0.5 to 1.4). The majority of participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  11. [Ecological Correlates of Cardiovascular Disease Risk in Korean Blue-collar Workers: A Multi-level Study].

    PubMed

    Hwang, Won Ju; Park, Yunhee

    2015-12-01

    The purpose of this study was to investigate individual- and organizational-level cardiovascular disease (CVD) risk factors associated with CVD risk in Korean blue-collar workers employed in small-sized companies. Self-report questionnaires and blood samples for lipid and glucose were collected from 492 workers in 31 small-sized companies in Korea. Multilevel modeling was conducted to estimate the effects of related factors at the individual and organizational levels. Multilevel regression analysis showed that workers in workplaces with a cafeteria had a 1.81 times higher CVD risk after adjusting for factors at the individual level (p=.022). The explanatory power of organizational-level variables for variance in CVD risk was 17.1%. The results of this study indicate that differences in CVD risk were related to organizational factors. It is necessary to consider not only individual factors but also organizational factors when planning a CVD risk reduction program. The risk associated with having a cafeteria in the workplace can be reduced by improving the CVD-related risk environment, so an organizational-level intervention approach should be made available to reduce the CVD risk of workers in small-sized companies in Korea.

  12. Magnetic properties of Mn0.1Mg0.2TM0.7Fe2O4 (TM = Zn, Co, or Ni) prepared by hydrothermal processes: The effects of crystal size and chemical composition

    NASA Astrophysics Data System (ADS)

    Nhlapo, T. A.; Msomi, J. Z.; Moyo, T.

    2018-02-01

    Nano-crystalline Zn-, Co-, and Ni-substituted Mn-Mg ferrites were prepared by a hydrothermal process and annealed at 1100 °C. Annealing conditions are critical to the crystalline phase. TEM and XRD data reveal particle sizes between 8 nm and 15 nm for the as-prepared fine powders, which increase to about 73 nm after sintering at 1100 °C. Mössbauer spectra show well resolved magnetic splitting in bulk samples. The as-prepared fine powders show weak hyperfine splitting and broad central doublets associated with fine particles. Magnetization data reveal a high coercive field of about 945 Oe at about 300 K in the Co-based nanosized oxide, which reduces to about 360 Oe after thermal annealing at 1100 °C. The magnetization curves of the Zn- and Ni-based samples show much lower coercive fields, indicative of superparamagnetic nanoparticles. The crystallite size and chemical composition have significant effects on the properties of the Mn0.1Mg0.2(Zn,Co,Ni)0.7Fe2O4 investigated.

  13. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  14. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
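
    A minimal sketch of the Monte Carlo approach for a regression model, assuming a simple linear model with a single predictor; the effect size, search grid and simulation count are arbitrary choices for illustration, not values from the article.

        import numpy as np
        from scipy import stats

        def empirical_power(n, beta=0.3, sims=2000, alpha=0.05, seed=1):
            """Estimate power for the slope test in simple linear regression
            by simulating data from the assumed model and counting rejections."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(sims):
                x = rng.normal(size=n)
                y = beta * x + rng.normal(size=n)     # assumed truth: y = beta*x + e
                hits += stats.linregress(x, y).pvalue < alpha
            return hits / sims

        # Smallest n on a coarse grid reaching 80% power for beta = 0.3
        for n in range(40, 201, 10):
            if empirical_power(n) >= 0.80:
                print(n)
                break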

  15. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, together with nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
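
    The abstract does not reproduce the authors' formula (which rests on the common correlation model and a goodness-of-fit statistic), so the sketch below shows only the generic normal-approximation sample size for estimating a simple proportion of agreement to within a margin d, which captures the idea of sizing on agreement rather than on kappa.

        import math
        from scipy.stats import norm

        def n_for_agreement(p0, d, alpha=0.05):
            """Pairs of ratings needed to estimate a proportion of agreement p0
            to within +/- d with (1 - alpha) confidence (normal approximation)."""
            z = norm.ppf(1 - alpha / 2)
            return math.ceil(z**2 * p0 * (1 - p0) / d**2)

        print(n_for_agreement(0.85, 0.05))   # e.g. 85% expected agreement, 5% margin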

  16. Nanocarbon coating on the basis of partially reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Bocharov, G. S.; Budaev, V. P.; Eletskii, A. V.; Fedorovich, S. D.

    2017-11-01

    An approach has been developed for producing graphene by the thermal reduction of graphene oxide (GO). GO was synthesized using the modified Hummers method with sodium nitrate and concentrated sulfuric acid. A paper-like material 40 - 60 μm in thickness and 1.2 g/cm3 in density was formed on a filter after deposition from dispersion. The material was cut into samples of about 15×25 mm2 in size, which were subjected to thermal treatment at various temperatures between 100 and 800 °C. This resulted in a set of GO samples reduced to various degrees. The degree of reduction was determined from conductivity measurements. In addition, the evolution of sample density with increasing annealing temperature was studied. Analysis of the X-ray photoelectron spectra of partially reduced GO permitted determination of the changing chemical composition of the material during thermal treatment. Analysis of Raman spectra of the GO samples indicates a rather high degree of disorder in the material. The possibility of using the material produced as a nanocarbon coating in experiments on the interaction of highly intense liquid flows with a wall surface is discussed.

  17. Microstructure changes of the extruded high-amylose bionanocomposites as affected by moisture content via synchrotron radiation studies

    NASA Astrophysics Data System (ADS)

    Liu, Huihua; Chaudhary, Deeptangshu

    2014-08-01

    The crystalline domain changes and lamellar structure of a sorbitol-plasticized starch nanocomposite were investigated via synchrotron radiation. Strong amylose-sorbitol interactions were found, resulting in reduced inter-helix spacing of the starch polymer. The achievable d-spacing of the nanoclay was confirmed to be correlated with the moisture content (mc) of the nanocomposites. SAXS diffraction patterns changed from circular (high-mc samples) to elliptical (low-mc samples), indicating the formation of a long periodic structure and increased heterogeneity of the electron density within the samples. Two different domains, sized at around 90 Å and 350 Å, were found for the low-mc samples. However, only the ~90 Å domain was observed in high-mc samples. Formation of the larger domain is attributed to retrogradation behaviour in the absence of water molecules. Meanwhile, the nucleation effect of the nanoclay is another factor leading to the emergence of the larger crystalline domain.

  18. Selected engineering properties and applications of EPS geofoam

    NASA Astrophysics Data System (ADS)

    Elragi, Ahmed Fouad

    Expanded polystyrene (EPS) geofoam is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of soil. It has good thermal insulation properties, with stiffness and compression strength comparable to medium clay. It is used to reduce settlement below embankments, damp sound and vibration, reduce lateral pressure on substructures, reduce stresses on rigid buried conduits, and in related applications. This study starts with an overview of EPS geofoam. EPS manufacturing processes are described, followed by a review of engineering properties established in previous research. Standards and design manuals applicable to EPS are presented. Selected EPS geofoam engineering applications are discussed with examples. State-of-the-art experimental work was done on different sizes of EPS specimens under different loading rates for a better understanding of the behavior of the material. The effects of creep, sample size, strain rate and cyclic loading on the stress-strain response are studied. Equations for the initial modulus and the strength of the material under compression at different strain rates are presented. The initial modulus and Poisson's ratio are discussed in detail. The sample size effect on creep behavior is examined. Three EPS projects are presented in this study. The creep behavior of the largest EPS geofoam embankment fill is shown. Results from laboratory tests, mathematical modeling and field records are compared to each other. Field records of a geofoam-stabilized slope are compared to finite difference analysis results. Lateral stress reduction on an EPS backfill retaining structure is analyzed. The study ends with a discussion of two promising properties of EPS geofoam: the damping ability and the compressibility of the material. Finite element analysis, finite difference analysis and lab results are included in this discussion. The discussion, together with the rest of the study, points toward the main conclusion that EPS geofoam is a material of promise for various civil engineering applications.

  19. A comprehensive algorithm for determining whether a run-in strategy will be a cost-effective design modification in a randomized clinical trial.

    PubMed

    Schechtman, K B; Gordon, M E

    1993-01-30

    In randomized clinical trials, poor compliance and treatment intolerance lead to reduced between-group differences, increased sample size requirements, and increased cost. A run-in strategy is intended to reduce these problems. In this paper, we develop a comprehensive set of measures specifically sensitive to the effect of a run-in on cost and sample size requirements, both before and after randomization. Using these measures, we describe a step-by-step algorithm through which one can estimate the cost-effectiveness of a potential run-in. Because the cost-effectiveness of a run-in is partly mediated by its effect on sample size, we begin by discussing the likely impact of a planned run-in on the required number of randomized, eligible, and screened subjects. Run-in strategies are most likely to be cost-effective when: (1) per patient costs during the post-randomization as compared to the screening period are high; (2) poor compliance is associated with a substantial reduction in response to treatment; (3) the number of screened patients needed to identify a single eligible patient is small; (4) the run-in is inexpensive; (5) for most patients, the run-in compliance status is maintained following randomization and, most importantly, (6) many subjects excluded by the run-in are treatment intolerant or non-compliant to the extent that we expect little or no treatment response. Our analysis suggests that conditions for the cost-effectiveness of run-in strategies are stringent. In particular, if the only purpose of a run-in is to exclude ordinary partial compliers, the run-in will frequently add to the cost of the trial. Often, the cost-effectiveness of a run-in requires that one can identify and exclude a substantial number of treatment intolerant or otherwise unresponsive subjects.
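
    The core arithmetic behind this trade-off can be sketched as follows: if only a fraction c of patients comply and non-compliers show no treatment response, the intention-to-treat effect is diluted to c times the true difference, so the required sample size inflates by roughly 1/c²; a perfect run-in would recover that inflation at the cost of the screening period. This is a sketch of the dilution logic the paper builds on, not the paper's algorithm, and the numbers are illustrative.

        from scipy.stats import norm

        def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
            """Normal-approximation sample size for a two-arm comparison of means."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * (sigma * z / delta) ** 2

        delta, sigma, c = 1.0, 2.0, 0.8    # c: fraction of compliers (illustrative)
        n_ideal = n_per_arm(delta, sigma)
        n_itt = n_per_arm(c * delta, sigma)      # ITT effect diluted to c * delta
        print(round(n_ideal), round(n_itt), round(n_itt / n_ideal, 2))  # inflation = 1/c**2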

  20. On the degrees of freedom of reduced-rank estimators in multivariate regression

    PubMed Central

    Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.

    2015-01-01

    We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155
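
    For orientation, the sketch below computes a classical rank-r reduced-rank estimate by projecting the least-squares solution onto the leading right singular directions of the fitted values, and reports the naive parameter count r(p + q - r) often used as a degrees-of-freedom surrogate. The paper's exact unbiased estimator is a different, closed-form function of the thresholded singular values and is not reproduced here.

        import numpy as np

        def reduced_rank_fit(X, Y, r):
            """Classical rank-r reduced-rank regression via the SVD of the
            least-squares fit; returns the estimate and the naive count r*(p+q-r)."""
            B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
            _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
            B_rrr = B_ols @ (Vt[:r].T @ Vt[:r])     # project onto leading directions
            p, q = B_ols.shape
            return B_rrr, r * (p + q - r)

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 6))
        B_true = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))   # rank-2 truth
        Y = X @ B_true + rng.normal(size=(100, 5))
        B_hat, naive_df = reduced_rank_fit(X, Y, r=2)
        print(naive_df)   # 2 * (6 + 5 - 2) = 18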

  1. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecification of the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecification of the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
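
    The classical univariate building block behind such designs can be sketched as follows: with a cost per cluster c, a cost per subject s and an intraclass correlation ρ, the cost-optimal cluster size is √((c/s)·(1-ρ)/ρ), and the design effect 1+(m-1)ρ inflates the simple-random-sampling sample size. The paper generalizes this logic to bivariate cost-and-effect outcomes; the sketch below shows only the textbook version with made-up costs.

        import math

        def optimal_cluster_size(c_cluster, c_subject, icc):
            """Cost-optimal subjects per cluster for a univariate CRT outcome."""
            return math.sqrt((c_cluster / c_subject) * (1 - icc) / icc)

        def design_effect(m, icc):
            """Inflation of the simple-random-sampling sample size."""
            return 1 + (m - 1) * icc

        m = optimal_cluster_size(c_cluster=500, c_subject=20, icc=0.05)
        print(round(m, 1), round(design_effect(m, 0.05), 2))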

  2. Driving an Industry: Medium and Heavy Duty Fuel Cell Electric Truck Component Sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kast, James; Marcinkoski, Jason; Vijayagopal, Ram

    Medium and heavy duty (MD and HD respectively) vehicles are responsible for 26 percent of the total U.S. transportation petroleum consumption [1]. Hydrogen fuel cells have demonstrated value as part of a portfolio of strategies for reducing petroleum use and emissions from MD and HD vehicles. [2] [3], but their performance and range capabilities, and associated component sizing remain less clear when compared to other powertrains. This paper examines the suitability of converting a representative sample of MD and HD diesel trucks into Fuel Cell Electric Trucks (FCETs), while ensuring the same truck performance, in terms of range, payload, acceleration,more » speed, gradeability and fuel economy.« less

  3. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
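
    A small simulation can make the two decision-making frameworks concrete: per-subject treatment differences on two correlated endpoints are accumulated over the interim looks, and success is declared either when both endpoints cross the boundary at the same look, or once each has crossed at some look. The boundaries, effect sizes and correlation below are placeholders, not values from the paper.

        import numpy as np

        def power_co_primary(n_stages, n_per_stage, delta, rho, crit,
                             same_look=True, sims=5000, seed=3):
            """Empirical power for two co-primary normal endpoints in a
            group-sequential design. 'crit' holds the stagewise critical values."""
            rng = np.random.default_rng(seed)
            cov = [[1.0, rho], [rho, 1.0]]
            wins = 0
            for _ in range(sims):
                d = rng.multivariate_normal(delta, cov, size=n_stages * n_per_stage)
                met = np.array([False, False])
                for k in range(1, n_stages + 1):
                    m = k * n_per_stage
                    z = d[:m].mean(axis=0) * np.sqrt(m)   # one-sample z at look k
                    hit = z > crit[k - 1]
                    met = hit if same_look else (met | hit)
                    if met.all():
                        wins += 1
                        break
            return wins / sims

        crit = [2.96, 2.27, 2.03]   # illustrative three-look boundaries
        print(power_co_primary(3, 50, delta=[0.3, 0.3], rho=0.5, crit=crit))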

  4. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power that is generally superior to that of the approximate t test. A numerical example is provided.
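
    For reference, Yuen's statistic itself is straightforward to compute: trimmed means in the numerator, and winsorized variances scaled by the effective (post-trimming) sample sizes in the denominator, with Welch-type degrees of freedom. A minimal sketch (the sample-size formulas of the article are not reproduced):

        import numpy as np
        from scipy import stats

        def yuen_test(x, y, trim=0.2):
            """Yuen's two-sample trimmed-mean test, robust to unequal variances."""
            d, h = [], []
            for a in (np.asarray(x), np.asarray(y)):
                n = len(a)
                g = int(np.floor(trim * n))
                h_j = n - 2 * g                        # effective size after trimming
                sw2 = stats.mstats.winsorize(a, limits=(trim, trim)).var(ddof=1)
                d.append((n - 1) * sw2 / (h_j * (h_j - 1)))
                h.append(h_j)
            t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(d[0] + d[1])
            df = (d[0] + d[1]) ** 2 / (d[0] ** 2 / (h[0] - 1) + d[1] ** 2 / (h[1] - 1))
            return t, df, 2 * stats.t.sf(abs(t), df)

        rng = np.random.default_rng(4)
        print(yuen_test(rng.normal(0, 1, 25), rng.normal(0.8, 3, 40)))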

  5. Apollo rocks, fines and soil cores

    NASA Astrophysics Data System (ADS)

    Allton, J.; Bevill, T.

    Apollo rocks and soils not only established basic lunar properties and ground truth for global remote sensing, they also provided important lessons for planetary protection (Adv. Space Res., 1998, v. 22, no. 3, pp. 373-382). The six Apollo missions returned 2196 samples weighing 381.7 kg, comprising rocks, fines, soil cores and 2 gas samples. By examining which samples were allocated for scientific investigations, information was obtained on the usefulness of the sampling strategy, sampling devices and containers, sample types and diversity, and on the size of sample needed by various disciplines. Diversity was increased by using rakes to gather small rocks on the Moon and by removing fragments >1 mm from soils by sieving in the laboratory. Breccias and soil cores are internally diverse; per unit weight these samples were more often allocated for research. Apollo investigators became adept at wringing information from very small sample sizes. In pushing the analytical limits, the main concern was adequate size for representative sampling. Typical allocations for trace element analyses were 750 mg for rocks, 300 mg for fines and 70 mg for core subsamples. Age-dating and isotope systematics allocations were typically 1 g for rocks and fines, but only 10% of that amount for core depth subsamples. Historically, allocations for organics and microbiology were 4 g (10% for cores). Modern allocations for biomarker detection are 100 mg. Other disciplines supported have been cosmogenic nuclides, rock and soil petrology, sedimentary volatiles, reflectance, magnetics, and biohazard studies. Highly applicable to future sample return missions is the Apollo experience with organic contamination, estimated at 1 to 5 ng/g of sample for Apollo 11 (Simoneit & Flory, 1970; Apollo 11, 12 & 13 Organic Contamination Monitoring History, U.C. Berkeley; Burlingame et al., 1970, Apollo 11 LSC, pp. 1779-1792). Eleven sources of contaminants, of which 7 are applicable to robotic missions, were identified and reduced, improving Apollo 12 samples to 0.1 ng/g. Apollo sample documentation preserves the parentage, orientation, location, packaging, handling and environmental histories of each of the 90,000 subsamples currently curated. Active research on Apollo samples continues today, and because 80% by weight of the Apollo collection remains pristine, researchers have a reservoir of material to support studies well into the future.

  6. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    PubMed

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite-differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is exploited to reconstruct the image from significantly undersampled k-space. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the image with sparse representation. The recovery algorithms in the literature are not capable of fully removing these artifacts, so a denoising procedure is needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm works by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It is illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
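
    The denoising step reduces to soft-thresholding the singular value spectrum, which is the proximal operator of the nuclear norm mentioned above. A self-contained sketch on a synthetic low-rank "image" (the threshold and matrix sizes are illustrative):

        import numpy as np

        def svt_denoise(A, tau):
            """Singular value thresholding: soft-threshold the spectrum and keep
            only components above tau, yielding a low-rank approximation."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            s_thr = np.maximum(s - tau, 0.0)
            rank = int((s_thr > 0).sum())
            return (U[:, :rank] * s_thr[:rank]) @ Vt[:rank], rank

        # Noisy low-rank "image": rank-5 matrix plus dense noise
        rng = np.random.default_rng(5)
        L = rng.normal(size=(128, 5)) @ rng.normal(size=(5, 128))
        noisy = L + 0.5 * rng.normal(size=(128, 128))
        den, r = svt_denoise(noisy, tau=15.0)
        print(r, np.linalg.norm(den - L) / np.linalg.norm(L))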

  7. Static versus dynamic sampling for data mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John, G.H.; Langley, P.

    1996-12-31

    As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
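
    A dynamic scheme of this kind can be sketched in a few lines: train the mining tool on geometrically growing samples and stop once the held-out score stops improving by more than a tolerance. The dataset, learner and tolerance below are arbitrary stand-ins, not the paper's experimental setup.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=50000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        prev_acc, eps, n = 0.0, 0.005, 500
        while n <= len(X_tr):
            acc = DecisionTreeClassifier(random_state=0).fit(X_tr[:n], y_tr[:n]).score(X_te, y_te)
            print(n, round(acc, 3))
            if acc - prev_acc < eps:      # "probably close enough": plateau detected
                break
            prev_acc = acc
            n *= 2                        # geometric schedule of sample sizes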

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) The mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples. (2) An exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm. (3) Applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples. (4) Both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of ZIF-8 samples.

  9. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatments needed in samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target site soils in which the calibration is to be used. Often this premise is not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples for repopulation. In general, calibrations with a high number of samples and high diversity are desired. But we hypothesized that calibrations with fewer samples (smaller size) would absorb the spectral characteristics of the target site more easily. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we also studied its effect on the accuracy of the predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes. We used partial least squares regression and leave-one-out cross-validation as calibration methods. Two methods were used to select the different quantities (sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, so 20 samples were added in total. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in the repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between the results obtained with the BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused scarce changes in the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (r2 > 0.90 using non-repopulated models). As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g×kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. The predictions obtained with larger models were not more accurate, in terms of r2, than those obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".
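
    The sequential repopulation-and-recalibration loop can be sketched as follows, assuming spectra in a matrix X and NKj values in y; the PLS component count, step size and synthetic data are placeholders, and the BCS/BVS selection of representative samples is not reproduced.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        def repopulate_and_recalibrate(X_lib, y_lib, X_site, y_site,
                                       n_components=8, step=2):
            """Add target-site samples to the library in steps of `step` and
            recalibrate a PLS model by leave-one-out cross-validation each time."""
            results = []
            for k in range(0, len(X_site) + 1, step):
                X = np.vstack([X_lib, X_site[:k]])
                y = np.concatenate([y_lib, y_site[:k]])
                pls = PLSRegression(n_components=n_components)
                y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())
                results.append((k, np.sqrt(np.mean((y - y_cv.ravel()) ** 2))))
            return results

        rng = np.random.default_rng(8)
        X_lib, y_lib = rng.normal(size=(120, 50)), rng.normal(size=120)
        X_site, y_site = rng.normal(size=(20, 50)), rng.normal(size=20)
        for k, rmsecv in repopulate_and_recalibrate(X_lib, y_lib, X_site, y_site):
            print(k, round(rmsecv, 3))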

  10. Acoustic phonon spectrum and thermal transport in nanoporous alumina arrays

    DOE PAGES

    Kargar, Fariborz; Ramirez, Sylvester; Debnath, Bishwajit; ...

    2015-10-28

    We report results of a combined investigation of thermal conductivity and acoustic phonon spectra in nanoporous alumina membranes with the pore diameter decreasing from D = 180 nm to 25 nm. The samples with hexagonally arranged pores were selected to have the same porosity Ø ≈ 13%. The Brillouin-Mandelstam spectroscopy measurements revealed a bulk-like phonon spectrum in the samples with D = 180 nm pores, and spectral features attributed to spatial confinement in the samples with 25 nm and 40 nm pores. The velocity of the longitudinal acoustic phonons was reduced in the samples with smaller pores. Analysis of the experimental data and calculated phonon dispersion suggests that both phonon-boundary scattering and phonon spatial confinement affect heat conduction in membranes with feature sizes D < 40 nm.

  11. Brief Communication: Buoyancy-Induced Differences in Soot Morphology

    NASA Technical Reports Server (NTRS)

    Ku, Jerry C.; Griffin, Devon W.; Greenberg, Paul S.; Roma, John

    1995-01-01

    Reduction or elimination of buoyancy in flames affects the dominant mechanisms driving heat transfer, burning rates and flame shape. The absence of buoyancy produces longer residence times for soot formation, clustering and oxidation. In addition, soot pathlines are strongly affected in microgravity. We recently conducted the first experiments comparing soot morphology in normal- and reduced-gravity laminar gas jet diffusion flames. Thermophoretic sampling is a relatively new but well-established technique for studying the morphology of soot primaries and aggregates. Although there have been some questions about bias that may be induced by sampling, recent analysis by Rosner et al. showed that the sample is not biased when the system under study is operating in the continuum limit. Furthermore, even if the sampling is preferentially biased toward larger aggregates, the size-invariant premise of fractal analysis should still produce a correct fractal dimension.

  12. Disposable cartridge extraction of retinol and alpha-tocopherol from fatty samples.

    PubMed

    Bourgeois, C F; Ciba, N

    1988-01-01

    A new approach is proposed for liquid/solid extraction of retinol and alpha-tocopherol from samples, using a disposable kieselguhr cartridge. Substituting a methanol-ethanol-n-butanol mixture (4 + 3 + 1) for methanol in the alkaline hydrolysis solution now makes it possible to process fatty samples. Methanol is necessary to solubilize the antioxidant ascorbic acid, and a linear-chain alcohol such as n-butanol is necessary to reduce the size of soap micelles so that they can penetrate the kieselguhr pores. In comparisons of the proposed method with conventional methods on mineral premixes and fatty feedstuffs, recovery and accuracy are at least as good with the proposed method. Advantages are an increased rate of determinations and the ability to hydrolyze and extract retinol and alpha-tocopherol together from the same sample.

  13. Structural brain development between childhood and adulthood: Convergence across four longitudinal samples.

    PubMed

    Mills, Kathryn L; Goddings, Anne-Lise; Herting, Megan M; Meuwese, Rosa; Blakemore, Sarah-Jayne; Crone, Eveline A; Dahl, Ronald E; Güroğlu, Berna; Raznahan, Armin; Sowell, Elizabeth R; Tamnes, Christian K

    2016-11-01

    Longitudinal studies including brain measures acquired through magnetic resonance imaging (MRI) have enabled population models of human brain development, crucial for our understanding of typical development as well as neurodevelopmental disorders. Brain development in the first two decades generally involves early cortical grey matter volume (CGMV) increases followed by decreases, and monotonic increases in cerebral white matter volume (CWMV). However, inconsistencies regarding the precise developmental trajectories call into question the comparability of samples. This issue can be addressed by conducting a comprehensive study across multiple datasets from diverse populations. Here, we present replicable models for gross structural brain development between childhood and adulthood (ages 8-30 years) by repeating analyses in four separate longitudinal samples (391 participants; 852 scans). In addition, we address how accounting for global measures of cranial/brain size affects these developmental trajectories. First, we found evidence for continued development of both intracranial volume (ICV) and whole brain volume (WBV) through adolescence, albeit following distinct trajectories. Second, our results indicate that CGMV is at its highest in childhood, decreasing steadily through the second decade with deceleration in the third decade, while CWMV increases until mid-to-late adolescence before decelerating. Importantly, we show that accounting for cranial/brain size affects models of regional brain development, particularly with respect to sex differences. Our results increase confidence in our knowledge of the pattern of brain changes during adolescence, reduce concerns about discrepancies across samples, and suggest some best practices for statistical control of cranial volume and brain size in future studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.

    PubMed

    Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Maria Fernandez-Pujals, Ana; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Scotland, Generation; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M

    2018-01-10

    Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex-stratified or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlation with other traits, variance explained by MDD polygenic score, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts - Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype and Bipolar Disorder). Whilst the stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results across other MDD classifications suggest that phenotypic stratification by recurrence or sex is only weakly justified at currently available sample sizes. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.

  15. Synthesis and investigation of physico-chemical, antibacterial, biomymetic properties of silver and zinc containing hydroxyapatite

    NASA Astrophysics Data System (ADS)

    Zhuk, Ilya; Rasskazova, Lyudmila; Korotchenko, Natalia; Kozik, Vladimir; Kurzina, Irina

    2017-11-01

    In this work we carried out microwave synthesis of hydroxyapatites (HA) modified with different contents of ions. A solid solution based on HA remains single-phase when calcium ions are substituted by silver and zinc ions up to 5% by weight (0.5 mole fraction). The microstructure parameters, morphology and particle size of the powders were studied by X-ray diffraction analysis, IR spectroscopy, and scanning electron microscopy (SEM). It is shown that modification of HA by silver (AgHA) and zinc (ZnHA) ions increases the particle size, the degree of crystallinity, and the pore sizes of the samples while reducing their specific surface area and the uniformity of their shapes. Elemental analysis and the distribution of elements over the surface of HA, AgHA, and ZnHA powders were performed by X-ray spectral microanalysis (RSMA). The Ca/P ratio is within the range 1.66-1.77 and corresponds to the Ca/P ratio in stoichiometric HA and in the HA of bone tissue. The ability of AgHA and ZnHA substrates to form a calcium-phosphate layer on their surface from simulated body fluid (SBF) at 37 °C was determined. This ability decreases in the order AgHA > ZnHA > HA. The antibacterial activity of the samples was analyzed. The AgHA sample has both bactericidal and persistent bacteriostatic properties in the case of direct contact with Escherichia coli cells.

  16. Estimating individual glomerular volume in the human kidney: clinical perspectives

    PubMed Central

    Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.

    2012-01-01

    Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554
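
    The stability check can be mimicked with a simple subsampling experiment (a stand-in for the authors' Monte-Carlo/Jackknife procedure, not their code): draw n profiles from a set of IGV estimates and watch the coefficient of variation of the subsample mean shrink as n grows. The synthetic lognormal volumes below are illustrative only.

        import numpy as np

        def mean_stability(igv, n_sub, reps=5000, seed=6):
            """CV of the subsample mean across repeated draws of n_sub profiles."""
            rng = np.random.default_rng(seed)
            means = [rng.choice(igv, size=n_sub, replace=False).mean()
                     for _ in range(reps)]
            return np.std(means) / np.mean(means)

        igv = np.random.default_rng(7).lognormal(mean=1.0, sigma=0.35, size=30)
        for n in (3, 6, 9, 12):
            print(n, round(100 * mean_stability(igv, n), 1), "% CV")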

  17. Nano-Sized Structurally Disordered Metal Oxide Composite Aerogels as High-Power Anodes in Hybrid Supercapacitors.

    PubMed

    Huang, Haijian; Wang, Xing; Tervoort, Elena; Zeng, Guobo; Liu, Tian; Chen, Xi; Sologubenko, Alla; Niederberger, Markus

    2018-03-27

    A general method for preparing nano-sized metal oxide nanoparticles with a highly disordered crystal structure and processing them into stable aqueous dispersions is presented. With these nanoparticles as building blocks, a series of nanoparticles@reduced graphene oxide (rGO) composite aerogels are fabricated and directly used as high-power anodes for lithium-ion hybrid supercapacitors (Li-HSCs). To clarify the effect of the degree of disorder, control samples of crystalline nanoparticles with similar particle size are prepared. The results indicate that the structurally disordered samples show significantly enhanced electrochemical performance compared to their crystalline counterparts. In particular, structurally disordered NixFeyOz@rGO delivers a capacity of 388 mAh g-1 at 5 A g-1, which is 6 times that of the crystalline sample. Disordered NixFeyOz@rGO is taken as an example to study the reasons for the enhanced performance. Compared with the crystalline sample, density functional theory calculations reveal a smaller volume expansion during Li+ insertion for the structurally disordered NixFeyOz nanoparticles, and they are found to exhibit larger pseudocapacitive effects. Combined with an activated carbon (AC) cathode, full-cell tests of the lithium-ion hybrid supercapacitors are performed, demonstrating that the structurally disordered metal oxide nanoparticles@rGO||AC hybrid systems deliver high energy and power densities within the voltage range of 1.0-4.0 V. These results indicate that structurally disordered nanomaterials might be interesting candidates for exploring high-power anodes for Li-HSCs.

  18. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  19. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
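
    The package itself is in R and its multivariate covariate adjustment is not reproduced here; as a baseline, the classical unadjusted two-proportion calculation for an unmatched case-control design with a binary exposure looks like this (all inputs illustrative):

        import math
        from scipy.stats import norm

        def n_cases(p0, odds_ratio, alpha=0.05, power=0.8):
            """Cases (= controls) needed to detect a given odds ratio for a
            binary exposure with control-group exposure prevalence p0."""
            p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # prevalence in cases
            pbar = (p0 + p1) / 2
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            num = (za * math.sqrt(2 * pbar * (1 - pbar)) +
                   zb * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
            return math.ceil(num / (p1 - p0) ** 2)

        print(n_cases(p0=0.3, odds_ratio=2.0))   # 30% exposure prevalence in controls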

  20. Reducing the extinction risk of stochastic populations via nondemographic noise

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2018-02-01

    We consider nondemographic noise in the form of uncertainty in the reaction step size and reveal a dramatic effect this noise may have on the stability of self-regulating populations. Employing the reaction scheme mA → kA, but allowing, e.g., the product number k to be a priori unknown and sampled from a given distribution, we show that such nondemographic noise can greatly reduce the population's extinction risk compared to the fixed-k case. Our analysis is tested against numerical simulations, and by using empirical data for different species, we argue that certain distributions may be more evolutionarily beneficial than others.
