Sample records for minimal sample size

  1. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
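
    A minimal sketch of the two schemes contrasted above may help: fixed sampling keeps every s-th k-mer starting position, while minimizer sampling keeps the smallest k-mer in each window of w consecutive k-mers. The values of k, the step, the window size, and the lexicographic tie-breaking rule below are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch (not the paper's code): fixed-position vs. minimizer
    # sampling of k-mers from a DNA string. k, step and window size are arbitrary.

    def fixed_sample(seq, k, step):
        """Keep every 'step'-th k-mer starting position."""
        return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, step)}

    def minimizer_sample(seq, k, w):
        """For each window of w consecutive k-mers, keep the lexicographically
        smallest one (a common minimizer convention; ties break by position)."""
        picked = {}
        n_kmers = len(seq) - k + 1
        for start in range(0, n_kmers - w + 1):
            window = [(seq[i:i + k], i) for i in range(start, start + w)]
            kmer, pos = min(window)
            picked[pos] = kmer
        return picked

    if __name__ == "__main__":
        seq = "ACGTACGTGGATCCACGTTTAGGC"
        print("fixed positions    :", sorted(fixed_sample(seq, k=5, step=3)))
        print("minimizer positions:", sorted(minimizer_sample(seq, k=5, w=3)))
    ```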

  2. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  3. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.

  4. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118

  5. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  6. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768

  7. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
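
    The two rules described above are easy to compute once a cost model is assumed. The sketch below uses a made-up cost curve (fixed plus per-subject costs plus a hypothetical quadratic recruitment term); the rules themselves, minimizing cost per subject and cost divided by the square root of the sample size, are as stated in the abstract.

    ```python
    # Hedged sketch of the two cost-efficiency rules, under an assumed cost
    # model c(n) = fixed + per_subject*n + crowding*n**2. The quadratic term
    # stands in for recruitment getting harder as n grows; the paper itself
    # does not prescribe this cost model.
    import numpy as np

    fixed, per_subject, crowding = 50_000.0, 500.0, 2.0
    n = np.arange(10, 1001)
    cost = fixed + per_subject * n + crowding * n**2

    n_rule1 = n[np.argmin(cost / n)]            # minimize average cost per subject
    n_rule2 = n[np.argmin(cost / np.sqrt(n))]   # minimize total cost / sqrt(n)

    print("rule 1 (min cost per subject):", n_rule1)
    print("rule 2 (min cost / sqrt(n))  :", n_rule2)
    ```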

  8. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    PubMed

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
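
    The "random chance" scenario described above can be illustrated with a short simulation: keep sampling sources until every code in the population has been observed at least once. The number of codes and the per-source observation probability below are arbitrary choices, not values from the paper.

    ```python
    # Illustrative simulation of the "random chance" scenario: sample sources
    # until every code has been observed at least once (theoretical saturation).
    import random

    def sample_size_to_saturation(n_codes=30, p_observe=0.2, rng=random):
        seen = set()
        n_sources = 0
        while len(seen) < n_codes:
            n_sources += 1
            for code in range(n_codes):
                if rng.random() < p_observe:   # this source happens to hold the code
                    seen.add(code)
        return n_sources

    random.seed(1)
    draws = sorted(sample_size_to_saturation() for _ in range(2000))
    print("mean sample size:", sum(draws) / len(draws))
    print("95th percentile :", draws[int(0.95 * len(draws))])
    ```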

  9. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.

  10. Minimization of reflection cracks in flexible pavements.

    DOT National Transportation Integrated Search

    1977-01-01

    This report describes the performance of fabrics used under overlays in an effort to minimize longitudinal and alligator cracking in flexible pavements. It is concluded, although the sample size is small, that the treatments will extend the pavement ...

  11. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging” based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to the global search, which synthesizes minimal rule sets, and serial search, another method that synthesizes semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.

  12. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    PubMed Central

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358

  13. Measuring Compartment Size and Gas Solubility in Marine Mammals

    DTIC Science & Technology

    2014-09-30

    … analyzed by gas chromatography. Injection of the sample into the gas chromatograph is done using a sample loop to minimize volume injection error. … The goal of the study is to develop methods to estimate marine mammal tissue compartment sizes and tissue gas solubility. We aim to improve the data available for …

  14. Forestry inventory based on multistage sampling with probability proportional to size

    NASA Technical Reports Server (NTRS)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
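
    As an illustration of the selection step described above, the sketch below draws primary sampling units with probability proportional to size (PPS), with replacement. The unit names and sizes are hypothetical; the actual inventory used LANDSAT data and aerial photographs to define the units.

    ```python
    # Sketch of probability-proportional-to-size (PPS) selection of sampling
    # units, with replacement. Unit "sizes" are hypothetical stand areas.
    import random

    units = {"stand_A": 120.0, "stand_B": 45.0, "stand_C": 300.0, "stand_D": 80.0}

    def pps_draw(units, n_draws, rng=random):
        names = list(units)
        sizes = [units[name] for name in names]
        return rng.choices(names, weights=sizes, k=n_draws)   # P(unit) proportional to size

    random.seed(7)
    print(pps_draw(units, n_draws=5))
    ```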

  15. Minimal-assumption inference from population-genomic data

    NASA Astrophysics Data System (ADS)

    Weissman, Daniel; Hallatschek, Oskar

    Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.

  16. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer from poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
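
    For readers who want to reproduce the flavor of this comparison, the hedged sketch below fits an ε-insensitive SVR and an MLP regressor to a small synthetic sample with scikit-learn (assumed available). The data, kernel, and network settings are arbitrary, not the authors' configuration.

    ```python
    # Hedged sketch: epsilon-insensitive SVR vs. an MLP regressor on a small
    # synthetic sample, loosely mirroring the comparison in the abstract.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(25, 3))        # 25 "core plugs", 3 log-derived features
    y = 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=25)
    X_test = rng.uniform(0, 1, size=(200, 3))
    y_test = 0.3 * X_test[:, 0] - 0.2 * X_test[:, 1] ** 2

    svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)

    print("SVR test MSE:", mean_squared_error(y_test, svr.predict(X_test)))
    print("MLP test MSE:", mean_squared_error(y_test, mlp.predict(X_test)))
    ```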

  17. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
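
    The closed-form slippage expression is not reproduced in the abstract, so the sketch below estimates expected slippage by Monte Carlo under simple assumptions of my own: units are infested independently at rate p, each sampled infested unit is detected with probability d, and a lot is accepted only if nothing is detected.

    ```python
    # Monte Carlo sketch of "expected slippage" (accepted infested units) for
    # one lot, under assumed accept/reject and detection rules (see lead-in).
    import numpy as np

    def expected_slippage(lot_size, sample_size, p, d, reps=50_000, seed=0):
        rng = np.random.default_rng(seed)
        slipped = 0.0
        for _ in range(reps):
            infested = rng.random(lot_size) < p
            sample = rng.choice(lot_size, size=sample_size, replace=False)
            detected = rng.random(sample_size) < d
            if np.any(infested[sample] & detected):
                continue                    # lot rejected: nothing slips through
            slipped += infested.sum()       # lot accepted: all its infested units slip
        return slipped / reps

    print(expected_slippage(lot_size=500, sample_size=30, p=0.02, d=0.9))
    ```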

  18. Further improvement of hydrostatic pressure sample injection for microchip electrophoresis.

    PubMed

    Luo, Yong; Zhang, Qingquan; Qin, Jianhua; Lin, Bingcheng

    2007-12-01

    The hydrostatic pressure sample injection method is able to minimize the number of electrodes needed for a microchip electrophoresis process; however, it can neither be applied to electrophoretic DNA sizing nor be implemented on the widely used single-cross microchip. This paper presents an injector design that makes the hydrostatic pressure sample injection method suitable for DNA sizing. By introducing an assistant channel into the normal double-cross injector, a rugged DNA sample plug suitable for sizing can be successfully formed within the cross area during sample loading. This paper also demonstrates that hydrostatic pressure sample injection can be performed on the single-cross microchip by controlling the radial position of the detection point in the separation channel. Rhodamine 123 and a derivative, used as model samples, were successfully separated.

  19. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    The current practice for seeking genomically favorable patients in randomized controlled clinical trials uses genomic convenience samples. Our aims are to discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; to articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and to highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding of how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% is of concern.
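
    The probability-of-imbalance argument can be illustrated with a short simulation: draw the marker prevalence observed in each randomized arm and estimate how often the two arms differ by 20 percentage points or more. The true prevalence of 0.25 and the per-arm subgroup sizes below are illustrative, not the paper's cases.

    ```python
    # Sketch: probability that observed marker prevalence differs between two
    # randomized arms by >= 20 percentage points, by simulation.
    import numpy as np

    def prob_imbalance(n_per_arm, prevalence=0.25, threshold=0.20,
                       reps=100_000, seed=1):
        rng = np.random.default_rng(seed)
        a = rng.binomial(n_per_arm, prevalence, size=reps) / n_per_arm
        b = rng.binomial(n_per_arm, prevalence, size=reps) / n_per_arm
        return np.mean(np.abs(a - b) >= threshold)

    for n in (25, 50, 100, 150):
        print(f"n per arm = {n:4d}: P(imbalance >= 20%) = {prob_imbalance(n):.3f}")
    ```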

  20. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  1. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  2. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    This is a systematic review of nonspecific low back pain trials published between 1980 and 2012. Our aims were to explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
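
    The kind of power calculation tabulated in this review can be reproduced with statsmodels (assumed available): the smallest standardized mean difference (SMD) detectable at 80% power with roughly 76 participants per arm (about the review's average trial size of 153), and the power of such a trial against SMDs of 0.3 and 0.5.

    ```python
    # Sketch (statsmodels assumed): detectable SMD at 80% power and power
    # against SMDs of 0.3 and 0.5 for a two-arm trial with 76 per arm.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    detectable = analysis.solve_power(effect_size=None, nobs1=76, ratio=1.0,
                                      alpha=0.05, power=0.80)
    print("detectable SMD with 76/arm:", round(detectable, 2))

    for smd in (0.3, 0.5):
        pw = analysis.power(effect_size=smd, nobs1=76, ratio=1.0, alpha=0.05)
        print(f"power to detect SMD {smd}: {pw:.2f}")
    ```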

  3. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
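
    A stripped-down version of this kind of simulation is sketched below: Monte Carlo estimates of the Type I and Type II error of the two-sample t-test at p = 5% with nine observations per group. The normal distributions and the chosen effect size (in SD units) are my assumptions, so the numbers will not match the paper's tables.

    ```python
    # Illustrative Monte Carlo: Type I and Type II error of the two-sample
    # t-test (p = 5%) with n = 9 per group and an arbitrary effect of 1.5 SD.
    import numpy as np
    from scipy import stats

    def error_rates(n=9, effect=1.5, reps=20_000, alpha=0.05, seed=2):
        rng = np.random.default_rng(seed)
        type1 = type2 = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, n)
            b_null = rng.normal(0.0, 1.0, n)     # no true effect
            b_alt = rng.normal(effect, 1.0, n)   # true effect present
            if stats.ttest_ind(a, b_null).pvalue < alpha:
                type1 += 1
            if stats.ttest_ind(a, b_alt).pvalue >= alpha:
                type2 += 1
        return type1 / reps, type2 / reps

    t1, t2 = error_rates()
    print(f"Type I error: {t1:.3f}, Type II error: {t2:.3f}")
    ```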

  4. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  5. Effect of sample inhomogeneity in K-Ar dating

    USGS Publications Warehouse

    Engels, J.C.; Ingamells, C.O.

    1970-01-01

    Error in K-Ar ages is often due more to deficiencies in the splitting process, whereby portions of the sample are taken for potassium and for argon determination, than to imprecision in the analytical methods. The effect of the grain size of a sample and of the composition of a contaminating mineral can be evaluated, and this provides a useful guide in attempts to minimize error. Rocks and minerals should be prepared for age determination with the effects of contaminants and grain size in mind. The magnitude of such effects can be much larger than intuitive estimates might indicate. © 1970.

  6. SW-846 Test Method 3511: Organic Compounds in Water by Microextraction

    EPA Pesticide Factsheets

    A procedure for extracting selected volatile and semivolatile organic compounds from water. The microscale approach minimizes sample size and solvent usage, thereby reducing the supply costs, health and safety risks, and waste generated.

  7. Sample exchange by beam scanning with applications to noncollinear pump-probe spectroscopy at kilohertz repetition rates.

    PubMed

    Spencer, Austin P; Hill, Robert J; Peters, William K; Baranov, Dmitry; Cho, Byungmoon; Huerta-Viga, Adriana; Carollo, Alexa R; Curtis, Anna C; Jonas, David M

    2017-06-01

    In laser spectroscopy, high photon flux can perturb the sample away from thermal equilibrium, altering its spectroscopic properties. Here, we describe an optical beam scanning apparatus that minimizes repetitive sample excitation while providing shot-to-shot sample exchange for samples such as cryostats, films, and air-tight cuvettes. In this apparatus, the beam crossing point is moved within the focal plane inside the sample by scanning both tilt angles of a flat mirror. A space-filling spiral scan pattern was designed that efficiently utilizes the sample area and mirror scanning bandwidth. Scanning beams along a spiral path is shown to increase the average number of laser shots that can be sampled before a spot on the sample cell is resampled by the laser to ∼1700 (out of the maximum possible 2500 for the sample area and laser spot size) while ensuring minimal shot-to-shot spatial overlap. Both an all-refractive version and an all-reflective version of the apparatus are demonstrated. The beam scanning apparatus does not measurably alter the time delay (less than the 0.4 fs measurement uncertainty), the laser focal spot size (less than the 2 μm measurement uncertainty), or the beam overlap (less than the 3.3% measurement uncertainty), leading to pump-probe and autocorrelation signal transients that accurately characterize the equilibrium sample.
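
    The space-filling spiral idea can be sketched as an Archimedean spiral whose point-to-point and turn-to-turn spacings are both approximately constant, confined to a circular sample area. The radius, spacing, and resulting spot count below are illustrative, not the published apparatus parameters.

    ```python
    # Sketch of a space-filling Archimedean spiral of beam positions inside a
    # circular sample area (r = b*theta with ~constant spacing between points
    # and between neighbouring turns).
    import numpy as np

    def spiral_positions(r_max=1.0, spacing=0.02):
        b = spacing / (2 * np.pi)       # radial growth per radian -> turn separation = spacing
        positions = []
        theta = 0.0
        while b * theta <= r_max:
            r = b * theta
            positions.append((r * np.cos(theta), r * np.sin(theta)))
            theta += spacing / max(r, spacing)   # advance ~one spacing of arc length
        return np.array(positions)

    pts = spiral_positions()
    radii = np.hypot(pts[:, 0], pts[:, 1])
    print(len(pts), "spots; max radius", radii.max().round(3))
    ```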

  8. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
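
    The sample sizes quoted above follow from a standard one-sample, two-tailed t-test power calculation with the allowable error expressed in units of the COV. The hedged sketch below uses statsmodels (assumed available); small differences from the published n values can arise from rounding.

    ```python
    # Sketch: required n for a one-sample, two-tailed t-test when the COV of
    # the ED50 is 25% and the allowable error in the mean ED50 is 10-20%.
    import math
    from statsmodels.stats.power import TTestPower

    cov = 0.25
    for allowable_error in (0.10, 0.12, 0.15, 0.20):
        effect_size = allowable_error / cov      # allowable error in SD units
        n = TTestPower().solve_power(effect_size=effect_size, nobs=None,
                                     alpha=0.05, power=0.80,
                                     alternative="two-sided")
        print(f"+/-{allowable_error:.0%} error -> n = {math.ceil(n)}")
    ```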

  9. Strategies for informed sample size reduction in adaptive controlled clinical trials

    NASA Astrophysics Data System (ADS)

    Arandjelović, Ognjen

    2017-12-01

    Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with a consistently superior performance exhibited by the pair-wise sample compatibilities-based method.

  10. DIY Tomography sample holder

    NASA Astrophysics Data System (ADS)

    Lari, L.; Wright, I.; Boyes, E. D.

    2015-10-01

    A very simple tomography sample holder was developed in-house at minimal cost. The holder is based on a JEOL single-tilt fast-exchange sample holder whose exchangeable tip was modified to allow high-angle tilt. The shape of the tip was designed to retain mechanical stability while minimizing the lateral size of the tip. The sample can be mounted on standard 3 mm Cu grids as well as on semi-circular grids from FIB sample preparation. Applications of the holder to different sample systems are shown.

  11. Correlation between standard Charpy and sub-size Charpy test results of selected steels in upper shelf region

    NASA Astrophysics Data System (ADS)

    Konopík, P.; Džugan, J.; Bucki, T.; Rzepa, S.; Rund, M.; Procházka, R.

    2017-02-01

    Absorbed energy obtained from Charpy impact tests is one of the most important values in many applications, for example in residual lifetime assessment of components in service. Minimal absorbed energy is often the crucial value for extending the service life of components such as turbines, boilers and steam lines. Using portable electric discharge sampling equipment (EDSE), it is possible to sample experimental material non-destructively and subsequently produce mini-Charpy specimens. This paper presents a new approach to correlating sub-size and standard Charpy test results.

  12. Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.

    PubMed

    Li, Guohong; Luican, Adina; Andrei, Eva Y

    2011-07-01

    We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and to identify small features when the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic field, with minimal or no hardware modifications.

  13. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation leads to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
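
    A hedged sketch in the spirit of these simulations: generate logistic data at a chosen events-per-variable (EPV), fit by maximum likelihood with statsmodels (assumed available), and record the bias of one coefficient. Skipping fits that fail is a deliberately crude way of handling separation, which is exactly the kind of choice the paper shows can dominate results.

    ```python
    # Monte Carlo sketch: small-sample bias of a maximum-likelihood logit
    # coefficient at a given events-per-variable (EPV). All settings are
    # illustrative assumptions, not the paper's simulation design.
    import numpy as np
    import statsmodels.api as sm

    def simulate_bias(epv=10, n_covariates=5, true_beta=0.5, reps=300, seed=3):
        rng = np.random.default_rng(seed)
        intercept = -1.4                       # gives roughly a 20% event rate (assumed)
        n = int(epv * n_covariates / 0.2)      # sample size implied by the target EPV
        estimates = []
        for _ in range(reps):
            X = rng.normal(size=(n, n_covariates))
            eta = intercept + true_beta * X[:, 0]      # only the first covariate matters
            y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)
            try:
                fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
                estimates.append(fit.params[1])        # coefficient of X[:, 0]
            except Exception:
                continue                               # crude handling of separation etc.
        estimates = np.array(estimates)
        return estimates.mean() - true_beta, len(estimates)

    bias, used = simulate_bias(epv=10)
    print(f"EPV = 10: mean bias {bias:+.3f} over {used} converged fits")
    ```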

  14. How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.

    PubMed

    Hittner, James B; May, Kim

    2012-01-01

    The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
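
    The approximation itself is a one-liner, r ≈ Z/√N, where Z is the standard normal deviate matching the one-tailed p-value. The sketch below applies it to one simulated bivariate normal sample as a quick sanity check; the true correlation and sample size are arbitrary. Note that the approximation recovers the magnitude of r; the sign comes from the reported direction of the effect.

    ```python
    # The r-from-Z approximation with a quick Monte Carlo sanity check.
    import numpy as np
    from scipy import stats

    def r_from_z(one_tailed_p, total_n):
        return stats.norm.isf(one_tailed_p) / np.sqrt(total_n)

    rng = np.random.default_rng(4)
    true_r, n = 0.3, 50
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)

    r, p_two_tailed = stats.pearsonr(x, y)
    print("sample r              :", round(r, 3))
    print("r-from-Z approximation:", round(r_from_z(p_two_tailed / 2, n), 3))
    ```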

  15. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method results in approximately the same performance in situations where sample sizes are relatively small. The explanation for this observation is the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for alleviating finite-sample problems in moderate-dimensional PR tasks.

  16. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.

  17. The determination of specific forms of aluminum in natural water

    USGS Publications Warehouse

    Barnes, R.B.

    1975-01-01

    A procedure for analysis and pretreatment of natural-water samples to determine very low concentrations of Al is described which distinguishes the rapidly reacting equilibrium species from the metastable or slowly reacting macro ions and colloidal suspended material. Aluminum is complexed with 8-hydroxyquinoline (oxine), pH is adjusted to 8.3 to minimize interferences, and the aluminum oxinate is extracted with methyl isobutyl ketone (MIBK) prior to analysis by atomic absorption. To determine equilibrium species only, the contact time between sample and 8-hydroxyquinoline is minimized. The Al may be extracted at the sample site with a minimum of equipment and the MIBK extract stored for several weeks prior to atomic absorption analysis. Data obtained from analyses of 39 natural groundwater samples indicate that filtration through a 0.1-µm pore size filter is not an adequate means of removing all insoluble and metastable Al species present, and extraction of Al immediately after collection is necessary if only dissolved and readily reactive species are to be determined. An average of 63% of the Al present in natural waters that had been filtered through 0.1-µm pore size filters was in the form of monomeric ions. The total Al concentration, which includes all forms that passed through a 0.1-µm pore size filter, ranged 2-70 µg/l. The concentration of Al in the form of monomeric ions ranged from below detection to 57 µg/l. Most of the natural water samples used in this study were collected from thermal springs and oil wells. © 1975.

  18. Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics.

    PubMed

    Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben

    2007-04-01

    This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.

  19. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, that is not important. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
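
    The contrast between convex and concave spending shapes can be illustrated with a generic power-family spending function α(t) = α·t^ρ, where ρ > 1 gives a convex shape and ρ < 1 a concave one that spends more of the Type I error at early looks (which is what drives down the expected sample size when the null hypothesis is rejected). This family is a standard choice for illustration, not necessarily the functions analyzed in the paper.

    ```python
    # Power-family spending function alpha(t) = alpha * t**rho: alpha spent
    # at each of 10 equally spaced looks for a concave vs. a convex shape.
    import numpy as np

    def incremental_alpha(alpha, rho, n_looks):
        t = np.arange(1, n_looks + 1) / n_looks          # information fractions
        cumulative = alpha * t**rho
        return np.diff(np.concatenate(([0.0], cumulative)))

    alpha, n_looks = 0.05, 10
    print("concave (rho = 0.5):", np.round(incremental_alpha(alpha, 0.5, n_looks), 4))
    print("convex  (rho = 2.0):", np.round(incremental_alpha(alpha, 2.0, n_looks), 4))
    ```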

  20. Effect of modulation of the particle size distributions in the direct solid analysis by total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.

    2018-02-01

    The main goal of this work was to investigate, in a systematic way, the influence of controlled modulation of the particle size distribution of a representative solid sample on the most relevant analytical parameters of the Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) quantitative method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. Strong correlations were found in all cases. The main conclusion of this work can be summarized as follows: reducing the average particle size, together with narrowing the particle size distribution, strongly improves accuracy and reduces uncertainties and detection limits for the DSA-TXRF methodology. These achievements support the future use of DSA-TXRF for the development of ISO standards and standardized protocols for the direct analysis of solids by means of TXRF.

  1. Method for Hot Real-Time Sampling of Gasification Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomeroy, Marc D

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Real-time hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis and operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to reduce further reaction chemistries. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  2. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
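    A minimal sketch of the kind of adjustment recommended above, assuming a two-group comparison sized with the usual normal-approximation formula: the sample SD is first inflated to its one-sided upper confidence limit (UCL) via the chi-square distribution of (m-1)s^2/sigma^2, and the inflated value is then used in the sample size calculation. The sample SD, sample size and target difference below are made-up numbers, not values from the paper.

        import math
        from scipy import stats

        def ucl_sd(s, m, conf=0.80):
            """One-sided upper confidence limit for sigma, given sample SD s from m observations."""
            return s * math.sqrt((m - 1) / stats.chi2.ppf(1 - conf, m - 1))

        def n_per_group(sd, delta, alpha=0.05, power=0.80):
            """Normal-approximation sample size per group for a two-sample mean comparison."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return math.ceil(2 * (z * sd / delta) ** 2)

        s, m, delta = 40.0, 20, 22.0          # assumed sample SD, sample size, difference to detect
        print("n per group using the sample SD :", n_per_group(s, delta))
        print("n per group using the 80% UCL SD:", n_per_group(ucl_sd(s, m, 0.80), delta))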

  3. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  4. Nonlinear vibrational microscopy

    DOEpatents

    Holtom, Gary R.; Xie, Xiaoliang Sunney; Zumbusch, Andreas

    2000-01-01

    The present invention is a method and apparatus for microscopic vibrational imaging using coherent anti-Stokes Raman scattering (CARS) or sum frequency generation (SFG). Microscopic imaging with a vibrational spectroscopic contrast is achieved by generating signals in a nonlinear optical process and spatially resolved detection of the signals. The spatial resolution is attained by minimizing the spot size of the optical interrogation beams on the sample. Minimizing the spot size relies upon (a) directing at least two substantially co-axial laser beams (interrogation beams) through a microscope objective providing a focal spot on the sample; (b) collecting a signal beam together with a residual beam from the at least two co-axial laser beams after passing through the sample; (c) removing the residual beam; and (d) detecting the signal beam, thereby creating said pixel. The method has significantly higher spatial resolution than IR microscopy and higher sensitivity than spontaneous Raman microscopy with much lower average excitation powers. CARS and SFG microscopy do not rely on the presence of fluorophores, but retain the resolution and three-dimensional sectioning capability of confocal and two-photon fluorescence microscopy. Complementary to these techniques, CARS and SFG microscopy provide a contrast mechanism based on vibrational spectroscopy. This vibrational contrast mechanism, combined with an unprecedentedly high sensitivity at a tolerable laser power level, provides a new approach for microscopic investigations of chemical and biological samples.

  5. Design and Analysis of an Isokinetic Sampling Probe for Submicron Particle Measurements at High Altitude

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.

    2012-01-01

    An isokinetic dilution probe has been designed with the aid of computational fluid dynamics to sample sub-micron particles emitted from aviation combustion sources. The intended operational range includes standard day atmospheric conditions up to 40,000-ft. With dry nitrogen as the diluent, the probe is intended to minimize losses from particle microphysics and transport while rapidly quenching chemical kinetics. Initial results indicate that the Mach number ratio of the aerosol sample and dilution streams in the mixing region is an important factor for successful operation. Flow rate through the probe tip was found to be highly sensitive to the static pressure at the probe exit. Particle losses through the system were estimated to be on the order of 50% with minimal change in the overall particle size distribution apparent. Following design refinement, experimental testing and validation will be conducted in the Particle Aerosol Laboratory, a research facility located at the NASA Glenn Research Center to study the evolution of aviation emissions at lower stratospheric conditions. Particle size distributions and number densities from various combustion sources will be used to better understand particle-phase microphysics, plume chemistry, evolution to cirrus, and environmental impacts of aviation.

  6. Designing a multiple dependent state sampling plan based on the coefficient of variation.

    PubMed

    Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan

    2016-01-01

    A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic which follows a normal distribution with unknown mean and variance. The optimal plan parameters of the proposed plan are solved by a nonlinear optimization model, which satisfies the given producer's risk and consumer's risk at the same time and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
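    For a simpler point of comparison, the sketch below searches for the smallest classical single sampling plan (n, c) for a fraction-nonconforming characteristic that satisfies a producer's risk at the acceptable quality level (AQL) and a consumer's risk at the limiting quality level (LQL); the paper's plan replaces this with a CV-based MDS acceptance criterion, which is not reproduced here. All numeric levels are assumptions.

        from scipy.stats import binom

        def smallest_single_plan(aql=0.01, lql=0.05, alpha=0.05, beta=0.10, n_max=2000):
            """Return the smallest (n, c) meeting both risk constraints, or None if none exists."""
            for n in range(1, n_max + 1):
                for c in range(0, n + 1):
                    if binom.cdf(c, n, lql) > beta:
                        break                          # larger c only raises P(accept | LQL)
                    if binom.cdf(c, n, aql) >= 1 - alpha:
                        return n, c                    # consumer's risk already satisfied here
            return None

        print(smallest_single_plan())                  # (n, c) with the minimal inspection sample size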

  7. Minimally Invasive Thumb-sized Pterional Craniotomy for Surgical Clip Ligation of Unruptured Anterior Circulation Aneurysms.

    PubMed

    Deshaies, Eric M; Villwock, Mark R; Singla, Amit; Toshkezi, Gentian; Padalino, David J

    2015-08-11

    Less invasive surgical approaches for intracranial aneurysm clipping may reduce length of hospital stay, surgical morbidity, and treatment cost, and improve patient outcomes. We present our experience with a minimally invasive pterional approach for anterior circulation aneurysms performed in a major tertiary cerebrovascular center and compare the results with an age-matched dataset from the Nationwide Inpatient Sample (NIS). From August 2008 to December 2012, 22 elective aneurysm clippings on patients ≤55 years of age were performed by the same dual fellowship-trained cerebrovascular/endovascular neurosurgeon. One patient (4.5%) experienced transient post-operative complications. 18 of 22 patients returned for follow-up imaging and there were no recurrences over an average follow-up of 22 months. A search in the NIS database from 2008 to 2010, also restricted to patients ≤55 years of age, yielded 1,341 hospitalizations for surgical clip ligation of unruptured cerebral aneurysms. Inpatient length of stay and hospital charges at our institution using the minimally invasive thumb-sized pterional technique were nearly half those of the NIS (length of stay: 3.2 vs 5.7 days; hospital charges: $52,779 vs. $101,882). The minimally invasive thumb-sized pterional craniotomy allows good exposure of unruptured small and medium-sized supraclinoid anterior circulation aneurysms. Cerebrospinal fluid drainage from key subarachnoid cisterns and constant bimanual microsurgical techniques avoid the need for retractors, which can cause contusions, localized venous infarctions, and post-operative cerebral edema at the retractor sites. Utilizing this set of techniques has afforded our patients a shorter hospital stay at a lower cost compared to the national average.

  8. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation between the two traits and provided the minimum and optimal sampling sizes needed to take ITVWI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.

  9. Mesoscale spatial variability of selected aquatic invertebrate community metrics from a minimally impaired stream segment

    USGS Publications Warehouse

    Gebler, J.B.

    2004-01-01

    The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values to distributions of aquatic invertebrate communities, and ramifications of natural variability to the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but were generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reach(es) may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
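    One concrete way to see why such large numbers of reaches are needed: under a standard two-sample approximation (not the permutation procedure used in the study), the minimum detectable difference scales with the metric's coefficient of variation and with 1/sqrt(n). The 40% CV below is an illustrative assumption, not a value from the paper.

        from scipy import stats

        def mdd_percent(cv, n, alpha=0.05, power=0.80):
            """Approximate MDD, as a percentage of the mean, for two groups of n reaches each."""
            df = 2 * n - 2
            t = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
            return 100 * t * cv * (2 / n) ** 0.5

        for n in (3, 9, 30, 100):
            print(f"n = {n:3d} reaches per group -> MDD ~ {mdd_percent(0.40, n):5.1f}% of the mean")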

  10. Impact of holmium fibre laser radiation (λ = 2.1 μm) on the spinal cord dura mater and adipose tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filatova, S A; Kamynin, V A; Ryabova, A V

    The impact of holmium fibre laser radiation on samples of biologic tissues (dura mater of the spinal cord and adipose tissue with interlayers of muscle) is studied. The experimental results are evaluated by the size of the carbonisation and coagulation necrosis zones. The experiment shows that in the case of irradiation of the spinal cord dura mater samples the size of the carbonisation and coagulation necrosis zones is insignificant. In the adipose tissue the carbonisation zone is also insignificant, but the region of cellular structure disturbance is large. In the muscle tissue the situation is the opposite. The cw laser operation provides a clinically acceptable degree of destruction in tissue samples with a minimal carbonisation zone. (laser applications in medicine)

  11. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
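    A hedged sketch of the allocation rule the abstract attributes to the existing method, n1/n2 = (sigma1/sigma2) * sqrt(c2/c1), followed by a quick Monte Carlo check of Welch-test power for the resulting group sizes. The standard deviations, unit costs, effect size and reference group size are assumptions chosen only to make the example concrete.

        import numpy as np
        from scipy import stats

        sigma1, sigma2 = 10.0, 20.0          # group standard deviations (assumed)
        c1, c2 = 1.0, 4.0                    # per-observation sampling costs (assumed)
        delta, n2 = 8.0, 60                  # mean difference to detect, reference group size
        n1 = round(n2 * (sigma1 / sigma2) * (c2 / c1) ** 0.5)

        rng = np.random.default_rng(0)
        reps, rejections = 5000, 0
        for _ in range(reps):
            x = rng.normal(0.0, sigma1, n1)
            y = rng.normal(delta, sigma2, n2)
            if stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05:   # Welch test
                rejections += 1
        total_cost = c1 * n1 + c2 * n2
        print(f"n1={n1}, n2={n2}, total cost={total_cost:.0f}, simulated power ~ {rejections / reps:.2f}")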

  12. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  13. Maintaining oncologic integrity with minimally invasive resection of pediatric embryonal tumors.

    PubMed

    Phelps, Hannah M; Ayers, Gregory D; Ndolo, Josephine M; Dietrich, Hannah L; Watson, Katherine D; Hilmes, Melissa A; Lovvorn, Harold N

    2018-05-08

    Embryonal tumors arise typically in infants and young children and are often massive at presentation. Operative resection is a cornerstone in the multimodal treatment of embryonal tumors but potentially disrupts therapeutic timelines. When used appropriately, minimally invasive surgery can minimize treatment delays. The oncologic integrity and safety attainable with minimally invasive resection of embryonal tumors, however, remains controversial. Query of the Vanderbilt Cancer Registry identified all children treated for intracavitary, embryonal tumors during a 15-year period. Tumors were assessed radiographically to measure volume (mL) and image-defined risk factors (neuroblastic tumors only) at time of diagnosis, and at preresection and postresection. Patient and tumor characteristics, perioperative details, and oncologic outcomes were compared between minimally invasive surgery and open resection of tumors of comparable size. A total of 202 patients were treated for 206 intracavitary embryonal tumors, of which 178 were resected either open (n = 152, 85%) or with minimally invasive surgery (n = 26, 15%). The 5-year, relapse-free, and overall survival were not significantly different after minimally invasive surgery or open resection of tumors having a volume less than 100 mL, corresponding to the largest resected with minimally invasive surgery (P = .249 and P = .124, respectively). No difference in margin status or lymph node sampling between the 2 operative approaches was detected (p = .333 and p = .070, respectively). Advantages associated with minimally invasive surgery were decreased blood loss (P < .001), decreased operating time (P = .002), and shorter hospital stay (P < .001). Characteristically, minimally invasive surgery was used for smaller volume and earlier stage neuroblastic tumors without image-defined risk factors. When selected appropriately, minimally invasive resection of pediatric embryonal tumors, particularly neuroblastic tumors, provides acceptable oncologic integrity. Large tumor volume, small patient size, and image-defined risk factors may limit the broader applicability of minimally invasive surgery. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors

    PubMed Central

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for children studies. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created by the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. Thereby, we proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  15. Compendium of Operations Research and Economic Analysis Studies

    DTIC Science & Technology

    1992-10-01

    were to: (1) review and document current policies and procedures, (2) identify relevant economic and non-economic decision variables, (3) design a...minimize the total sample size while ensuring that the proportion of samples closely resembled the actual population proportions. Both linear and non ...would cost about $290.00. DLA-92-P1010. Impact of Increasing the Non-Competitive Threshold from Index No. 92-26 $2,500 to $5,000 (October 1991) In

  16. Sampling Mars: Analytical requirements and work to do in advance

    NASA Technical Reports Server (NTRS)

    Koeberl, Christian

    1988-01-01

    Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Among the most exciting questions is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should be able to use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and elaborate in order to avoid sampling errors. The significance of analyzing a milligram or submilligram sized sample and relating that to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to have the possibility of returning as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.

  17. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January, 2009 and December, 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean number of indicators met = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature on behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775

  18. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
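    As a simpler back-of-the-envelope companion to the constrained scheme described above (not the paper's Lagrange-multiplier solution), the number of repeated scans needed to estimate a mean ED to relative precision d with confidence 1 - gamma can be approximated by n = (z * CV / d)^2, where CV is the relative variability of the MOSFET-based ED measurements. The CV values below are assumptions.

        import math
        from scipy import stats

        def n_scans(cv, precision=0.05, conf=0.95):
            """Approximate number of scans to estimate a mean ED to the given relative precision."""
            z = stats.norm.ppf(1 - (1 - conf) / 2)
            return math.ceil((z * cv / precision) ** 2)

        for cv in (0.05, 0.10, 0.15):
            print(f"measurement CV = {cv:.2f} -> {n_scans(cv)} scans for 5% precision, 95% confidence")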

  19. Randomization in clinical trials: stratification or minimization? The HERMES free simulation software.

    PubMed

    Fron Chabouis, Hélène; Chabouis, Francis; Gillaizeau, Florence; Durieux, Pierre; Chatellier, Gilles; Ruse, N Dorin; Attal, Jean-Pierre

    2014-01-01

    Operative clinical trials are often small and open-label. Randomization is therefore very important. Stratification and minimization are two randomization options in such trials. The first aim of this study was to compare stratification and minimization in terms of predictability and balance in order to help investigators choose the most appropriate allocation method. Our second aim was to evaluate the influence of various parameters on the performance of these techniques. The created software generated patients according to chosen trial parameters (e.g., number of important prognostic factors, number of operators or centers, etc.) and computed predictability and balance indicators for several stratification and minimization methods over a given number of simulations. Block size and proportion of random allocations could be chosen. A reference trial was chosen (50 patients, 1 prognostic factor, and 2 operators) and eight other trials derived from this reference trial were modeled. Predictability and balance indicators were calculated from 10,000 simulations per trial. Minimization performed better with complex trials (e.g., smaller sample size, increasing number of prognostic factors, and operators); stratification imbalance increased when the number of strata increased. An inverse correlation between imbalance and predictability was observed. A compromise between predictability and imbalance still has to be found by the investigator but our software (HERMES) gives concrete reasons for choosing between stratification and minimization; it can be downloaded free of charge. This software will help investigators choose the appropriate randomization method in future two-arm trials.
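    The sketch below is a minimal covariate-adaptive minimization routine in the Pocock-Simon spirit, of the kind such simulation software compares against stratification: each new patient is assigned to the arm that would minimize the summed marginal imbalance over the prognostic factors, with a fixed probability of following that best arm. The factor names, the 0.8 assignment probability and the equal factor weights are assumptions, not HERMES defaults.

        import random

        def minimization_assign(patient, counts, arms=("A", "B"), p_best=0.8):
            """Assign a patient (dict of factor -> level) using marginal-imbalance minimization."""
            imbalance = {}
            for arm in arms:
                total = 0
                for factor, level in patient.items():
                    # counts per arm for this factor level if the patient joined `arm`
                    hypothetical = [counts[a][factor].get(level, 0) + (a == arm) for a in arms]
                    total += max(hypothetical) - min(hypothetical)
                imbalance[arm] = total
            best = min(arms, key=lambda a: imbalance[a])
            chosen = best if random.random() < p_best else random.choice(arms)
            for factor, level in patient.items():
                counts[chosen][factor][level] = counts[chosen][factor].get(level, 0) + 1
            return chosen

        counts = {arm: {"operator": {}, "cavity_depth": {}} for arm in ("A", "B")}
        print(minimization_assign({"operator": "op1", "cavity_depth": "deep"}, counts))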

  20. Minimally Invasive Thumb-sized Pterional Craniotomy for Surgical Clip Ligation of Unruptured Anterior Circulation Aneurysms

    PubMed Central

    Deshaies, Eric M; Villwock, Mark R; Singla, Amit; Toshkezi, Gentian; Padalino, David J

    2015-01-01

    Less invasive surgical approaches for intracranial aneurysm clipping may reduce length of hospital stay, surgical morbidity, and treatment cost, and improve patient outcomes. We present our experience with a minimally invasive pterional approach for anterior circulation aneurysms performed in a major tertiary cerebrovascular center and compare the results with an age-matched dataset from the Nationwide Inpatient Sample (NIS). From August 2008 to December 2012, 22 elective aneurysm clippings on patients ≤55 years of age were performed by the same dual fellowship-trained cerebrovascular/endovascular neurosurgeon. One patient (4.5%) experienced transient post-operative complications. 18 of 22 patients returned for follow-up imaging and there were no recurrences over an average follow-up of 22 months. A search in the NIS database from 2008 to 2010, also restricted to patients ≤55 years of age, yielded 1,341 hospitalizations for surgical clip ligation of unruptured cerebral aneurysms. Inpatient length of stay and hospital charges at our institution using the minimally invasive thumb-sized pterional technique were nearly half those of the NIS (length of stay: 3.2 vs 5.7 days; hospital charges: $52,779 vs. $101,882). The minimally invasive thumb-sized pterional craniotomy allows good exposure of unruptured small and medium-sized supraclinoid anterior circulation aneurysms. Cerebrospinal fluid drainage from key subarachnoid cisterns and constant bimanual microsurgical techniques avoid the need for retractors, which can cause contusions, localized venous infarctions, and post-operative cerebral edema at the retractor sites. Utilizing this set of techniques has afforded our patients a shorter hospital stay at a lower cost compared to the national average. PMID:26325337

  1. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
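    A small numerical sketch of point (3) above, using normal-approximation formulas rather than the paper's exact expressions: the main trial is sized with the pilot SD, and the achieved power is then recomputed under the assumption that the true SD is larger. The effect size, pilot SD and target power are made-up values.

        import math
        from scipy import stats

        alpha, target_power, delta, sd_pilot = 0.05, 0.80, 5.0, 10.0
        z_a = stats.norm.ppf(1 - alpha / 2)
        z_b = stats.norm.ppf(target_power)
        n = math.ceil(2 * ((z_a + z_b) * sd_pilot / delta) ** 2)   # per-group size from the pilot SD

        for ratio in (1.0, 1.1, 1.25, 1.5):                        # true SD / pilot SD
            sd_true = ratio * sd_pilot
            achieved = stats.norm.cdf(delta / (sd_true * math.sqrt(2 / n)) - z_a)
            print(f"true SD = {ratio:.2f} x pilot SD -> achieved power ~ {achieved:.2f}")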

  2. A thermal emission spectral library of rock-forming minerals

    NASA Astrophysics Data System (ADS)

    Christensen, Philip R.; Bandfield, Joshua L.; Hamilton, Victoria E.; Howard, Douglas A.; Lane, Melissa D.; Piatek, Jennifer L.; Ruff, Steven W.; Stefanov, William L.

    2000-04-01

    A library of thermal infrared spectra of silicate, carbonate, sulfate, phosphate, halide, and oxide minerals has been prepared for comparison to spectra obtained from planetary and Earth-orbiting spacecraft, airborne instruments, and laboratory measurements. The emphasis in developing this library has been to obtain pure samples of specific minerals. All samples were hand processed and analyzed for composition and purity. The majority are 710-1000 μm particle size fractions, chosen to minimize particle size effects. Spectral acquisition follows a method described previously, and emissivity is determined to within 2% in most cases. Each mineral spectrum is accompanied by descriptive information in database form including compositional information, sample quality, and a comments field to describe special circumstances and unique conditions. More than 150 samples were selected to include the common rock-forming minerals with an emphasis on igneous and sedimentary minerals. This library is available in digital form and will be expanded as new, well-characterized samples are acquired.

  3. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.

  4. Modeling chain folding in protein-constrained circular DNA.

    PubMed Central

    Martino, J A; Olson, W K

    1998-01-01

    An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675

  5. Thermal diffusivity and adiabatic limit temperature characterization of consolidate granular expanded perlite using the flash method

    NASA Astrophysics Data System (ADS)

    Raefat, Saad; Garoum, Mohammed; Laaroussi, Najma; Thiam, Macodou; Amarray, Khaoula

    2017-07-01

    In this work an experimental investigation of the apparent thermal diffusivity and adiabatic limit temperature of expanded granular perlite mixes was carried out using the flash technique. Perlite granulates were sieved to produce essentially three characteristic grain sizes. The consolidated samples were manufactured by mixing controlled proportions of plaster and water. The effect of the particle size on the diffusivity was examined. The inverse estimation of the diffusivity, the adiabatic limit temperature at the rear face, and the heat loss coefficients was performed using several numerical global minimization procedures. The function to be minimized is the quadratic distance between the experimental temperature rise at the rear face and the analytical model derived from one-dimensional heat conduction. It is shown that, for all granulometries tested, the estimated parameters lead to a good agreement between the mathematical model and the experimental data.
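    A hedged sketch of this kind of inverse estimation, using the ideal adiabatic flash-method model of Parker et al. for the rear-face temperature rise and a least-squares fit on synthetic data; heat losses are neglected here, so this is a simplified stand-in for the model and global minimizers actually used in the work. Sample thickness, true diffusivity and noise level are assumed values.

        import numpy as np
        from scipy.optimize import curve_fit

        L = 0.01                                    # sample thickness in metres (assumed)

        def rear_face(t, alpha, dT_max, n_terms=50):
            """Adiabatic rear-face temperature rise for thermal diffusivity alpha."""
            s = np.ones_like(t)
            for n in range(1, n_terms + 1):
                s += 2 * (-1) ** n * np.exp(-(n * np.pi) ** 2 * alpha * t / L**2)
            return dT_max * s

        t = np.linspace(1e-3, 300.0, 400)           # time in seconds
        alpha_true, dT_true = 3e-7, 1.5             # m^2/s and kelvin (assumed)
        rng = np.random.default_rng(1)
        data = rear_face(t, alpha_true, dT_true) + rng.normal(0.0, 0.02, t.size)

        (alpha_hat, dT_hat), _ = curve_fit(rear_face, t, data, p0=[1e-7, 1.0])
        print(f"estimated diffusivity ~ {alpha_hat:.2e} m^2/s (true value {alpha_true:.2e})")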

  6. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort

    NASA Astrophysics Data System (ADS)

    Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy’s regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.

  7. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.

  8. Asymmetric flow field flow fractionation with light scattering detection - an orthogonal sensitivity analysis.

    PubMed

    Galyean, Anne A; Filliben, James J; Holbrook, R David; Vreeland, Wyatt N; Weinberg, Howard S

    2016-11-18

    Asymmetric flow field flow fractionation (AF4) has several instrumental factors that may have a direct effect on separation performance. A sensitivity analysis was applied to ascertain the relative importance of AF4 primary instrument factor settings for the separation of a complex environmental sample. The analysis evaluated the impact of instrumental factors, namely cross flow, ramp time, focus flow, injection volume, and run buffer concentration, on the multi-angle light scattering measurement of natural organic matter (NOM) molar mass (MM). A 2^(5-1) orthogonal fractional factorial design was used to minimize analysis time while preserving the accuracy and robustness in the determination of the main effects and interactions between any two instrumental factors. By assuming that separations resulting in smaller MM measurements would be more accurate, the analysis produced a ranked list of effect estimates for factors and interactions of factors based on their relative importance in minimizing the MM. The most important and statistically significant AF4 instrumental factors were buffer concentration and cross flow. The least important was ramp time. A parallel 2^(5-2) orthogonal fractional factorial design was also employed on five environmental factors for synthetic natural water samples containing silver nanoparticles (NPs), namely: NP concentration, NP size, NOM concentration, specific conductance, and pH. None of the water quality characteristic effects or interactions were found to be significant in minimizing the measured MM; however, the interaction between NP concentration and NP size was an important effect when considering NOM recovery. This work presents a structured approach for the rigorous assessment of AF4 instrument factors and optimal settings for the separation of complex samples utilizing an efficient orthogonal fractional factorial design and appropriate graphical analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
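    The design itself is easy to reconstruct in outline: the sketch below builds a 2^(5-1) half fraction by crossing four of the instrument factors at two coded levels and aliasing the fifth with their four-way interaction (defining relation I = ABCDE). Which factor plays the role of the generated column is an assumption; the paper does not say.

        from itertools import product

        base_factors = ["cross_flow", "ramp_time", "focus_flow", "injection_volume"]
        generated = "buffer_concentration"           # aliased with the ABCD interaction here

        runs = []
        for levels in product((-1, 1), repeat=len(base_factors)):
            e = levels[0] * levels[1] * levels[2] * levels[3]      # generator E = ABCD
            runs.append(dict(zip(base_factors + [generated], levels + (e,))))

        print(len(runs), "runs instead of 32")       # 16 runs cover five two-level factors
        print(runs[0])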

  9. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  10. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
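    A rough sketch of the first calculator's idea under the reversible catalytic model SP(a) = SCR/(SCR+SRR) * (1 - exp(-(SCR+SRR)*a)): a Wald confidence interval for SP is computed from the survey counts and each endpoint is numerically inverted to an SCR, assuming a known SRR and, for simplicity here, a single reference age rather than the full age distribution handled in the paper. The survey numbers, SRR and age are assumptions.

        import math
        from scipy import stats
        from scipy.optimize import brentq

        def sp_from_scr(scr, srr, age):
            rate = scr + srr
            return scr / rate * (1 - math.exp(-rate * age))

        def scr_from_sp(sp, srr, age):
            return brentq(lambda s: sp_from_scr(s, srr, age) - sp, 1e-8, 10.0)

        n, positives, srr, age = 300, 90, 0.01, 20.0    # assumed survey size, seropositives, SRR, age
        p_hat = positives / n
        half_width = stats.norm.ppf(0.975) * math.sqrt(p_hat * (1 - p_hat) / n)
        scr_hat = scr_from_sp(p_hat, srr, age)
        scr_lo = scr_from_sp(p_hat - half_width, srr, age)
        scr_hi = scr_from_sp(p_hat + half_width, srr, age)
        print(f"SCR ~ {scr_hat:.4f}  (95% CI {scr_lo:.4f} - {scr_hi:.4f})")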

  11. Systematic review of the cost-effectiveness of sample size maintenance programs in studies involving postal questionnaires reveals insufficient economic information.

    PubMed

    David, Michael C; Bensink, Mark; Higashi, Hideki; Boyd, Roslyn; Williams, Lesley; Ware, Robert S

    2012-10-01

    To identify and assess the existing cost-effectiveness evidence for sample size maintenance programs. Articles were identified by searching the Cochrane Central Register of Controlled Trials, Embase, CINAHL, PubMed, and Web of Science from 1966 to July 2011. Randomized controlled trials in which investigators evaluated program cost-effectiveness in postal questionnaires were eligible for inclusion. Fourteen studies from 13 articles, with 11,165 participants, met the inclusion criteria. Thirty-one distinct programs were identified; each incorporated at least one strategy (reminders, incentives, modified questionnaires, or types of postage) aimed at minimizing attrition. Reminders, in the form of replacement questionnaires and cards, were the most commonly used strategies, with 15 and 11 studies reporting their usage, respectively. All strategies improved response, with financial incentives being the most costly. Heterogeneity between studies was too great to allow for meta-analysis of the results. The implementation of strategies such as no-obligation incentives, modified questionnaires, and personalized reply-paid postage improved program cost-effectiveness. Analyses of attrition minimization programs need to consider both cost and effect in their evaluation. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Patency of paediatric endotracheal tubes for airway instrumentation.

    PubMed

    Elfgen, J; Buehler, P K; Thomas, J; Kemper, M; Imach, S; Weiss, M

    2017-01-01

    Airway exchange catheters (AEC) and fiberoptic bronchoscopes (FOB) for tracheal intubation are selected so that there is only a minimal gap between their outer diameter and the inner diameter of the endotracheal tube (ETT), in order to minimize the risk of impingement during airway instrumentation. This study aimed to test the ease of passage of FOBs and AECs through paediatric ETTs of different sizes and from different manufacturers when using current recommendations for dimensional equipment compatibility taken from textbooks and manufacturers' information. Twelve different brands of cuffed and uncuffed ETTs sized ID 2.5 to 5.0 mm were evaluated in an in vitro set-up. Ease of device passage as well as the locations of impaired passage within the ETT were assessed. Redundant samples were used for same-sized ETTs and all measurements were triple-checked in randomized order. In total, 51 paired samples of uncuffed as well as cuffed paediatric ETTs were tested. There were substantial differences in the ease of ETT passage, concordantly for FOBs and AECs, among different manufacturers, but also among the product lines from the same manufacturer for a given ID size. Restriction to passage was most frequently found near the endotracheal tube tip or as a gradually increasing resistance along the ETT shaft. Current recommendations for the dimensional compatibility of AECs and FOBs with ETTs do not appear to be completely accurate for all ETT brands available. We recommend that specific equipment combinations always be tested carefully together before attempting to use them in a patient. © 2016 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  13. Investigation on the structural characterization of pulsed p-type porous silicon

    NASA Astrophysics Data System (ADS)

    Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.

    2017-08-01

    P-type porous silicon (PS) was successfully formed using electrochemical pulse (PC) etching and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF) based solution at a current density of J = 10 mA/cm² for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), comprising a 10 ms on-time (Ton) and a 4 ms pause time (Toff). FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produced more uniform circular structures, with an estimated average pore size of 42.14 nm, compared to the DC porous (PDC) sample with an estimated average size of 16.37 nm. The EDX spectrum for both samples showed high Si content with minimal presence of oxide.

  14. Barostat testing of rectal sensation and compliance in humans: comparison of results across two centres and overall reproducibility.

    PubMed

    Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J

    2005-12-01

    We assessed the reproducibility of measurements of rectal compliance and sensation in health in studies conducted at two centres. We estimated the sample size necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h, and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using the ascending method of limits, and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) in random order. Results obtained at the two centres differed minimally. Reproducibility of sensory end points varies with the type of sensation, pressure level and method of distension. The pressure threshold for pain and the sensory ratings for non-painful sensations at 36 and 48 mmHg distension were the most reproducible in the two centres. Sample size calculations suggested that a crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but in a two-centre study, differences in observed results of sensation are minimal, and the pressure threshold for pain and sensory ratings at 36-48 mmHg of distension are reproducible.
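
    The crossover recommendation above rests on the standard paired-difference sample size formula n = ((z₁₋α/₂ + z₁₋β)·σd/δ)². The sketch below applies that formula with hypothetical values for the within-subject standard deviation and the target change; it is illustrative, not a reconstruction of the study's calculation of 21 subjects.

```python
# Minimal sketch: sample size for detecting a mean within-subject change delta
# in a crossover (paired) design, n = ((z_{1-alpha/2} + z_{1-beta}) * sd_diff / delta)^2.
# Illustrative only; sd_diff and delta below are hypothetical.
import math
from scipy.stats import norm

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return math.ceil(((z_a + z_b) * sd_diff / delta) ** 2)

# e.g. detect a 30% change of a 30 mmHg threshold (delta = 9 mmHg)
# with a hypothetical within-subject SD of 14 mmHg
print(paired_sample_size(delta=9.0, sd_diff=14.0))
```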

  15. Sampling strategies for estimating acute and chronic exposures of pesticides in streams

    USGS Publications Warehouse

    Crawford, Charles G.

    2004-01-01

    The Food Quality Protection Act of 1996 requires that human exposure to pesticides through drinking water be considered when establishing pesticide tolerances in food. Several systematic and seasonally weighted systematic sampling strategies for estimating pesticide concentrations in surface water were evaluated through Monte Carlo simulation, using intensive datasets from four sites in northwestern Ohio. The number of samples for the strategies ranged from 4 to 120 per year. Sampling strategies with a minimal sampling frequency outside the growing season can be used for estimating time-weighted mean and percentile concentrations of pesticides with little loss of accuracy and precision, compared to strategies with the same sampling frequency year round. Less frequent sampling strategies can be used at large sites. A sampling frequency of 10 times monthly during the pesticide runoff period at a 90-km² basin and four times monthly at a 16,400-km² basin provided estimates of the time-weighted mean, 90th, 95th, and 99th percentile concentrations that fell within 50 percent of the true value virtually all of the time. By taking into account basin size and the periodic nature of pesticide runoff, the costs of obtaining estimates of time-weighted mean and percentile pesticide concentrations can be minimized.
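
    The Monte Carlo logic described above can be illustrated with a small simulation: draw systematic samples of varying frequency from a daily concentration series and see how well they recover the time-weighted mean and an upper percentile. The series below is synthetic (a lognormal baseline plus a runoff-season pulse), not the Ohio data used in the study.

```python
# Minimal sketch of the Monte Carlo idea: draw systematic samples from a daily
# concentration series and check how well they recover the time-weighted mean
# and the 95th percentile. The series is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(365)
pulse = np.where((days > 120) & (days < 200), 20.0 * np.exp(-((days - 150) / 15.0) ** 2), 0.0)
conc = rng.lognormal(mean=0.0, sigma=0.5, size=365) + pulse   # ug/L, synthetic

true_mean, true_p95 = conc.mean(), np.percentile(conc, 95)

def systematic_sample(series, n_per_year, rng):
    step = len(series) / n_per_year
    start = rng.uniform(0, step)
    idx = (start + step * np.arange(n_per_year)).astype(int)
    return series[idx]

for n in (4, 12, 26, 52, 120):
    errs_mean, errs_p95 = [], []
    for _ in range(2000):
        s = systematic_sample(conc, n, rng)
        errs_mean.append(abs(s.mean() - true_mean) / true_mean)
        errs_p95.append(abs(np.percentile(s, 95) - true_p95) / true_p95)
    print(f"n={n:3d}  median |rel. error|: mean={np.median(errs_mean):.2f}  p95={np.median(errs_p95):.2f}")
```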

  16. Automated system measuring triple oxygen and nitrogen isotope ratios in nitrate using the bacterial method and N2O decomposition by microwave discharge.

    PubMed

    Hattori, Shohei; Savarino, Joel; Kamezaki, Kazuki; Ishino, Sakiko; Dyckmans, Jens; Fujinawa, Tamaki; Caillon, Nicolas; Barbero, Albane; Mukotaka, Arata; Toyoda, Sakae; Well, Reinhard; Yoshida, Naohiro

    2016-12-30

    Triple oxygen and nitrogen isotope ratios in nitrate are powerful tools for assessing atmospheric nitrate formation pathways and their contribution to ecosystems. N2O decomposition using microwave-induced plasma (MIP) has been used only for measurements of oxygen isotopes to date, but it is also possible to measure nitrogen isotopes during the same analytical run. The main improvements to a previous system are (i) an automated system for distributing nitrate to the bacterial medium, (ii) N2O separation by gas chromatography before N2O decomposition using the MIP, (iii) use of a corundum tube for microwave discharge, and (iv) development of an automated system for isotopic measurements. Three nitrate standards with sample sizes of 60, 80, 100, and 120 nmol were measured to investigate the sample size dependence of the isotope measurements. The δ17O, δ18O, and Δ17O values increased with increasing sample size, although the δ15N value showed no significant size dependency. Different calibration slopes and intercepts were obtained for different sample amounts; both were dependent on sample size, indicating that the extent of oxygen exchange also depends on sample size. The sample-size-dependent slopes and intercepts were fitted using natural log (ln) regression curves, so that slopes and intercepts can be estimated and corrections applied at any sample size. When using 100 nmol samples, the standard deviations of residuals from the regression lines for this system were 0.5‰, 0.3‰, and 0.1‰ for the δ18O, Δ17O, and δ15N values, respectively, results that are not inferior to those from other systems using gold tube or gold wire. An automated system was thus developed to measure triple oxygen and nitrogen isotopes in nitrate using N2O decomposition by MIP. This system enables us to measure both triple oxygen and nitrogen isotopes in nitrate with comparable precision and sample throughput (23 min per sample on average), and minimal manual treatment. Copyright © 2016 John Wiley & Sons, Ltd.
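
    The sample-size correction described above amounts to fitting the calibration slope (and intercept) against the natural log of the sample amount and evaluating the fit at the size of each unknown. A minimal sketch with hypothetical slope values follows.

```python
# Minimal sketch: fit the sample-size dependence of a calibration slope with a
# natural-log regression, slope(n) = c1 * ln(n) + c0, and use it to predict the
# slope at an arbitrary sample amount. The data points below are hypothetical
# placeholders, not values from the paper.
import numpy as np

n_nmol = np.array([60.0, 80.0, 100.0, 120.0])        # sample amounts
slopes = np.array([0.88, 0.91, 0.93, 0.95])           # hypothetical calibration slopes

c1, c0 = np.polyfit(np.log(n_nmol), slopes, 1)         # least-squares fit in ln(n)
predict_slope = lambda n: c1 * np.log(n) + c0

print(f"slope(n) = {c1:.3f}*ln(n) + {c0:.3f}; predicted at 90 nmol: {predict_slope(90):.3f}")
```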

  17. Factors Associated with the Performance and Cost-Effectiveness of Using Lymphatic Filariasis Transmission Assessment Surveys for Monitoring Soil-Transmitted Helminths: A Case Study in Kenya

    PubMed Central

    Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.

    2015-01-01

    Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730

  18. Determination of thorium by fluorescent x-ray spectrometry

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 Å) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.
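
    The internal-standard principle can be illustrated with a simple linear calibration of thoria content against the Th/Tl intensity ratio; the intensity values below are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of internal-standard calibration: regress known ThO2 content
# against the Th/Tl fluorescence intensity ratio, then evaluate the line for an
# unknown. All intensity values are hypothetical.
import numpy as np

tho2_pct = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # standards, % ThO2
ratio = np.array([0.24, 0.49, 0.97, 1.98, 4.05])          # hypothetical I_Th / I_Tl

m, b = np.polyfit(ratio, tho2_pct, 1)                     # calibration line
unknown_ratio = 1.40
print(f"estimated ThO2 = {m * unknown_ratio + b:.2f} %")
```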

  19. A compact, fast ozone UV photometer and sampling inlet for research aircraft

    NASA Astrophysics Data System (ADS)

    Gao, R. S.; Ballard, J.; Watts, L. A.; Thornberry, T. D.; Ciciora, S. J.; McLaughlin, R. J.; Fahey, D. W.

    2012-05-01

    In situ measurements of atmospheric ozone (O3) are performed routinely from many research aircraft platforms. The most common technique depends on the strong absorption of ultraviolet (UV) light by ozone. As atmospheric science advances to the widespread use of unmanned aircraft systems (UASs), there is an increasing requirement for minimizing instrument space, weight, and power while maintaining instrument accuracy, precision and time response. The design and use of a new, dual-beam, polarized, UV photometer instrument for in situ O3 measurements is described. The instrument has a fast sampling rate (2 Hz), high accuracy (3%), and precision (1.1 × 10¹⁰ O3 molecules cm⁻³). The size (36 l), weight (18 kg), and power (50-200 W) make the instrument suitable for many UAS and other airborne platforms. Inlet and exhaust configurations are also described for ambient sampling in the troposphere and lower stratosphere (1000-50 mb) that optimize the sample flow rate to increase time response while minimizing loss of precision due to induced turbulence in the sample cell. In-flight and laboratory intercomparisons with existing O3 instruments show that measurement accuracy is maintained in flight.
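
    The measurement principle is the Beer-Lambert law: the ozone number density follows from the ratio of the detector signals with and without ozone in the absorption cell. The sketch below uses a nominal 253.7 nm absorption cross-section and a hypothetical cell length, not this instrument's specifications.

```python
# Minimal sketch of the UV-absorption principle behind such photometers:
# the Beer-Lambert law gives the ozone number density from the ratio of
# transmitted light with and without ozone in the cell,
#   n = ln(I0 / I) / (sigma * L).
# The cross-section and cell length are nominal illustrative values.
import math

SIGMA_254NM = 1.15e-17   # cm^2 per molecule, nominal O3 cross-section at 253.7 nm
L_CELL = 30.0            # cm, hypothetical optical path length

def ozone_number_density(i_sample, i_reference, sigma=SIGMA_254NM, length=L_CELL):
    """Return O3 molecules per cm^3 from sample/reference detector signals."""
    return math.log(i_reference / i_sample) / (sigma * length)

# e.g. a 0.5% absorption corresponds to ~1.5e13 molecules cm^-3 for these values
print(f"{ozone_number_density(0.995, 1.000):.3e} molecules cm^-3")
```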

  20. Combined Bisulfite Restriction Analysis for brain tissue identification.

    PubMed

    Samsuwan, Jarunya; Muangsub, Tachapol; Yanatatsaneejit, Pattamawadee; Mutirangura, Apiwat; Kitkumthorn, Nakarin

    2018-05-01

    According to the tissue-specific methylation database (doi: 10.1016/j.gene.2014.09.060), methylation at CpG locus cg03096975 in EML2 has been preliminarily shown to be specific to brain tissue. In this study, we enlarged the sample size and developed a technique for identifying brain tissue in aged samples. The Combined Bisulfite Restriction Analysis for EML2 (COBRA-EML2) technique was established and validated in various organ samples obtained from 108 autopsies. In addition, this technique was also tested for its reliability, the minimal DNA concentration detected, and its use in aged samples and in samples obtained from specific brain compartments and the spinal cord. COBRA-EML2 displayed 100% sensitivity and specificity for distinguishing brain tissue from other tissues, showed high reliability, was capable of detecting a minimal DNA concentration of 0.015 ng/μl, and could be used for identifying brain tissue in aged samples. In summary, COBRA-EML2 is a technique to identify brain tissue. This analysis is useful in criminal cases since it can identify vital organ tissues from small samples acquired from crime scenes. The results from this analysis can serve as a medical and forensic marker supporting criminal investigations, and as evidence in court rulings. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Effects of storage time and temperature on pH, specific gravity, and crystal formation in urine samples from dogs and cats.

    PubMed

    Albasan, Hasan; Lulich, Jody P; Osborne, Carl A; Lekcharoensuk, Chalermpol; Ulrich, Lisa K; Carpenter, Kathleen A

    2003-01-15

    To determine effects of storage temperature and time on pH and specific gravity of and number and size of crystals in urine samples from dogs and cats. Randomized complete block design. 31 dogs and 8 cats. Aliquots of each urine sample were analyzed within 60 minutes of collection or after storage at room or refrigeration temperatures (20 vs 6 degrees C [68 vs 43 degrees F]) for 6 or 24 hours. Crystals formed in samples from 11 of 39 (28%) animals. Calcium oxalate (CaOx) crystals formed in vitro in samples from 1 cat and 8 dogs. Magnesium ammonium phosphate (MAP) crystals formed in vitro in samples from 2 dogs. Compared with aliquots stored at room temperature, refrigeration increased the number and size of crystals that formed in vitro; however, the increase in number and size of MAP crystals in stored urine samples was not significant. Increased storage time and decreased storage temperature were associated with a significant increase in number of CaOx crystals formed. Greater numbers of crystals formed in urine aliquots stored for 24 hours than in aliquots stored for 6 hours. Storage time and temperature did not have a significant effect on pH or specific gravity. Urine samples should be analyzed within 60 minutes of collection to minimize temperature- and time-dependent effects on in vitro crystal formation. Presence of crystals observed in stored samples should be validated by reevaluation of fresh urine.

  2. Interventions to Improve Medication Adherence in Hypertensive Patients: Systematic Review and Meta-analysis.

    PubMed

    Conn, Vicki S; Ruppar, Todd M; Chase, Jo-Ana D; Enriquez, Maithe; Cooper, Pamela S

    2015-12-01

    This systematic review applied meta-analytic procedures to synthesize medication adherence interventions that focus on adults with hypertension. Comprehensive searching located trials with medication adherence behavior outcomes. Study sample, design, intervention characteristics, and outcomes were coded. Random-effects models were used in calculating standardized mean difference effect sizes. Moderator analyses were conducted using meta-analytic analogues of ANOVA and regression to explore associations between effect sizes and sample, design, and intervention characteristics. Effect sizes were calculated for 112 eligible treatment-vs.-control group outcome comparisons of 34,272 subjects. The overall standardized mean difference effect size between treatment and control subjects was 0.300. Exploratory moderator analyses revealed interventions were most effective among female, older, and moderate- or high-income participants. The most promising intervention components were those linking adherence behavior with habits, giving adherence feedback to patients, self-monitoring of blood pressure, using pill boxes and other special packaging, and motivational interviewing. The most effective interventions employed multiple components and were delivered over many days. Future research should strive for minimizing risks of bias common in this literature, especially avoiding self-report adherence measures.
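
    The pooling step described above can be sketched with a DerSimonian-Laird random-effects model for standardized mean differences; the effect sizes and variances below are hypothetical, not the review's data.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of standardized
# mean differences. All effect sizes and variances are hypothetical.
import numpy as np

y = np.array([0.10, 0.45, 0.30, 0.25, 0.60])   # standardized mean differences
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06])   # their within-study variances

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)         # between-study variance estimate

w_star = 1.0 / (v + tau2)
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled SMD = {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f}), tau^2 = {tau2:.3f}")
```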

  3. Surrogate and clinical endpoints for studies in peripheral artery occlusive disease: Are statistics the brakes?

    PubMed

    Waliszewski, Matthias W; Redlich, Ulf; Breul, Victor; Tautenhahn, Jörg

    2017-04-30

    The aim of this review is to present the available clinical and surrogate endpoints that may be used in future studies performed in patients with peripheral artery occlusive disease (PAOD). Importantly, we describe statistical limitations of the most commonly used endpoints and offer some guidance with respect to study design for a given sample size. The proposed endpoints may be used in studies of surgical or interventional revascularization and/or drug treatments. Considering recently published study endpoints and designs, the usefulness of these endpoints for reimbursement is evaluated. Based on these potential study endpoints and patient sample size estimates under different non-inferiority or test-for-difference hypotheses, a rating relative to their corresponding reimbursement value is attempted. As regards the benefit for patients and for payers, walking distance (WD) and the ankle brachial index (ABI) are the most feasible endpoints in relatively small study samples, given that other non-vascular factors can be controlled. Angiographic endpoints such as minimal lumen diameter (MLD) do not seem useful from a reimbursement standpoint despite their intuitiveness. Other surrogate endpoints, such as transcutaneous oxygen tension measurements, have yet to be established as useful endpoints in reasonably sized studies of patients with critical limb ischemia (CLI). From a reimbursement standpoint, WD and ABI are effective endpoints for a moderate study sample size, given that non-vascular confounding factors can be controlled.
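
    The trade-off between test-for-difference and non-inferiority designs can be made concrete with the standard normal-approximation sample-size formulas for a continuous endpoint such as walking distance; the sketch below uses hypothetical values for the standard deviation, expected difference and non-inferiority margin.

```python
# Minimal sketch contrasting per-group sample sizes for a test-for-difference
# versus a non-inferiority design on a continuous endpoint such as walking
# distance. Standard normal-approximation formulas; sigma, the expected
# difference and the non-inferiority margin are hypothetical.
import math
from scipy.stats import norm

def n_difference(sigma, delta, alpha=0.05, power=0.80):
    """Two-sided superiority test for a true between-group difference delta."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / delta) ** 2)

def n_noninferiority(sigma, margin, true_diff=0.0, alpha=0.025, power=0.80):
    """One-sided non-inferiority test with margin `margin` (same units as sigma)."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / (margin - true_diff)) ** 2)

# Hypothetical walking-distance example: SD 120 m, expected gain 50 m, NI margin 40 m
print(n_difference(sigma=120, delta=50), n_noninferiority(sigma=120, margin=40))
```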

  4. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been noted in previous studies, but these issues have not been critically investigated in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined with a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of the acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques when the sample number increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  5. Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres.

    PubMed

    Dreyfus, Remi; Xu, Ye; Still, Tim; Hough, L A; Yodh, A G; Torquato, Salvatore

    2015-01-01

    Hyperuniformity characterizes a state of matter for which (scaled) density fluctuations diminish towards zero at the largest length scales. However, the task of determining whether or not an image of an experimental system is hyperuniform is experimentally challenging due to finite-resolution, noise, and sample-size effects that influence characterization measurements. Here we explore these issues, employing video optical microscopy to study hyperuniformity phenomena in disordered two-dimensional jammed packings of soft spheres. Using a combination of experiment and simulation we characterize the possible adverse effects of particle polydispersity, image noise, and finite-size effects on the assignment of hyperuniformity, and we develop a methodology that permits improved diagnosis of hyperuniformity from real-space measurements. The key to this improvement is a simple packing reconstruction algorithm that incorporates particle polydispersity to minimize the free volume. In addition, simulations show that hyperuniformity in finite-sized samples can be ascertained more accurately in direct space than in reciprocal space. Finally, our experimental colloidal packings of soft polymeric spheres are shown to be effectively hyperuniform.
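
    A minimal real-space diagnostic of the kind discussed above is the growth of the number variance Var[N(R)] with window size: for a hyperuniform 2D packing, Var[N(R)]/R² decays at large R. The sketch below runs the check on a synthetic Poisson point pattern (which is not hyperuniform, so the ratio stays roughly flat); a real analysis would use the reconstructed particle centres.

```python
# Minimal sketch of a real-space hyperuniformity diagnostic: estimate the
# variance of the number of particles N(R) falling in randomly placed circular
# windows and inspect how Var[N(R)] grows relative to the window area.
# The point pattern is synthetic (Poisson) and used only for self-containment.
import numpy as np

rng = np.random.default_rng(0)
L = 100.0
pts = rng.uniform(0, L, size=(20000, 2))          # synthetic point pattern

def number_variance(points, radius, n_windows=1000, box=L, rng=rng):
    # keep window centres away from the edges to avoid boundary truncation
    centers = rng.uniform(radius, box - radius, size=(n_windows, 2))
    counts = [np.count_nonzero(np.sum((points - c) ** 2, axis=1) < radius ** 2)
              for c in centers]
    return np.var(counts)

for R in (2.0, 4.0, 8.0, 16.0):
    print(f"R={R:5.1f}  Var[N]/R^2 = {number_variance(pts, R) / R**2:.3f}")
```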

  6. Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres

    NASA Astrophysics Data System (ADS)

    Dreyfus, Remi; Xu, Ye; Still, Tim; Hough, L. A.; Yodh, A. G.; Torquato, Salvatore

    2015-01-01

    Hyperuniformity characterizes a state of matter for which (scaled) density fluctuations diminish towards zero at the largest length scales. However, the task of determining whether or not an image of an experimental system is hyperuniform is experimentally challenging due to finite-resolution, noise, and sample-size effects that influence characterization measurements. Here we explore these issues, employing video optical microscopy to study hyperuniformity phenomena in disordered two-dimensional jammed packings of soft spheres. Using a combination of experiment and simulation we characterize the possible adverse effects of particle polydispersity, image noise, and finite-size effects on the assignment of hyperuniformity, and we develop a methodology that permits improved diagnosis of hyperuniformity from real-space measurements. The key to this improvement is a simple packing reconstruction algorithm that incorporates particle polydispersity to minimize the free volume. In addition, simulations show that hyperuniformity in finite-sized samples can be ascertained more accurately in direct space than in reciprocal space. Finally, our experimental colloidal packings of soft polymeric spheres are shown to be effectively hyperuniform.

  7. High-concentration zeta potential measurements using light-scattering techniques

    PubMed Central

    Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew

    2010-01-01

    Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896
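
    Converting a measured electrophoretic mobility to a zeta potential is commonly done with the Henry equation, μ = 2εζf(κa)/(3η), with f(κa) = 1.5 in the Smoluchowski limit. The sketch below assumes an aqueous medium at 25 °C and a hypothetical mobility value.

```python
# Minimal sketch of how an electrophoretic mobility is converted to a zeta
# potential via the Henry equation, mu = 2*eps*zeta*f(ka) / (3*eta); the
# Smoluchowski approximation corresponds to f(ka) = 1.5. Values below
# (aqueous medium at 25 C, a hypothetical mobility) are illustrative only.
EPS0 = 8.854e-12          # F/m, vacuum permittivity
EPS_R = 78.5              # relative permittivity of water at 25 C
ETA = 0.89e-3             # Pa*s, viscosity of water at 25 C

def zeta_potential(mobility_m2_per_Vs, f_ka=1.5, eps_r=EPS_R, eta=ETA):
    """Return zeta potential in volts from mobility in m^2 V^-1 s^-1."""
    return 3.0 * eta * mobility_m2_per_Vs / (2.0 * eps_r * EPS0 * f_ka)

# hypothetical mobility of -3.0e-8 m^2/Vs (i.e. -3.0 um*cm/V/s)
print(f"zeta = {zeta_potential(-3.0e-8) * 1e3:.1f} mV")
```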

  8. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    PubMed

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for the histone mark H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.
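
    The headline comparison above, the share of full-lane peaks recovered at multiplexed depth, reduces to an interval-overlap calculation between two peak lists. A minimal sketch with tiny hypothetical peak sets follows.

```python
# Minimal sketch of the evaluation metric described above: the fraction of
# full-depth peaks that are overlapped by peaks called on a multiplexed,
# lower-depth sample. Peaks are (chrom, start, end) tuples; the lists below
# are tiny hypothetical examples.
def fraction_recovered(full_peaks, subsampled_peaks):
    """Return the fraction of full-depth peaks overlapped by any subsampled peak."""
    by_chrom = {}
    for c, s, e in subsampled_peaks:
        by_chrom.setdefault(c, []).append((s, e))
    hit = 0
    for c, s, e in full_peaks:
        if any(s < e2 and e > s2 for s2, e2 in by_chrom.get(c, [])):
            hit += 1
    return hit / len(full_peaks)

full = [("chr1", 100, 600), ("chr1", 5000, 5400), ("chr2", 900, 1300)]
sub = [("chr1", 150, 550), ("chr2", 1000, 1250)]
print(f"{100 * fraction_recovered(full, sub):.0f}% of full-depth peaks recovered")
```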

  9. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    PubMed Central

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for the histone mark H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  10. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechidae), on Potato Cultivars.

    PubMed

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. The spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated using Taylor's power law and Iwao's patchiness regression. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at a precision level of 0.25 ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes ranged from 1054 to 5083 and 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, which have different resistance levels, were not significantly different. According to the calculated stop lines, sampling must be continued until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
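
    Green's fixed-precision plan derives a stop line for the cumulative count directly from Taylor's power law, s² = a·m^b: writing the precision as D = SE/mean and the mean as Tₙ/n gives Tₙ = (D²/a)^(1/(b−2))·n^((b−1)/(b−2)). The sketch below uses hypothetical Taylor coefficients, not the values estimated in this study.

```python
# Minimal sketch of Green's (1970) fixed-precision stop line built from
# Taylor's power law, s^2 = a * m^b. Sampling stops once the cumulative count
# after n samples exceeds the stop line T_n. Coefficients are hypothetical.
import numpy as np

def green_stop_line(n_samples, a, b, D):
    n = np.asarray(n_samples, dtype=float)
    return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

a, b = 2.0, 1.3             # hypothetical Taylor's power law coefficients
for D in (0.25, 0.10):
    T = green_stop_line([50, 150, 500, 1000], a, b, D)
    print(f"D={D}: stop line (cumulative count) =", np.round(T, 1))
```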

  11. Novel hybrid cryo-radial method: an emerging alternative to CT-guided biopsy in suspected lung cancer. A prospective case series and description of technique.

    PubMed

    Herath, Samantha; Yap, Elaine

    2018-02-01

    In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method in comparison to CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method, adding cryobiopsy via the R-EBUS Guide Sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of cryobiopsy samples was twice the size of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in the diagnostic yield and a reduction in the need for repeat procedures, without hindering the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure.

  12. Taxonomic minimalism.

    PubMed

    Beattle, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey, but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  13. The creation of digital thematic soil maps at the regional level (with the map of soil carbon pools in the Usa River basin as an example)

    NASA Astrophysics Data System (ADS)

    Pastukhov, A. V.; Kaverin, D. A.; Shchanov, V. M.

    2016-09-01

    A digital map of soil carbon pools was created for the forest-tundra ecotone in the Usa River basin with the use of ERDAS Imagine 2014 and ArcGIS 10.2 software. Supervised classification and thematic interpretation of satellite images and digital terrain models with the use of a georeferenced database on soil profiles were applied. Expert assessment of the natural diversity and representativeness of random samples for different soil groups was performed, and the minimal necessary size of the statistical sample was determined.

  14. 'Mitominis': multiplex PCR analysis of reduced size amplicons for compound sequence analysis of the entire mtDNA control region in highly degraded samples.

    PubMed

    Eichmann, Cordula; Parson, Walther

    2008-09-01

    The traditional protocol for forensic mitochondrial DNA (mtDNA) analysis involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range between 144 and 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have been successfully applied to ancient and forensic samples such as bones and teeth that showed a high degree of degradation.

  15. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
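
    One simple way to see why imperfect tests inflate the required sample size is the frequentist Rogan-Gladen correction, whose variance carries a factor 1/(Se+Sp−1)². The sketch below uses that approximation as an illustration only; it is not the Bayesian average coverage, average length or modified worst outcome machinery applied in the paper, and the sensitivity, specificity and prevalence values are hypothetical.

```python
# Minimal sketch of why imperfect tests inflate the required sample size. With
# known sensitivity Se and specificity Sp, the Rogan-Gladen correction
#   pi_hat = (p_apparent + Sp - 1) / (Se + Sp - 1)
# inflates the variance by 1 / (Se + Sp - 1)^2, so a Wald-type sample size for
# a target half-width d scales accordingly. Illustrative values only.
import math
from scipy.stats import norm

def n_adjusted(prev_true, se, sp, half_width, conf=0.95):
    z = norm.ppf(0.5 + conf / 2.0)
    p_apparent = prev_true * (se + sp - 1.0) + (1.0 - sp)   # expected apparent prevalence
    j = se + sp - 1.0                                        # Youden's index
    return math.ceil(z ** 2 * p_apparent * (1.0 - p_apparent) / (half_width ** 2 * j ** 2))

# perfect test vs an imperfect combination (hypothetical Se = 0.80, Sp = 0.95)
print(n_adjusted(0.10, 1.00, 1.00, 0.03), n_adjusted(0.10, 0.80, 0.95, 0.03))
```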

  16. Maximizing Sampling Efficiency and Minimizing Uncertainty in Presence/Absence Classification of Rare Salamander Populations

    DTIC Science & Technology

    2008-10-31


  17. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681
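
    Two of the quantities at the centre of this analysis, cone density within a sampling window and the share of six-sided Voronoi cells, can be computed directly from cone coordinates. The sketch below uses synthetic coordinates and ignores the buffer-zone handling discussed above.

```python
# Minimal sketch: cone density inside a square sampling window and the share
# of bounded Voronoi cells with six neighbours. Cone coordinates are synthetic;
# a real analysis would use the cone positions identified in the AO image.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
window_um = 160.0
cones = rng.uniform(0, window_um, size=(500, 2))      # synthetic cone centres (um)

density = len(cones) / (window_um / 1000.0) ** 2       # cones per mm^2

vor = Voronoi(cones)
bounded = [vor.regions[r] for r in vor.point_region
           if len(vor.regions[r]) > 0 and -1 not in vor.regions[r]]
hex_share = np.mean([len(region) == 6 for region in bounded])

print(f"density = {density:.0f} cones/mm^2, hexagonal Voronoi cells = {100 * hex_share:.0f}%")
```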

  18. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.

  19. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats is identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations is reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.

  20. Bandwidth based methodology for designing a hybrid energy storage system for a series hybrid electric vehicle with limited all electric mode

    NASA Astrophysics Data System (ADS)

    Shahverdi, Masood

    The cost and fuel economy of hybrid electric vehicles (HEVs) are significantly dependent on the power-train energy storage system (ESS). A series HEV with a minimal all-electric mode (AEM) permits minimizing the size and cost of the ESS. Pursuing this minimal-size tactic, this manuscript introduces a bandwidth-based methodology for designing an efficient ESS. First, for a mid-size reference vehicle, a parametric study is carried out over various minimal-size ESSs, both hybrid (HESS) and non-hybrid (ESS), to find the highest fuel economy. The results show that a specific type of high-power battery with 4.5 kWh capacity can be selected as the winning candidate to study for further minimization. In a second study, following the twin goals of maximizing fuel economy (FE) and improving consumer acceptance, a sports-car-class series HEV (SHEV) was considered as a potential application that requires even more ESS minimization. The challenge with this vehicle is to reduce the ESS size below 4.5 kWh, because the available space allocation is only one fourth, by volume, of the battery size allowed in the mid-size study. Therefore, an advanced bandwidth-based controller is developed that allows a hybridized Subaru BRZ model to be realized with a light ESS. The result allows a SHEV to be realized with 1.13 kWh of ESS capacity. In a third study, the objective is to find optimum SHEV designs under the minimal-AEM assumption that cover the design space between the fuel economies in the mid-size car study and the sports car study. Maximizing FE while minimizing ESS cost is more aligned with customer acceptance in the current state of the market. The techniques applied to manage the power flow between energy sources of the power-train significantly affect the results of this optimization. A Pareto frontier, including ESS cost and FE, for a SHEV with limited AEM is introduced using an advanced bandwidth-based control strategy teamed with duty-ratio control. This controller allows the series hybrid's advantage of tightly managing engine efficiency to be extended to a lighter ESS than those found in products currently available in the market.

  1. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    NASA Astrophysics Data System (ADS)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products entering the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A particular feature of continuous sampling control of multi-element product completeness during assembly is that the inspection is destructive, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness when assembling on automatic rotor lines requires sampling plans that ensure a minimum control sample size. Comparison of the limits on the average outgoing defect level for the continuous sampling plan (CSP-1) and the automated continuous sampling plan (ACSP-1) shows that lower limit values for the average outgoing defect level can be provided using ACSP-1. Also, the average sample size when using the ACSP-1 plan is less than when using the CSP-1 plan. Thus, the application of statistical methods to the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous selective control, will make it possible to automate sampling control procedures and to ensure the required quality level of assembled products while minimizing sample size.
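
    The CSP-1 family of plans referred to above alternates between 100% screening and fractional sampling inspection. A small simulation sketch of a generic CSP-1 plan follows; the clearance number, sampling fraction and defect rate are illustrative, not parameters from the paper.

```python
# Minimal sketch of a CSP-1 continuous sampling plan, simulated: inspect 100%
# of units until i consecutive conforming units are found, then inspect only a
# random fraction f of units, reverting to 100% inspection whenever a sampled
# unit is defective. Detected defectives are removed; the average outgoing
# quality is the defect rate among units that reach the output stream.
# Parameters are illustrative only.
import numpy as np

def simulate_csp1(p_defect, i=30, f=0.1, n_units=200_000, seed=0):
    rng = np.random.default_rng(seed)
    defective = rng.random(n_units) < p_defect
    screening, run = True, 0
    inspected = passed_defectives = passed_total = 0
    for d in defective:
        inspect = screening or rng.random() < f
        if inspect:
            inspected += 1
            if d:
                run, screening = 0, True
                continue                      # defective found and removed
            run += 1
            if screening and run >= i:
                screening, run = False, 0     # switch to sampling inspection
        else:
            passed_defectives += d
        passed_total += 1
    return inspected / n_units, passed_defectives / passed_total

afi, aoq = simulate_csp1(p_defect=0.01)
print(f"average fraction inspected = {afi:.2f}, average outgoing defect level = {aoq:.4f}")
```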

  2. Development and Characterization of Chitosan Cross-Linked With Tripolyphosphate as a Sustained Release Agent in Tablets, Part I: Design of Experiments and Optimization.

    PubMed

    Pinto, Colin A; Saripella, Kalyan K; Loka, Nikhil C; Neau, Steven H

    2018-04-01

    Certain issues with the use of particles of chitosan (Ch) cross-linked with tripolyphosphate (TPP) in sustained release formulations include inefficient drug loading, burst drug release, and incomplete drug release. Acetaminophen was added to Ch:TPP particles to test for advantages of extragranular drug addition over drug addition made during cross-linking. The influences of Ch concentration, Ch:TPP ratio, temperature, ionic strength, and pH were assessed. Design of experiments allowed identification of factors and 2-factor interactions that have significant effects on average particle size and size distribution, yield, zeta potential, and true density of the particles, as well as drug release from the directly compressed tablets. Statistical model equations directed production of a control batch that minimized span, maximized yield, and targeted a t50 of 90 min (sample A); sample B, which differed by targeting a t50 of 240-300 min to provide sustained release; and sample C, which differed from sample B by maximizing span. Sample B maximized yield and provided its targeted t50 and the smallest average particle size, with the higher zeta potential and the lower span of samples B and C. Extragranular addition of a drug to Ch:TPP particles achieved 100% drug loading, eliminated burst drug release, and can accomplish complete drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  3. Modeling the development of written language

    PubMed Central

    Puranik, Cynthia S.; Foorman, Barbara; Foster, Elizabeth; Wilson, Laura Gehron; Tschinkel, Erika; Kantor, Patricia Thatcher

    2011-01-01

    Alternative models of the structure of individual and developmental differences of written composition and handwriting fluency were tested using confirmatory factor analysis of writing samples provided by first- and fourth-grade students. For both groups, a five-factor model provided the best fit to the data. Four of the factors represented aspects of written composition: macro-organization (use of a topic sentence and the number and ordering of ideas), productivity (number and diversity of words used), complexity (mean length of T-unit and syntactic density), and spelling and punctuation. The fifth factor represented handwriting fluency. Handwriting fluency was correlated with written composition factors at both grades. The magnitude of developmental differences between first grade and fourth grade expressed as effect sizes varied for variables representing the five constructs: large effect sizes were found for productivity and handwriting fluency variables; moderate effect sizes were found for complexity and macro-organization variables; and minimal effect sizes were found for spelling and punctuation variables. PMID:22228924

  4. Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method

    NASA Astrophysics Data System (ADS)

    Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.

    2004-03-01

    A non-destructive method for analysis of irregular shape and size samples of Zircaloy has been developed using the recently standardized k0-based internal mono standard instrumental neutron activation analysis (INAA). The samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWR) and pressurized heavy water reactors (PHWR), respectively, have been analyzed. Samples weighing in the range of a few tens of grams were irradiated in the thermal column of Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes in situ relative detection efficiency using the γ-rays of selected activation products in the sample for overcoming γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of using the concentration of the internal mono standard. Concentrations were also determined in a smaller size Zircaloy-4 sample by irradiating in the core position of the reactor to validate the present methodology. The results were compared with literature specifications and were found to be satisfactory. Values of sensitivities and detection limits have been evaluated for the elements analyzed.

  5. Novel hybrid cryo‐radial method: an emerging alternative to CT‐guided biopsy in suspected lung cancer. A prospective case series and description of technique

    PubMed Central

    Yap, Elaine

    2017-01-01

    In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R‐EBUS) is emerging as a safer method in comparison to CT‐guided biopsy. Despite the better safety profile, the yield of R‐EBUS remains lower (73%) than that of CT‐guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method, adding cryobiopsy via the R‐EBUS Guide Sheath (GS) to produce larger, non‐crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R‐EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post‐biopsy to minimize the risk of bleeding in all patients. A chest X‐ray was performed 1 h post‐procedure. All the PPLs were visualized with R‐EBUS. The mean diameter of cryobiopsy samples was twice the size of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in the diagnostic yield and a reduction in the need for repeat procedures, without hindering the marked safety profile of R‐EBUS. Using an endobronchial blocker improves the safety of this procedure. PMID:29321931

  6. Gear and seasonal bias associated with abundance and size structure estimates for lentic freshwater fishes

    USGS Publications Warehouse

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.
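
    The two indices compared across gears and seasons, catch per unit effort (CPUE) and proportional size distribution (PSD), are simple ratios; a minimal sketch with hypothetical lengths and effort follows.

```python
# Minimal sketch of the two population indices discussed above: catch per unit
# effort (CPUE) and proportional size distribution (PSD, the percentage of
# stock-length fish that are also quality length or longer). All lengths,
# effort and length cutoffs below are hypothetical.
def cpue(total_catch, effort_units):
    return total_catch / effort_units

def psd(lengths_mm, stock_mm, quality_mm):
    stock = [l for l in lengths_mm if l >= stock_mm]
    quality = [l for l in stock if l >= quality_mm]
    return 100.0 * len(quality) / len(stock) if stock else float("nan")

# hypothetical sample with stock length 80 mm and quality length 150 mm
lengths = [72, 95, 110, 132, 155, 161, 178, 84, 149, 202]
print(f"CPUE = {cpue(len(lengths), effort_units=4):.1f} fish/net-night, PSD = {psd(lengths, 80, 150):.0f}")
```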

  7. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging can be incorporated into water samples during collection and can cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-µm pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-µm pore size capsule filter, (2) a 0.45-µm pore size capsule filter and a 0.0029-µm pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-µm and a 0.05-µm pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 µg/L, respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized, even when low-flow sample-collection techniques are used, in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  8. The Antaeus Project - An orbital quarantine facility for analysis of planetary return samples

    NASA Technical Reports Server (NTRS)

    Sweet, H. C.; Bagby, J. R.; Devincenzi, D. L.

    1983-01-01

    A design is presented for an earth-orbiting facility for the analysis of planetary return samples under conditions of maximum protection against contamination but minimal damage to the sample. The design is keyed to a Mars sample return mission profile, returning 1 kg of documented subsamples, to be analyzed in low earth orbit by a small crew aided by automated procedures, tissue culture and microassay. The facility itself would consist of Spacelab shells formed into five modules of different sizes, serving respectively for power supply, habitation, supplies and waste storage, linking of the facility, and both quarantine and investigation of the samples. Three barriers are envisioned to protect the biosphere from any putative extraterrestrial organisms: sealed biological containment cabinets within the Laboratory Module, the Laboratory Module itself, and the conditions of space surrounding the facility.

  9. Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation

    NASA Astrophysics Data System (ADS)

    Luo, L.; Cheng, Z.

    2016-12-01

    In-situ measurement of PM2.5 physical and chemical properties is an important approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size, and bend number) on PM2.5 transport were analyzed by numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with a flowrate of 20.0 L·min-1, bore size of 4 mm, and length of 1.0 m was 89.6%; the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with a flowrate of 1.0 L·min-1, bore size of 4 mm, and length of 10.0 m is 86.7%, increasing to 99.2% when the length is reduced to 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with a flowrate of 20.0 L·min-1, bore size of 4 mm, and curvature angle of 90°. Maintaining laminar air flow in the tube, by keeping the ratio of flowrate (L·min-1) to bore size (mm) below 1.4, helps reduce PM2.5 transport loss. For a target PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with lengths less than 6.0 m for flowrates of 2.5, 5.0, or 10.0 L·min-1 and bore sizes larger than 12 mm for flowrates of 16.7 or 20.0 L·min-1. For horizontal sampling tubes, the tube length is determined by the ratio of flowrate to bore size. Meanwhile, it is suggested to reduce the number of bends in tubes with turbulent flow.
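
    The flowrate-to-bore criterion quoted above can be checked directly from the tube Reynolds number. The sketch below is a minimal illustration (not code from the study): it converts a flowrate in L·min-1 and a bore in mm to a Reynolds number for air at room temperature, using assumed air density and viscosity values, and flags whether the 1.4 ratio threshold is respected.

    ```python
    import math

    def tube_reynolds(flow_l_per_min: float, bore_mm: float,
                      rho: float = 1.2, mu: float = 1.8e-5) -> float:
        """Reynolds number for air flow in a circular tube.

        flow_l_per_min : volumetric flowrate in L/min
        bore_mm        : inner tube diameter in mm
        rho, mu        : assumed air density (kg/m^3) and dynamic viscosity (Pa*s)
        """
        q = flow_l_per_min / 1000.0 / 60.0        # m^3/s
        d = bore_mm / 1000.0                      # m
        v = q / (math.pi * d ** 2 / 4.0)          # mean velocity, m/s
        return rho * v * d / mu

    for flow, bore in [(20.0, 4.0), (20.0, 14.0), (1.0, 4.0)]:
        re = tube_reynolds(flow, bore)
        ratio = flow / bore
        regime = "laminar" if re < 2000 else "turbulent"
        print(f"Q={flow} L/min, D={bore} mm: ratio={ratio:.2f}, Re={re:.0f} ({regime})")
    ```

    With these assumed air properties, a flowrate-to-bore ratio of 1.4 corresponds to Re of roughly 2000, consistent with the usual laminar-to-turbulent transition for pipe flow.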

  10. A method for nitrate collection for δ15N and δ18O analysis from waters with low nitrate concentrations

    USGS Publications Warehouse

    Chang, Cecily C.Y.; Langston, J.; Riggs, M.; Campbell, D.H.; Silva, S.R.; Kendall, C.

    1999-01-01

     Recently, methods have been developed to analyze NO3- for δ15N and δ18O, improving our ability to identify NO3- sources and transformations. However, none of the existing methods are suited for waters with low NO3- concentrations (0.7-10 µM). We describe an improved method for collecting and recovering NO3- on exchange columns. To overcome the lengthy collection loading times imposed by the large sample volumes (7-70 L), the sample was prefiltered (0.45 µm) with a large surface area filter. Switching to AG2X anion resin and using a coarser mesh size (100-200) than previous methods also enhanced sample flow. Placement of a cation column in front of the anion column minimized clogging of the anion column by dissolved organic carbon (DOC) accumulation. This also served to minimize transfer of unwanted oxygen atoms from DOC to the 18O portion of the NO3- sample, which would otherwise contaminate the sample and shift δ18O. The cat-AG2X method is suited for on-site sample collection, making it possible to collect and recover NO3- from low ionic strength waters with modest DOC concentrations (80-800 µM), relieving the investigator of transporting large volumes of water back to the laboratory, and offering a means of sampling rain, snow, snowmelt, and stream samples from access-limited sites.

  11. Efficient genotype compression and analysis of large genetic variation datasets

    PubMed Central

    Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.

    2015-01-01

    Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443 fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772

  12. EFFECT OF SHORT-TERM ART INTERRUPTION ON LEVELS OF INTEGRATED HIV DNA.

    PubMed

    Strongin, Zachary; Sharaf, Radwa; VanBelzen, D Jake; Jacobson, Jeffrey M; Connick, Elizabeth; Volberding, Paul; Skiest, Daniel J; Gandhi, Rajesh T; Kuritzkes, Daniel R; O'Doherty, Una; Li, Jonathan Z

    2018-03-28

    Analytic treatment interruption (ATI) studies are required to evaluate strategies aimed at achieving ART-free HIV remission, but the impact of ATI on the viral reservoir remains unclear. We validated a DNA size selection-based assay for measuring levels of integrated HIV DNA and applied it to assess the effects of short-term ATI on the HIV reservoir. Samples from participants from four AIDS Clinical Trials Group (ACTG) ATI studies were assayed for integrated HIV DNA levels. Cryopreserved PBMCs were obtained for 12 participants with available samples pre-ATI and approximately 6 months after ART resumption. Four participants also had samples available during the ATI. The median duration of ATI was 12 weeks. Validation of the HIV Integrated DNA size-Exclusion (HIDE) assay was performed using samples spiked with unintegrated HIV DNA, HIV-infected cell lines, and participant PBMCs. The HIDE assay eliminated 99% of unintegrated HIV DNA species and strongly correlated with the established Alu- gag assay. For the majority of individuals, integrated DNA levels increased during ATI and subsequently declined upon ART resumption. There was no significant difference in levels of integrated HIV DNA between the pre- and post-ATI time points, with the median ratio of post:pre-ATI HIV DNA levels of 0.95. Using a new integrated HIV DNA assay, we found minimal change in the levels of integrated HIV DNA in participants who underwent an ATI followed by 6 months of ART. This suggests that short-term ATI can be conducted without a significant impact on levels of integrated proviral DNA in the peripheral blood. IMPORTANCE Interventions aimed at achieving sustained antiretroviral therapy (ART)-free HIV remission require treatment interruption trials to assess their efficacy. However, these trials are accompanied by safety concerns related to the expansion of the viral reservoir. We validated an assay that uses an automated DNA size-selection platform for quantifying levels of integrated HIV DNA and is less sample- and labor-intensive than current assays. Using stored samples from AIDS Clinical Trials Group studies, we found that short-term ART discontinuation had minimal impact on integrated HIV DNA levels after ART resumption, providing reassurance about the reservoir effects of short-term treatment interruption trials. Copyright © 2018 American Society for Microbiology.

  13. A variable-step-size robust delta modulator.

    NASA Technical Reports Server (NTRS)

    Song, C. L.; Garodnick, J.; Schilling, D. L.

    1971-01-01

    Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
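
    As a rough illustration of adapting the step size from past samples, the sketch below implements a generic adaptive delta modulator in which the step grows when the last two output bits agree (slope overload) and shrinks when they alternate (granular noise). The adaptation rule and constants are assumptions for illustration, not the optimum equations derived in the paper.

    ```python
    import numpy as np

    def adaptive_delta_modulate(x, step0=0.1, grow=1.5, shrink=0.66):
        """Encode signal x with a simple two-past-sample adaptive delta modulator.

        Returns the bit stream and the decoded (staircase) approximation.
        The grow/shrink factors are illustrative, not optimal values.
        """
        bits = np.zeros(len(x), dtype=int)
        approx = np.zeros(len(x))
        step, est, prev_bit = step0, 0.0, 1
        for n, sample in enumerate(x):
            bit = 1 if sample >= est else -1
            # adapt the step size from the two most recent output bits
            step = step * grow if bit == prev_bit else step * shrink
            est += bit * step
            bits[n], approx[n], prev_bit = bit, est, bit
        return bits, approx

    t = np.linspace(0, 1, 500)
    signal = np.sin(2 * np.pi * 5 * t)
    bits, approx = adaptive_delta_modulate(signal)
    print("mean square error:", np.mean((signal - approx) ** 2))
    ```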

  14. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed, with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken, and cracked, and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34%, and the average density from all wrought DU samples was 1.62E+04/cm2. Inclusions in the cast DU samples were equiaxed and intact, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00%, and the average density from all cast DU samples was 2.83E+04/cm2. The average mean grain area from all wrought DU samples was 141 μm2, while the average mean grain area from all cast DU samples was 1.7 mm2. The average Knoop microhardness from all wrought DU samples was 215 HK and the average Knoop microhardness from all cast DU samples was 264 HK.

  15. Use of centrifugal-gravity concentration for rejection of talc and recovery improvement in base-metal flotation

    NASA Astrophysics Data System (ADS)

    Klein, Bern; Altun, Naci Emre; Ghaffari, Hassan

    2016-08-01

    The possibility of using a centrifugal-gravity concentrator to reject Mg-bearing minerals and minimize metal losses in the flotation of base metals was evaluated. Sample characterization, batch scoping tests, pilot-scale tests, and regrind-flotation tests were conducted on a Ni flotation tailings stream. Batch tests revealed that the Mg grade decreased dramatically in the concentrate products. Pilot-scale testing of a continuous centrifugal concentrator (Knelson CVD6) on the flotation tailings revealed that a concentrate with a low mass yield, low Mg content, and high Ni upgrade ratio could be achieved. Under optimum conditions, a concentrate at 6.7% mass yield was obtained with 0.85% Ni grade at 12.9% Ni recovery and with a low Mg distribution (1.7%). Size partition curves demonstrated that the CVD also operated as a size classifier, enhancing the rejection of talc fines. Overall, the CVD was capable of rejecting Mg-bearing minerals. Moreover, an opportunity exists for the novel use of centrifugal-gravity concentration for scavenging flotation tailings and/or after comminution to minimize the amount of Mg-bearing minerals reporting to flotation.

  16. Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Vaughan, O. H.

    1978-01-01

    In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, such as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure traversed by a parallel beam of light used to measure the optical depth as a function of time. An analytic function fit to the optical depth time-decay curve can be directly inverted to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forward-scattering corrections to the measured extinction coefficient are also discussed with the aim of optimizing the experimental design so that the error due to forward-scattering is minimized.

  17. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
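
    The 90/95 POD criterion can be checked for a given hit/miss sample with a one-sided binomial (Clopper-Pearson) lower confidence bound. The sketch below is a generic illustration of that calculation, not the DOEPOD code itself; it reproduces the familiar result that 29 hits out of 29 trials just demonstrates 0.90 POD at 95% confidence.

    ```python
    from scipy.stats import beta

    def pod_lower_bound(hits: int, trials: int, confidence: float = 0.95) -> float:
        """One-sided Clopper-Pearson lower confidence bound on the detection probability."""
        if hits == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

    for hits, trials in [(29, 29), (28, 29), (45, 46)]:
        lb = pod_lower_bound(hits, trials)
        verdict = "meets" if lb >= 0.90 else "fails"
        print(f"{hits}/{trials} hits: 95% lower bound on POD = {lb:.3f} ({verdict} 90/95)")
    ```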

  18. (Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records

    NASA Astrophysics Data System (ADS)

    Lowery, C.; Fraass, A. J.

    2016-02-01

    Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
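
    The effect described above, where a handful of altered or anomalous specimens biases a pooled isotope measurement, can be illustrated with a simple Monte Carlo experiment. The sketch below is a minimal stand-in for the authors' open-source R model, written here in Python with assumed values for the population scatter and the diagenetic offset; it shows how the bias persists while the scatter of pooled δ18O values shrinks as more individuals are combined per sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pooled_d18O(n_individuals, n_samples=10000, pop_mean=0.0, pop_sd=0.5,
                    altered_fraction=0.1, alteration_offset=3.0):
        """Simulate pooled d18O measurements built from n_individuals specimens each.

        pop_sd, altered_fraction, and alteration_offset are illustrative assumptions:
        10% of specimens are diagenetically altered and shifted by +3 permil.
        """
        vals = rng.normal(pop_mean, pop_sd, size=(n_samples, n_individuals))
        altered = rng.random((n_samples, n_individuals)) < altered_fraction
        vals = vals + altered * alteration_offset
        return vals.mean(axis=1)

    for n in [5, 10, 30, 100]:
        pooled = pooled_d18O(n)
        bias = pooled.mean()          # systematic offset from the true population mean
        spread = pooled.std()         # sample-size-dependent scatter
        print(f"n={n:4d}: bias={bias:+.2f} permil, 1-sigma spread={spread:.2f} permil")
    ```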

  19. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. The one type of health professional delivers one treatment, and the other type delivers two treatments, hence, this design is a combination of a nested and crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest costs. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
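
    A flavor of the sample-size calculation can be given for a single pairwise comparison in a design where patients are nested within health professionals. The sketch below is a generic illustration, not the authors' model or their Internet application: it inflates the usual two-sample normal-approximation sample size by the design effect 1 + (m - 1)·ICC, where m is the number of patients per professional; the effect size, variance, and ICC values are assumed.

    ```python
    from math import ceil
    from scipy.stats import norm

    def patients_per_arm(delta, sigma2, icc, m, alpha=0.05, power=0.80):
        """Approximate patients per arm for a two-arm comparison with clustering.

        delta  : difference in means to detect
        sigma2 : total outcome variance
        icc    : intraclass correlation of patients treated by the same professional
        m      : patients per health professional
        """
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_unclustered = 2 * (z ** 2) * sigma2 / delta ** 2
        design_effect = 1 + (m - 1) * icc
        return ceil(n_unclustered * design_effect)

    for m in [5, 10, 20]:
        n = patients_per_arm(delta=0.4, sigma2=1.0, icc=0.05, m=m)
        print(f"m={m:2d} patients per professional -> about {n} patients per arm")
    ```

    The number of professionals per arm then follows as n/m, and in a three-arm trial the power check has to be repeated for each pairwise comparison.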

  20. On the influence of crystal size and wavelength on native SAD phasing.

    PubMed

    Liebschner, Dorothee; Yamada, Yusuke; Matsugaki, Naohiro; Senda, Miki; Senda, Toshiya

    2016-06-01

    Native SAD is an emerging phasing technique that uses the anomalous signal of native heavy atoms to obtain crystallographic phases. The method does not require specific sample preparation to add anomalous scatterers, as the light atoms contained in the native sample are used as marker atoms. The most abundant anomalous scatterer used for native SAD, which is present in almost all proteins, is sulfur. However, the absorption edge of sulfur is at low energy (2.472 keV = 5.016 Å), which makes it challenging to carry out native SAD phasing experiments as most synchrotron beamlines are optimized for shorter wavelength ranges where the anomalous signal of sulfur is weak; for longer wavelengths, which produce larger anomalous differences, the absorption of X-rays by the sample, solvent, loop and surrounding medium (e.g. air) increases tremendously. Therefore, a compromise has to be found between measuring strong anomalous signal and minimizing absorption. It was thus hypothesized that shorter wavelengths should be used for large crystals and longer wavelengths for small crystals, but no thorough experimental analyses have been reported to date. To study the influence of crystal size and wavelength, native SAD experiments were carried out at different wavelengths (1.9 and 2.7 Å with a helium cone; 3.0 and 3.3 Å with a helium chamber) using lysozyme and ferredoxin reductase crystals of various sizes. For the tested crystals, the results suggest that larger sample sizes do not have a detrimental effect on native SAD data and that long wavelengths give a clear advantage with small samples compared with short wavelengths. The resolution dependency of substructure determination was analyzed and showed that high-symmetry crystals with small unit cells require higher resolution for the successful placement of heavy atoms.

  1. A simplified approach to the determination of N-nitroso glyphosate in technical glyphosate using HPLC with post-derivatization and colorimetric detection.

    PubMed

    Kim, Manuela; Stripeikis, Jorge; Iñón, Fernando; Tudino, Mabel

    2007-05-15

    A simple and sensitive HPLC post-derivatization method with colorimetric detection has been developed for the determination of N-nitroso glyphosate in samples of technical glyphosate. Separation of the analyte was accomplished using an anion exchange resin (2.50 mm × 4.00 mm i.d., 15 µm particle size, functional group: quaternary ammonium salt) with 0.0075 M Na2SO4 (pH 11.5) as mobile phase (flow rate: 1.0 mL min-1). After separation, the eluate was derivatized with a colorimetric reagent containing 0.3% (w/v) sulfanilamide, 0.03% (w/v) N-(1-naphthyl)ethylenediamine and 4.5 M HCl in a thermostatted bath at 95 °C. Detection was performed at 546 nm. All stages of the analytical procedure were optimized taking into account the concept of analytical minimalism: shorter operation times, lower costs, lower sample, reagent and energy consumption, and minimal waste. The limit of detection (k = 3), calculated from 10 blank replicates, was 0.04 mg L-1 (0.8 mg kg-1 in the solid sample), which is lower than the maximum tolerable level accepted by the Food and Agriculture Organization of the United Nations.
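
    As a generic illustration of the k = 3 detection-limit convention mentioned above (not the authors' calculation), the limit of detection can be estimated from the standard deviation of replicate blank measurements and the slope of the calibration curve; all numbers below are made up.

    ```python
    import numpy as np

    # hypothetical replicate blank signals (absorbance units) and calibration data
    blanks = np.array([0.011, 0.013, 0.012, 0.010, 0.014,
                       0.012, 0.013, 0.011, 0.012, 0.013])
    conc_std = np.array([0.0, 0.1, 0.2, 0.5, 1.0])            # mg/L standards
    signal_std = np.array([0.012, 0.095, 0.180, 0.430, 0.845])

    slope, intercept = np.polyfit(conc_std, signal_std, 1)    # linear calibration
    lod = 3 * blanks.std(ddof=1) / slope                      # k = 3 convention
    print(f"slope = {slope:.3f} AU per mg/L, LOD = {lod:.3f} mg/L")
    ```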

  2. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
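
    The notion of a minimal network size for memorization can be probed empirically: train three-layer networks of increasing hidden-layer width on a fixed set of random binary patterns and record the smallest width that reaches perfect recall. The sketch below is a generic experiment in that spirit, using scikit-learn's MLPClassifier rather than the authors' setup; pattern dimensions and counts are arbitrary.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n_patterns, n_inputs = 40, 16
    X = rng.integers(0, 2, size=(n_patterns, n_inputs)).astype(float)
    y = rng.integers(0, 2, size=n_patterns)        # random binary labels to memorize

    def memorizes(hidden_units: int) -> bool:
        """Return True if a 3-layer net of this width reaches 100% training accuracy."""
        net = MLPClassifier(hidden_layer_sizes=(hidden_units,), activation="tanh",
                            max_iter=5000, random_state=0)
        net.fit(X, y)
        return net.score(X, y) == 1.0

    for h in range(1, 33):
        if memorizes(h):
            print(f"minimal hidden-layer size for this pattern set: ~{h}")
            break
    ```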

  3. Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis ---Application to Clinical Trials on Early Stage Alzheimer Disease

    PubMed Central

    Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall

    2018-01-01

    Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
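
    One standard way to obtain weights that minimize the required sample size for a linear combination of correlated outcomes is to maximize the squared standardized effect (wᵀδ)² / (wᵀΣw), whose solution is proportional to Σ⁻¹δ. The sketch below illustrates that calculation on a made-up three-outcome example; it is not the authors' multivariate mixed-model code, and the effect sizes and covariance matrix are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    # assumed treatment effects and covariance of three cognitive outcomes
    delta = np.array([0.20, 0.15, 0.10])
    Sigma = np.array([[1.0, 0.4, 0.3],
                      [0.4, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])

    w = np.linalg.solve(Sigma, delta)    # optimal weights (scale is arbitrary)
    w /= w.sum()                         # normalize for readability

    def n_per_arm(effect, variance, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * z ** 2 * variance / effect ** 2

    combined_effect = w @ delta
    combined_var = w @ Sigma @ w
    print("weights:", np.round(w, 3))
    print("combined outcome n/arm:", round(n_per_arm(combined_effect, combined_var)))
    for i in range(3):
        print(f"outcome {i + 1} alone n/arm:", round(n_per_arm(delta[i], Sigma[i, i])))
    ```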

  4. Effect of Frozen Storage Temperature on the Quality of Premium Ice Cream.

    PubMed

    Park, Sung Hee; Jo, Yeon-Ji; Chun, Ji-Yeon; Hong, Geun-Pyo; Davaatseren, Munkhtugs; Choi, Mi-Jung

    2015-01-01

    The market sales of premium ice cream have paralleled the growth in consumer desire for rich flavor and taste. Storage temperature is a major consideration in preserving the quality attributes of premium ice cream products for both the manufacturer and retailers during prolonged storage. We investigated the effect of storage temperature (-18℃, -30℃, -50℃, and -70℃) and storage times, up to 52 wk, on the quality attributes of premium ice cream. Quality attributes tested included ice crystal size, air cell size, melting resistance, and color. Ice crystal size increased from 40.3 μm to 100.1 μm after 52 wk of storage at -18℃. When ice cream samples were stored at -50℃ or -70℃, ice crystal size slightly increased from 40.3 μm to 57-58 μm. Initial air cell size increased from 37.1 μm to 87.7 μm after storage at -18℃ for 52 wk. However, for storage temperatures of -50℃ and -70℃, air cell size increased only slightly from 37.1 μm to 46-47 μm. Low storage temperature (-50℃ and -70℃) resulted in better melt resistance and minimized color changes in comparison to high temperature storage (-18℃ and -30℃). In our study, quality changes in premium ice cream were progressively minimized as the storage temperature decreased to -50℃. No significant beneficial effect of -70℃ storage was found in quality attributes. In the scope of our experiment, we recommend a storage temperature of -50℃ to preserve the quality attributes of premium ice cream.

  5. Effect of Frozen Storage Temperature on the Quality of Premium Ice Cream

    PubMed Central

    Park, Sung Hee; Jo, Yeon-Ji; Chun, Ji-Yeon; Hong, Geun-Pyo

    2015-01-01

    The market sales of premium ice cream have paralleled the growth in consumer desire for rich flavor and taste. Storage temperature is a major consideration in preserving the quality attributes of premium ice cream products for both the manufacturer and retailers during prolonged storage. We investigated the effect of storage temperature (−18℃, −30℃, −50℃, and −70℃) and storage times, up to 52 wk, on the quality attributes of premium ice cream. Quality attributes tested included ice crystal size, air cell size, melting resistance, and color. Ice crystal size increased from 40.3 μm to 100.1 μm after 52 wk of storage at −18℃. When ice cream samples were stored at −50℃ or −70℃, ice crystal size slightly increased from 40.3 μm to 57-58 μm. Initial air cell size increased from 37.1 μm to 87.7 μm after storage at −18℃ for 52 wk. However, for storage temperatures of −50℃ and −70℃, air cell size increased only slightly from 37.1 μm to 46-47 μm. Low storage temperature (−50℃ and −70℃) resulted in better melt resistance and minimized color changes in comparison to high temperature storage (−18℃ and −30℃). In our study, quality changes in premium ice cream were progressively minimized as the storage temperature decreased to −50℃. No significant beneficial effect of −70℃ storage was found in quality attributes. In the scope of our experiment, we recommend a storage temperature of −50℃ to preserve the quality attributes of premium ice cream. PMID:26877639

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.

    Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white noise floor flux sensitivity of ≈2 μΦ0/√Hz. This high spatial resolution is accomplished by deep sub-micron feature sizes, well shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.

  7. Computer modelling of grain microstructure in three dimensions

    NASA Astrophysics Data System (ADS)

    Narayan, K. Lakshmi

    We present a program that generates two-dimensional micrographs of a three-dimensional grain microstructure. The code utilizes a novel scanning, pixel-mapping technique to obtain statistical distributions of surface areas, grain sizes, aspect ratios, perimeters, numbers of nearest neighbors, and volumes of the randomly nucleated particles. The program can be used for comparing existing theories of grain growth and for interpreting the two-dimensional microstructure of three-dimensional samples. Special features have been included to minimize the computation time and resource requirements.
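
    A minimal way to reproduce the flavor of such a model is to nucleate random seed points in a 3D voxel grid, assign every voxel to its nearest seed (a Voronoi-like growth), and then read off grain volumes and a 2D slice as a synthetic micrograph. The sketch below is an assumed, simplified stand-in for the program described above, not its actual algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 64                    # voxels per side of the cubic sample
    n_grains = 50             # number of randomly nucleated grains

    seeds = rng.uniform(0, N, size=(n_grains, 3))
    zz, yy, xx = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing="ij")
    voxels = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3).astype(float)

    # assign each voxel to the nearest nucleus (isotropic growth, Voronoi-like grains)
    best_d2 = np.full(len(voxels), np.inf)
    labels = np.zeros(len(voxels), dtype=int)
    for g, s in enumerate(seeds):
        d2 = ((voxels - s) ** 2).sum(axis=1)
        closer = d2 < best_d2
        best_d2[closer], labels[closer] = d2[closer], g
    labels = labels.reshape(N, N, N)

    volumes = np.bincount(labels.ravel(), minlength=n_grains)    # 3D grain volumes (voxels)
    slice_2d = labels[N // 2]                                    # synthetic 2D micrograph
    areas_2d = np.bincount(slice_2d.ravel(), minlength=n_grains) # apparent grain areas

    print("mean 3D grain volume (voxels):", volumes.mean())
    print("grains visible in the central slice:", int((areas_2d > 0).sum()))
    ```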

  8. Development of an X-ray fluorescence holographic measurement system for protein crystals

    NASA Astrophysics Data System (ADS)

    Sato-Tomita, Ayana; Shibayama, Naoya; Happo, Naohisa; Kimura, Koji; Okabe, Takahiro; Matsushita, Tomohiro; Park, Sam-Yong; Sasaki, Yuji C.; Hayashi, Kouichi

    2016-06-01

    Experimental procedure and setup for obtaining X-ray fluorescence hologram of crystalline metalloprotein samples are described. Human hemoglobin, an α2β2 tetrameric metalloprotein containing the Fe(II) heme active-site in each chain, was chosen for this study because of its wealth of crystallographic data. A cold gas flow system was introduced to reduce X-ray radiation damage of protein crystals that are usually fragile and susceptible to damage. A χ-stage was installed to rotate the sample while avoiding intersection between the X-ray beam and the sample loop or holder, which is needed for supporting fragile protein crystals. Huge hemoglobin crystals (with a maximum size of 8 × 6 × 3 mm3) were prepared and used to keep the footprint of the incident X-ray beam smaller than the sample size during the entire course of the measurement with the incident angle of 0°-70°. Under these experimental and data acquisition conditions, we achieved the first observation of the X-ray fluorescence hologram pattern from the protein crystals with minimal radiation damage, opening up a new and potential method for investigating the stereochemistry of the metal active-sites in biomacromolecules.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato-Tomita, Ayana, E-mail: ayana.sato@jichi.ac.jp, E-mail: shibayam@jichi.ac.jp, E-mail: hayashi.koichi@nitech.ac.jp; Shibayama, Naoya, E-mail: ayana.sato@jichi.ac.jp, E-mail: shibayam@jichi.ac.jp, E-mail: hayashi.koichi@nitech.ac.jp; Okabe, Takahiro

    Experimental procedure and setup for obtaining X-ray fluorescence hologram of crystalline metalloprotein samples are described. Human hemoglobin, an α2β2 tetrameric metalloprotein containing the Fe(II) heme active-site in each chain, was chosen for this study because of its wealth of crystallographic data. A cold gas flow system was introduced to reduce X-ray radiation damage of protein crystals that are usually fragile and susceptible to damage. A χ-stage was installed to rotate the sample while avoiding intersection between the X-ray beam and the sample loop or holder, which is needed for supporting fragile protein crystals. Huge hemoglobin crystals (with a maximum size of 8 × 6 × 3 mm3) were prepared and used to keep the footprint of the incident X-ray beam smaller than the sample size during the entire course of the measurement with the incident angle of 0°-70°. Under these experimental and data acquisition conditions, we achieved the first observation of the X-ray fluorescence hologram pattern from the protein crystals with minimal radiation damage, opening up a new and potential method for investigating the stereochemistry of the metal active-sites in biomacromolecules.

  10. Porosity dependence of terahertz emission of porous silicon investigated using reflection geometry terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Mabilangan, Arvin I.; Lopez, Lorenzo P.; Faustino, Maria Angela B.; Muldera, Joselito E.; Cabello, Neil Irvin F.; Estacio, Elmer S.; Salvador, Arnel A.; Somintac, Armando S.

    2016-12-01

    Porosity dependent terahertz emission of porous silicon (PSi) was studied. The PSi samples were fabricated via electrochemical etching of boron-doped (100) silicon in a solution containing 48% hydrofluoric acid, deionized water and absolute ethanol in a 1:3:4 volumetric ratio. The porosity was controlled by varying the supplied anodic current for each sample. The samples were then optically characterized via normal incidence reflectance spectroscopy to obtain values for their respective refractive indices and porosities. Absorbance of each sample was also computed using the data from its respective reflectance spectrum. Terahertz emission of each sample was acquired through terahertz - time domain spectroscopy. A decreasing trend in the THz signal power was observed as the porosity of each PSi was increased. This was caused by the decrease in the absorption strength as the silicon crystallite size in the PSi was minimized.

  11. Multi-Mission System Analysis for Planetary Entry (M-SAPE) Version 1

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid; Glaab, Louis; Winski, Richard G.; Maddock, Robert W.; Emmett, Anjie L.; Munk, Michelle M.; Agrawal, Parul; Sepka, Steve; Aliaga, Jose; Zarchi, Kerry; hide

    2014-01-01

    This report describes an integrated system for Multi-mission System Analysis for Planetary Entry (M-SAPE). The system in its current form is capable of performing system analysis and design for an Earth entry vehicle suitable for sample return missions. The system includes geometry, mass sizing, impact analysis, structural analysis, flight mechanics, TPS, and a web portal for user access. The report includes details of M-SAPE modules and provides sample results. The current M-SAPE vehicle design concept is based on the Mars sample return (MSR) Earth entry vehicle design, which is driven by minimizing the risk associated with sample containment (no parachute and passive aerodynamic stability). Because M-SAPE exploits a common design concept, any sample return mission, particularly MSR, will benefit from significant reductions in risk and development cost. The design provides a platform by which technologies and design elements can be evaluated rapidly prior to any costly investment commitment.

  12. Time versus energy minimization migration strategy varies with body size and season in long-distance migratory shorebirds.

    PubMed

    Zhao, Meijuan; Christie, Maureen; Coleman, Jonathan; Hassell, Chris; Gosbell, Ken; Lisovski, Simeon; Minton, Clive; Klaassen, Marcel

    2017-01-01

    Migrants have been hypothesised to use different migration strategies between seasons: a time-minimization strategy during their pre-breeding migration towards the breeding grounds and an energy-minimization strategy during their post-breeding migration towards the wintering grounds. Besides season, we propose body size as a key factor in shaping migratory behaviour. Specifically, given that body size is expected to correlate negatively with maximum migration speed and that large birds tend to use more time to complete their annual life-history events (such as moult, breeding and migration), we hypothesise that large-sized species are time stressed all year round. Consequently, large birds are not only likely to adopt a time-minimization strategy during pre-breeding migration, but also during post-breeding migration, to guarantee a timely arrival at both the non-breeding (i.e. wintering) and breeding grounds. We tested this idea using individual tracks across six long-distance migratory shorebird species (family Scolopacidae) along the East Asian-Australasian Flyway varying in size from 50 g to 750 g lean body mass. Migration performance was compared between pre- and post-breeding migration using four quantifiable migratory behaviours that serve to distinguish between a time- and energy-minimization strategy, including migration speed, number of staging sites, total migration distance and step length from one site to the next. During pre- and post-breeding migration, the shorebirds generally covered similar distances, but they tended to migrate faster, used fewer staging sites, and tended to use longer step lengths during pre-breeding migration. These seasonal differences are consistent with the prediction that a time-minimization strategy is used during pre-breeding migration, whereas an energy-minimization strategy is used during post-breeding migration. However, there was also a tendency for the seasonal difference in migration speed to progressively disappear with an increase in body size, supporting our hypothesis that larger species tend to use time-minimization strategies during both pre- and post-breeding migration. Our study highlights that body size plays an important role in shaping migratory behaviour. Larger migratory bird species are potentially time constrained during not only the pre- but also the post-breeding migration. Conservation of their habitats during both seasons may thus be crucial for averting further population declines.

  13. Anomalous permittivity in fine-grain barium titanate

    NASA Astrophysics Data System (ADS)

    Ostrander, Steven Paul

    Fine-grain barium titanate capacitors exhibit anomalously large permittivity. It is often observed that these materials will double or quadruple the room temperature permittivity of a coarse-grain counterpart. However, aside from a general consensus on this permittivity enhancement, the properties of the fine-grain material are poorly understood. This thesis examines the effect of grain size on dielectric properties of a self-consistent set of high density undoped barium titanate capacitors. This set included samples with grain sizes ranging from submicron to ~20 microns, and with densities generally above 95% of the theoretical. A single batch of well characterized powder was milled, dry-pressed then isostatically-pressed. Compacts were fast-fired, but sintering temperature alone was used to control the grain size. With this approach, the extrinsic influences are minimized within the set of samples, but more importantly, they are normalized between samples. That is, with a single batch of powder and with identical green processing, uniform impurity concentration is expected. The fine-grain capacitors exhibited a room temperature permittivity of ~5500 and dielectric losses of ~2%. The Curie temperature decreased by ~5 °C from that of the coarse-grain material, and the two ferroelectric-ferroelectric phase transition temperatures increased by ~10 °C. The grain size induced permittivity enhancement was only active in the tetragonal and orthorhombic phases. Strong dielectric anomalies were observed in samples with grain size as small as ~0.4 µm. It is suggested that the strong first-order character observed in the present data is related to control of microstructure and stoichiometry. Grain size effects on conductivity losses, ferroelectric losses, ferroelectric dispersion, Maxwell-Wagner dispersion, and dielectric aging of permittivity and loss were observed. For the fine-grain material, these observations suggest the suppression of domain wall motion below the Curie transition, and the suppression of conductivity above the Curie transition.

  14. Feline mitochondrial DNA sampling for forensic analysis: when enough is enough!

    PubMed

    Grahn, Robert A; Alhaddad, Hasan; Alves, Paulo C; Randi, Ettore; Waly, Nashwa E; Lyons, Leslie A

    2015-05-01

    Pet hair has a demonstrated value in resolving legal issues. Cat hair is chronically shed and it is difficult to leave a home with cats without some level of secondary transfer. The power of cat hair as an evidentiary resource may be underused because representative genetic databases are not available for exclusionary purposes. Mitochondrial control region databases are highly valuable for hair analyses and have been developed for the cat. In a representative worldwide data set, 83% of domestic cat mitotypes belong to one of twelve major types. Of the remaining 17%, 7.5% are unique within the published 1394 sample database. The current research evaluates the sample size necessary to establish a representative population for forensic comparison of the mitochondrial control region for the domestic cat. For most worldwide populations, randomly sampling 50 unrelated local individuals will achieve saturation at 95%. The 99% saturation is achieved by randomly sampling 60-170 cats, depending on the numbers of mitotypes available in the population at large. Likely due to the recent domestication of the cat and minimal localized population substructure, fewer cats are needed to reach practical saturation of a mitochondrial DNA control region database than for humans or dogs. Coupled with the available worldwide feline control region database of nearly 1400 cats, minimal local sampling will be required to establish an appropriate comparative representative database and achieve significant exclusionary power. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
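
    The saturation figures quoted above can be explored with a simple rarefaction-style simulation: draw random individuals from an assumed mitotype frequency distribution and record what fraction of the population's mitotype frequency mass the sample has captured. The sketch below uses a made-up frequency vector loosely shaped like the "12 major types plus a tail of rare types" description; it is not the authors' analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # assumed mitotype frequencies: 12 common types carrying ~83% of cats, plus a rare tail
    common = np.full(12, 0.83 / 12)
    rare = np.full(100, 0.17 / 100)
    freqs = np.concatenate([common, rare])
    freqs = freqs / freqs.sum()            # guard against floating-point drift

    def expected_coverage(n_cats: int, n_trials: int = 2000) -> float:
        """Average fraction of mitotype frequency mass captured by a random sample of n cats."""
        cov = 0.0
        for _ in range(n_trials):
            sample = rng.choice(len(freqs), size=n_cats, p=freqs)
            cov += freqs[np.unique(sample)].sum()
        return cov / n_trials

    for n in [25, 50, 100, 170]:
        print(f"{n:4d} cats sampled -> expected coverage {expected_coverage(n):.1%}")
    ```

    With this invented tail the coverage rises more slowly than the 95%-at-50-cats figure reported in the abstract; the point of the sketch is the procedure, and the curve depends strongly on how the rare-mitotype tail is modelled.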

  15. A study on the cytotoxicity of carbon-based materials

    DOE PAGES

    Saha, Dipendu; Heldt, Caryn L.; Gencoglu, Maria F.; ...

    2016-05-25

    With an aim to understand the origin and key contributing factors towards carbon-induced cytotoxicity, we have studied five different carbon samples with diverse surface area, pore width, shape and size, conductivity and surface functionality. All the carbon materials were characterized with surface area and pore size distribution, x-ray photoelectron spectroscopy (XPS) and electron microscopic imaging. We performed a cytotoxicity study in Caco-2 cells by colorimetric assay, oxidative stress analysis by reactive oxygen species (ROS) detection, cellular metabolic activity measurement by adenosine triphosphate (ATP) depletion and visualization of cellular internalization by TEM imaging. The carbon materials demonstrated a varying degree of cytotoxicity in contact with Caco-2 cells. The lowest cell survival rate was observed for nanographene, which possessed the smallest size amongst all the carbon samples under study. None of the carbons induced oxidative stress in the cells, as indicated by the ROS generation results. The cellular metabolic activity study revealed that the carbon materials caused ATP depletion in cells, and nanographene caused the highest depletion. Visual observation by TEM imaging indicated the cellular internalization of nanographene. This study confirmed that size is the key cause of carbon-induced cytotoxicity and that it probably acts through ATP depletion within the cell.

  16. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also offers two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those required by alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
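
    For readers unfamiliar with the sampling idea behind PLHS, the sketch below shows ordinary Latin hypercube sampling in plain NumPy: each of the d parameter ranges is cut into n equal strata and every stratum is sampled exactly once, with the strata shuffled independently per dimension. The progressive, slice-by-slice refinement of PLHS and the VARS metrics themselves are not reproduced here.

    ```python
    import numpy as np

    def latin_hypercube(n_samples: int, n_dims: int, seed=None) -> np.ndarray:
        """Basic Latin hypercube sample on the unit hypercube [0, 1)^n_dims."""
        rng = np.random.default_rng(seed)
        # one random point inside each of the n equal strata, per dimension
        pts = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
        # shuffle the stratum order independently in every dimension
        for d in range(n_dims):
            pts[:, d] = pts[rng.permutation(n_samples), d]
        return pts

    X = latin_hypercube(10, 3, seed=0)
    print(X.round(3))
    # check: every column hits each of the 10 strata [0, 0.1), [0.1, 0.2), ... exactly once
    print((np.sort((X * 10).astype(int), axis=0) == np.arange(10)[:, None]).all())
    ```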

  17. Transition to collective oscillations in finite Kuramoto ensembles

    NASA Astrophysics Data System (ADS)

    Peter, Franziska; Pikovsky, Arkady

    2018-03-01

    We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
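
    The quantity discussed above, the minimum over time of the Kuramoto order-parameter amplitude in a finite ensemble, is easy to estimate numerically. The sketch below integrates the standard Kuramoto model in its mean-field form with a simple Euler scheme for one frequency sample and reports that minimum together with the sample skewness and kurtosis; the ensemble size, coupling strength, and integration settings are arbitrary choices for illustration.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(7)
    N, K = 200, 1.8                          # ensemble size and coupling strength
    omega = rng.normal(0.0, 1.0, N)          # one finite sample of natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)

    dt, n_steps, transient = 0.01, 20000, 5000
    r_values = []
    for step in range(n_steps):
        z = np.mean(np.exp(1j * theta))                       # complex order parameter Z
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))   # mean-field Kuramoto update
        if step >= transient:
            r_values.append(r)

    print(f"frequency sample: skewness={skew(omega):.2f}, excess kurtosis={kurtosis(omega):.2f}")
    print(f"time-averaged |Z|={np.mean(r_values):.3f}, minimum |Z|={np.min(r_values):.3f}")
    ```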

  18. Studies on the electrical transport properties of carbon nanotube composites

    NASA Astrophysics Data System (ADS)

    Tarlton, Taylor Warren

    This work presents a probabilistic approach to model the electrical transport properties of carbon nanotube composite materials. A pseudo-random generation method is presented with the ability to generate 3-D samples with a variety of different configurations. Periodic boundary conditions are employed in the directions perpendicular to transport to minimize edge effects. Simulations produce values for drift velocity, carrier mobility, and conductivity in samples that account for geometrical features resembling those found in the lab. All results show excellent agreement with the well-known power law characteristic of percolation processes, which is used to compare across simulations. The effects of sample morphology, such as nanotube waviness and aspect ratio, and of agglomeration on charge transport within CNT composites are evaluated within this model. This study determines the optimum simulation box sizes that minimize size effects without rendering the simulation unaffordable. In addition, physical parameters within the model are characterized, involving various density functional theory calculations within Atomistix Toolkit. Finite element calculations have been performed to solve Maxwell's Equations for static fields in the COMSOL Multiphysics software package in order to better understand the behavior of the electric field within the composite material and to further improve the model in this work. The types of composites studied within this work are often considered for use in electromagnetic shielding, electrostatic reduction, or even monitoring structural changes due to compression, stretching, or damage through their effect on the conductivity. However, experimental work has shown that, depending on the processing technique, the electrical properties of specific composites can vary widely. Therefore, the goal of this work has been to form a model with the ability to accurately predict the conductive properties as a function of the physical characteristics of the composite material in order to aid in the design of these composites.
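
    The percolation power law mentioned above, σ ∝ (φ − φc)^t for filler fractions φ above the percolation threshold φc, can be fitted to simulated or measured conductivity data with a standard nonlinear least-squares routine. The sketch below does this on synthetic data; the threshold, exponent, and noise level are invented for illustration and are not results from this work.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    def percolation_law(phi, sigma0, phi_c, t):
        """Conductivity above the percolation threshold: sigma0 * (phi - phi_c)**t."""
        return sigma0 * np.clip(phi - phi_c, 1e-12, None) ** t

    # synthetic "simulation" data with an assumed threshold of 0.02 and exponent 1.9
    phi = np.linspace(0.025, 0.15, 25)
    sigma = percolation_law(phi, 50.0, 0.02, 1.9) * rng.lognormal(0.0, 0.05, phi.size)

    popt, _ = curve_fit(percolation_law, phi, sigma, p0=(10.0, 0.01, 2.0))
    print("fitted sigma0={:.1f}, phi_c={:.4f}, t={:.2f}".format(*popt))
    ```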

  19. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.

  20. Particle systems for adaptive, isotropic meshing of CAD models

    PubMed Central

    Levine, Joshua A.; Whitaker, Ross T.

    2012-01-01

    We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature and a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181
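
    The core idea of letting particles spread out by minimizing a pairwise energy can be shown in miniature: the sketch below performs plain gradient descent on a Gaussian repulsion energy for points in a 2D parametric square, which is roughly the spirit of the per-patch optimization described above. It is only an assumed toy version; the actual system also handles features, sizing fields, and world-space distances.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, sigma, step = 60, 0.12, 0.005
    pts = rng.random((n, 2))                  # particles in a unit parametric patch

    def energy_and_grad(p):
        """Pairwise Gaussian repulsion energy and its gradient."""
        diff = p[:, None, :] - p[None, :, :]            # (n, n, 2) displacement vectors
        d2 = (diff ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(w, 0.0)
        grad = -(w[:, :, None] * diff).sum(axis=1) / sigma ** 2
        return w.sum() / 2, grad

    for it in range(500):
        e, g = energy_and_grad(pts)
        pts = np.clip(pts - step * g, 0.0, 1.0)         # stay inside the patch
    print(f"final repulsion energy: {e:.3f}")
    ```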

  1. Experimental evidence for stochastic switching of supercooled phases in NdNiO3 nanostructures

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Rajeev, K. P.; Alonso, J. A.

    2018-03-01

    A first-order phase transition is a dynamic phenomenon. In a multi-domain system, the presence of multiple domains of coexisting phases averages out the dynamical effects, making it nearly impossible to predict the exact nature of phase transition dynamics. Here, we report the metal-insulator transition in samples of sub-micrometer size NdNiO3 where the effect of averaging is minimized by restricting the number of domains under study. We observe the presence of supercooled metallic phases with supercooling of 40 K or more. The transformation from the supercooled metallic to the insulating state is a stochastic process that happens at different temperatures and times in different experimental runs. The experimental results are understood without incorporating material specific properties, suggesting that the behavior is of universal nature. The size of the sample needed to observe individual switching of supercooled domains, the degree of supercooling, and the time-temperature window of switching are expected to depend on the parameters such as quenched disorder, strain, and magnetic field.

  2. Titanium distribution in swimming pool water is dominated by dissolved species.

    PubMed

    David Holbrook, R; Motabar, Donna; Quiñones, Oscar; Stanford, Benjamin; Vanderford, Brett; Moss, Donna

    2013-10-01

    The increased use of titanium dioxide nanoparticles (nano-TiO2) in consumer products such as sunscreen has raised concerns about their possible risk to human and environmental health. In this work, we report the occurrence, size fractionation and behavior of titanium (Ti) in a children's swimming pool. Size-fractionated samples were analyzed for Ti using ICP-MS. Total titanium concentrations ([Ti]) in the pool water ranged between 21 μg/L and 60 μg/L and increased throughout the 101-day sampling period while [Ti] in tap water remained relatively constant. The majority of [Ti] was found in the dissolved phase (<1 kDa), with only a minor fraction of total [Ti] being considered either particulate or microparticulate. Simple models suggest that evaporation may account for the observed variation in [Ti], while sunscreen may be a relevant source of particulate and microparticule Ti. Compared to diet, incidental ingestion of nano-Ti from swimming pool water is minimal. Published by Elsevier Ltd.

  3. Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.

    2017-11-01

    The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.

  4. Rapid and non-invasive analysis of deoxynivalenol in durum and common wheat by Fourier-Transform Near Infrared (FT-NIR) spectroscopy.

    PubMed

    De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A

    2009-06-01

    Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 microm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 microg kg(-1)) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 microg kg(-1)). Coefficients of determination (r(2)) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r(2) = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r(2) = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 microg kg(-1) DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
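    As a rough illustration of the calibration/validation workflow described above (PLS regression, RMSECV from cross-validation on the calibration set, RMSEP and RER from an external validation set), the following sketch uses synthetic stand-ins for the FT-NIR spectra and DON reference values; it is not the published calibration.

    ```python
    # Sketch of a PLS calibration with cross-validation (RMSECV) and external
    # validation (RMSEP); data are synthetic surrogates for spectra and DON values.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split, cross_val_predict

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 200, 500
    X = rng.normal(size=(n_samples, n_wavelengths))                   # surrogate spectra
    y = X[:, :10].sum(axis=1) * 50 + 1000 + rng.normal(0, 300, n_samples)  # surrogate DON

    # External validation split (calibration vs. validation sets)
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=5)

    # RMSECV: cross-validated predictions on the calibration set
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y_cal - y_cv) ** 2))

    # RMSEP: predictions on the independent validation set
    pls.fit(X_cal, y_cal)
    y_pred = pls.predict(X_val).ravel()
    rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))

    # Range error ratio (RER): reference range divided by RMSEP
    rer = (y_val.max() - y_val.min()) / rmsep
    print(f"RMSECV={rmsecv:.0f}  RMSEP={rmsep:.0f}  RER={rer:.1f}")
    ```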

  5. Comparative forensic soil analysis of New Jersey state parks using a combination of simple techniques with multivariate statistics.

    PubMed

    Bonetti, Jennifer; Quarino, Lawrence

    2014-05-01

    This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
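    A minimal sketch of the analysis pattern described above: combine the simple soil measurements into one feature matrix, reduce with PCA, classify with a discriminant analysis (linear discriminant analysis is used here as a stand-in for CDA), and score by hold-one-out cross-validation. The feature matrix and park labels are synthetic placeholders.

    ```python
    # Sketch: PCA + discriminant analysis of soil-sample features, scored by
    # leave-one-out ("hold-one-out") cross-validation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_parks, per_park, n_features = 12, 5, 8
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(per_park, n_features))
                   for i in range(n_parks)])          # synthetic particle-size/pH/LOI data
    y = np.repeat(np.arange(n_parks), per_park)       # park labels

    model = make_pipeline(StandardScaler(), PCA(n_components=5),
                          LinearDiscriminantAnalysis())

    errors = 0
    for train, test in LeaveOneOut().split(X):
        model.fit(X[train], y[train])
        errors += int(model.predict(X[test])[0] != y[test][0])

    print(f"hold-one-out error rate: {100 * errors / len(y):.2f}%")
    ```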

  6. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
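    The "summing in quadrature" step described above can be written out explicitly; the symbols below are illustrative, not the report's notation.

    ```latex
    % Total uncertainty of an EDI/EWI measurement from the two independent error sources:
    %   sigma_space(n_v)      error from sampling a limited number of verticals n_v
    %   sigma_time(n_v, n_t)  error from limited time averaging over n_v verticals
    %                         with n_t transits each
    \sigma_{\mathrm{total}}(n_v, n_t)
      = \sqrt{\sigma_{\mathrm{space}}^{2}(n_v) + \sigma_{\mathrm{time}}^{2}(n_v, n_t)}
    ```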

  7. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single-pixel.

  8. Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K.

    2001-05-01

    Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients causes underestimation of fluxes by less than 15% in most cases, and can be partially corrected for with curve fitting and/or can be minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes and numbers and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and were within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they mostly were fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.
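    The resampling check quoted above (means of 8 of 36 chamber fluxes compared with the full-population mean) is easy to reproduce in outline; the sketch below uses synthetic lognormal fluxes rather than the tropical forest and pasture data.

    ```python
    # Sketch: how often does the mean of n randomly chosen chamber fluxes fall
    # within 10% / 25% of the full-population mean?  Flux values are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    fluxes = rng.lognormal(mean=1.5, sigma=0.4, size=36)   # hypothetical CO2 fluxes
    full_mean = fluxes.mean()

    n_draws, n_sub = 10_000, 8
    sub_means = np.array([rng.choice(fluxes, size=n_sub, replace=False).mean()
                          for _ in range(n_draws)])

    within_25 = np.mean(np.abs(sub_means - full_mean) <= 0.25 * full_mean)
    within_10 = np.mean(np.abs(sub_means - full_mean) <= 0.10 * full_mean)
    print(f"within 25%: {within_25:.0%}   within 10%: {within_10:.0%}")
    ```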

  9. Optimized Geometry for Superconducting Sensing Coils

    NASA Technical Reports Server (NTRS)

    Eom, Byeong Ho; Pananen, Konstantin; Hahn, Inseob

    2008-01-01

    An optimized geometry has been proposed for superconducting sensing coils that are used in conjunction with superconducting quantum interference devices (SQUIDs) in magnetic resonance imaging (MRI), magnetoencephalography (MEG), and related applications in which magnetic fields of small dipoles are detected. In designing a coil of this type, as in designing other sensing coils, one seeks to maximize the sensitivity of the detector of which the coil is a part, subject to geometric constraints arising from the proximity of other required equipment. In MRI or MEG, the main benefit of maximizing the sensitivity would be to enable minimization of measurement time. In general, to maximize the sensitivity of a detector based on a sensing coil coupled with a SQUID sensor, it is necessary to maximize the magnetic flux enclosed by the sensing coil while minimizing the self-inductance of this coil. Simply making the coil larger may increase its self-inductance and does not necessarily increase sensitivity because it also effectively increases the distance from the sample that contains the source of the signal that one seeks to detect. Additional constraints on the size and shape of the coil and on the distance from the sample arise from the fact that the sample is at room temperature but the coil and the SQUID sensor must be enclosed within a cryogenic shield to maintain superconductivity.
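    The trade-off described above, maximizing the flux enclosed by the pickup coil while minimizing its self-inductance, is often summarized with the standard flux-transformer relation below; this is a textbook expression with assumed symbols, not a result taken from the cited work.

    ```latex
    % Flux coupled into the SQUID by a flux transformer:
    %   Phi_p  flux threading the pickup coil     L_p  pickup-coil self-inductance
    %   L_i    input-coil inductance              M_i  input-coil/SQUID mutual inductance
    \Phi_{\mathrm{SQUID}} = \frac{M_i}{L_p + L_i}\,\Phi_p
    % For a fixed input circuit, sensitivity improves by increasing \Phi_p while
    % keeping L_p small, which is the optimization discussed above.
    ```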

  10. Use of a single-bowl continuous-flow centrifuge for dewatering suspended sediments: effect on sediment physical and chemical characteristics

    USGS Publications Warehouse

    Rees, T.F.; Leenheer, J.A.; Ranville, J.F.

    1991-01-01

    Sediment-recovery efficiency of 86-91% is comparable to that of other types of CFC units. The recovery efficiency is limited by the particle-size distribution of the feed water and by the limiting particle diameter that is retained in the centrifuge bowl. Contamination by trace metals and organics is minimized by coating all surfaces that come in contact with the sample with either FEP or PFA Teflon and using a removable FEP Teflon liner in the centrifuge bowl. -from Authors

  11. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2(8) programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
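    The efficiency measure discussed above, effective sample size, has a simple standard estimator from the importance weights (the Kish formula); the sketch below applies it to hypothetical weights from two proposal schemes and is not code from the Coalescent framework, which is written in Java.

    ```python
    # Sketch: effective sample size (ESS) of an importance-sampling run from its
    # (unnormalized) importance weights.
    import numpy as np

    def effective_sample_size(weights):
        """Kish effective sample size: (sum w)^2 / sum(w^2)."""
        w = np.asarray(weights, dtype=float)
        return w.sum() ** 2 / np.square(w).sum()

    # Hypothetical importance weights from two proposal schemes of equal length
    rng = np.random.default_rng(3)
    w_scheme_a = rng.lognormal(0.0, 0.5, size=1000)   # well-matched proposal
    w_scheme_b = rng.lognormal(0.0, 2.0, size=1000)   # poorly matched proposal

    print(effective_sample_size(w_scheme_a))  # close to 1000
    print(effective_sample_size(w_scheme_b))  # far below 1000
    # As the abstract notes, ESS alone ignores running time; rankings can change
    # once ESS is normalized per unit of CPU time (e.g., ESS / seconds).
    ```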

  12. A procedure for partitioning bulk sediments into distinct grain-size fractions for geochemical analysis

    USGS Publications Warehouse

    Barbanti, A.; Bothner, Michael H.

    1993-01-01

    A method to separate sediments into discrete size fractions for geochemical analysis has been tested. The procedures were chosen to minimize the destruction or formation of aggregates and involved gentle sieving and settling of wet samples. Freeze-drying and sonication pretreatments, known to influence aggregates, were used for comparison. Freeze-drying was found to increase the silt/clay ratio by an average of 180 percent compared to analysis of a wet sample that had been wet sieved only. Sonication of a wet sample decreased the silt/clay ratio by 51 percent. The concentrations of metals and organic carbon in the separated fractions changed depending on the pretreatment procedures in a manner consistent with the hypothesis that aggregates consist of fine-grained organic- and metal-rich particles. The coarse silt fraction of a freeze-dried sample contained 20–44 percent higher concentrations of Zn, Cu, and organic carbon than the coarse silt fraction of the wet sample. Sonication resulted in concentrations of these analytes that were 18–33 percent lower in the coarse silt fraction than found in the wet sample. Sonication increased the concentration of lead in the clay fraction by an average of 40 percent compared to an unsonicated sample. Understanding the magnitude of change caused by different analysis protocols is an aid in designing future studies that seek to interpret the spatial distribution of contaminated sediments and their transport mechanisms.

  13. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
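    A stripped-down version of the simulation design described above can be sketched as follows: generate truly linear responses, fit a brute-force broken-stick model against a straight line, and count how often the more complex model is (spuriously) preferred. The AIC-based broken-stick detector is a simplified stand-in for PQR, NCPA, and SiZer, and all numbers are illustrative.

    ```python
    # Sketch: spurious ("type I") threshold detection rate when the true response
    # is linear, using a brute-force broken-stick fit compared with a line by AIC.
    import numpy as np

    rng = np.random.default_rng(4)

    def sse_of_fit(X, y):
        """Residual sum of squares of an ordinary least-squares fit."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        return float(resid @ resid)

    def aic(sse, n, k):
        return n * np.log(sse / n) + 2 * k

    def detects_threshold(x, y):
        n = len(x)
        sse_lin = sse_of_fit(np.column_stack([np.ones(n), x]), y)
        best_sse = np.inf
        for bp in np.quantile(x, np.linspace(0.1, 0.9, 17)):   # candidate breakpoints
            X_bs = np.column_stack([np.ones(n), x, np.maximum(0.0, x - bp)])
            best_sse = min(best_sse, sse_of_fit(X_bs, y))
        # k = regression coefficients + error variance (+ breakpoint for the broken stick)
        return aic(best_sse, n, k=5) < aic(sse_lin, n, k=3)

    n_sims, n_obs, false_hits = 500, 60, 0
    for _ in range(n_sims):
        x = np.sort(rng.uniform(0, 1, n_obs))            # environmental gradient (uniform SED)
        y = 2.0 - 1.5 * x + rng.normal(0, 0.5, n_obs)    # truly linear community response
        false_hits += detects_threshold(x, y)
    print(f"spurious threshold detections: {false_hits / n_sims:.1%}")
    ```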

  14. High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu

    In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30 folds (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.

  15. High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.

    PubMed

    Ren, Juan; Zou, Qingze

    2014-07-01

    In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30 folds (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.

  16. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    PubMed

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
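    The mechanism argued above, trait-dependent catchability biasing an otherwise random sample, can be illustrated with a toy simulation; the capture probabilities below are hypothetical and merely mimic the reported up-to-twofold difference.

    ```python
    # Sketch: how a behavior-linked catch probability biases the trait composition
    # of a "random" sample, even when individuals are seeded in equal proportions.
    import numpy as np

    rng = np.random.default_rng(5)
    n_fish = 3000
    growth = rng.choice(["slow", "intermediate", "fast"], size=n_fish)  # equal seeding

    # Capture probability increases with growth type (boldness/activity proxy)
    p_catch = np.where(growth == "slow", 0.10,
              np.where(growth == "intermediate", 0.15, 0.20))
    caught = rng.random(n_fish) < p_catch

    for g in ["slow", "intermediate", "fast"]:
        in_pop = np.mean(growth == g)
        in_sample = np.mean(growth[caught] == g)
        print(f"{g:>12}: population {in_pop:.2f}  sample {in_sample:.2f}")
    ```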

  17. Flow cytometry for feline lymphoma: a retrospective study regarding pre-analytical factors possibly affecting the quality of samples.

    PubMed

    Martini, Valeria; Bernardi, Serena; Marelli, Priscilla; Cozzi, Marzia; Comazzi, Stefano

    2018-06-01

    Objectives Flow cytometry (FC) is becoming increasingly popular among veterinary oncologists for the diagnosis of lymphoma or leukaemia. It is accurate, fast and minimally invasive. Several studies of FC have been carried out in canine oncology and applied with great results, whereas there is limited knowledge and use of this technique in feline patients. This is mainly owing to the high prevalence of intra-abdominal lymphomas in this species and the difficulty associated with the diagnostic procedures needed to collect the sample. The purpose of the present study is to investigate whether any pre-analytical factor might affect the quality of suspected feline lymphoma samples for FC analysis. Methods Ninety-seven consecutive samples of suspected feline lymphoma were retrospectively selected from the authors' institution's FC database. The referring veterinarians were contacted and interviewed about several different variables, including signalment, appearance of the lesion, features of the sampling procedure and the experience of veterinarians performing the sampling. Statistical analyses were performed to assess the possible influence of these variables on the cellularity of the samples and the likelihood of it being finally processed for FC. Results Sample cellularity is a major factor in the likelihood of the sample being processed. Moreover, sample cellularity was significantly influenced by the needle size, with 21 G needles providing the highest cellularity. Notably, the sample cellularity and the likelihood of being processed did not vary between peripheral and intra-abdominal lesions. Approximately half of the cats required pharmacological restraint. Side effects were reported in one case only (transient swelling after peripheral lymph node sampling). Conclusions and relevance FC can be safely applied to cases of suspected feline lymphomas, including intra-abdominal lesions. A 21 G needle should be preferred for sampling. This study provides the basis for the increased use of this minimally invasive, fast and cost-effective technique in feline medicine.

  18. An unbiased adaptive sampling algorithm for the exploration of RNA mutational landscapes under evolutionary pressure.

    PubMed

    Waldispühl, Jérôme; Ponty, Yann

    2011-11-01

    The analysis of the relationship between sequences and structures (i.e., how mutations affect structures and reciprocally how structures influence mutations) is essential to decipher the principles driving molecular evolution, to infer the origins of genetic diseases, and to develop bioengineering applications such as the design of artificial molecules. Because their structures can be predicted from the sequence data only, RNA molecules provide a good framework to study this sequence-structure relationship. We recently introduced a suite of algorithms called RNAmutants which allows a complete exploration of RNA sequence-structure maps in polynomial time and space. Formally, RNAmutants takes an input sequence (or seed) to compute the Boltzmann-weighted ensembles of mutants with exactly k mutations, and sample mutations from these ensembles. However, this approach suffers from major limitations. Indeed, since the Boltzmann probabilities of the mutations depend on the free energy of the structures, RNAmutants has difficulty sampling mutant sequences with low G+C-contents. In this article, we introduce an unbiased adaptive sampling algorithm that enables RNAmutants to sample regions of the mutational landscape poorly covered by classical algorithms. We applied these methods to sample mutations with low G+C-contents. These adaptive sampling techniques can be easily adapted to explore other regions of the sequence and structural landscapes which are difficult to sample. Importantly, these algorithms come at a minimal computational cost. We demonstrate the insights offered by these techniques on studies of complete RNA sequence-structure maps of sizes up to 40 nucleotides. Our results indicate that the G+C-content has a strong influence on the size and shape of the evolutionary accessible sequence and structural spaces. In particular, we show that low G+C-contents favor the appearance of internal loops and thus possibly the synthesis of tertiary structure motifs. On the other hand, high G+C-contents significantly reduce the size of the evolutionary accessible mutational landscapes.

  19. Native Environment Modulates Leaf Size and Response to Simulated Foliar Shade across Wild Tomato Species

    PubMed Central

    Filiault, Daniele L.; Kumar, Ravi; Jiménez-Gómez, José M.; Schrager, Amanda V.; Park, Daniel S.; Peng, Jie; Sinha, Neelima R.; Maloof, Julin N.

    2012-01-01

    The laminae of leaves optimize photosynthetic rates by serving as a platform for both light capture and gas exchange, while minimizing water losses associated with thermoregulation and transpiration. Many have speculated that plants maximize photosynthetic output and minimize associated costs through leaf size, complexity, and shape, but a unifying theory linking the plethora of observed leaf forms with the environment remains elusive. Additionally, the leaf itself is a plastic structure, responsive to its surroundings, further complicating the relationship. Despite extensive knowledge of the genetic mechanisms underlying angiosperm leaf development, little is known about how phenotypic plasticity and selective pressures converge to create the diversity of leaf shapes and sizes across lineages. Here, we use wild tomato accessions, collected from locales with diverse levels of foliar shade, temperature, and precipitation, as a model to assay the extent of shade avoidance in leaf traits and the degree to which these leaf traits correlate with environmental factors. We find that leaf size is correlated with measures of foliar shade across the wild tomato species sampled and that leaf size and serration correlate in a species-dependent fashion with temperature and precipitation. We use far-red induced changes in leaf length as a proxy measure of the shade avoidance response, and find that shade avoidance in leaves negatively correlates with the level of foliar shade recorded at the point of origin of an accession. The direction and magnitude of these correlations varies across the leaf series, suggesting that heterochronic and/or ontogenic programs are a mechanism by which selective pressures can alter leaf size and form. This study highlights the value of wild tomato accessions for studies of both morphological and light-regulated development of compound leaves, and promises to be useful in the future identification of genes regulating potentially adaptive plastic leaf traits. PMID:22253737

  20. Single-pipetting microfluidic assay device for rapid detection of Salmonella from poultry package.

    PubMed

    Fronczek, Christopher F; You, David J; Yoon, Jeong-Yeol

    2013-02-15

    A direct, sensitive, near-real-time, handheld optical immunoassay device was developed to detect Salmonella typhimurium in the naturally occurring liquid from fresh poultry packages (hereafter "chicken matrix"), with just single pipetting of sample (i.e., no filtration, culturing and/or isolation, thus reducing the assay time and the error associated with them). Carboxylated, polystyrene microparticles were covalently conjugated with anti-Salmonella, and the immunoagglutination due to the presence of Salmonella was detected by reading the Mie scatter signals from the microfluidic channels using a handheld device. The presence of chicken matrix did not affect the light scatter signal, since the optical parameters (particle size d, wavelength of incident light λ and scatter angle θ) were optimized to minimize the effect of sample matrix (animal tissues and blood proteins, etc.). The sample was loaded into a microfluidic chip that was split into two channels, one pre-loaded with vacuum-dried, antibody-conjugated particles and the other with vacuum-dried, bovine serum albumin-conjugated particles. This eliminated the need for a separate negative control, effectively minimizing chip-to-chip and sample-to-sample variations. Particles and the sample were diffused in-channel through chemical agitation by Tween 80, also vacuum-dried within the microchannels. Sequential mixing of the sample to the reagents under a strict laminar flow condition synergistically improved the reproducibility and linearity of the assay. In addition, dried particles were shown to successfully detect lower Salmonella concentrations for up to 8 weeks. The handheld device contains simplified circuitry eliminating unnecessary adjustment stages, providing a stable signal, thus maximizing sensitivity. Total assay time was 10 min, and the detection limit 10 CFU mL(-1) was observed in all matrices, demonstrating the suitability of this device for field assays. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote corresponding to the curve sample diversity/biomass [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained the confidence intervals as a particularization of the functional (h-phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example used to demonstrate the possibilities of this method, and only for illustrative purposes, data about a study on the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].
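    One way to picture the quantitative minimal area described above is to fit a saturating diversity/area curve and read off the area at which expected diversity approaches its horizontal asymptote; the functional form and the 95% criterion in this sketch are illustrative choices, not the expression derived in the paper.

    ```python
    # Sketch: fit a saturating diversity/area curve and report the area at which
    # the fitted curve reaches 95% of its asymptote.
    import numpy as np
    from scipy.optimize import curve_fit

    def diversity_curve(area, h_inf, k):
        """Saturating curve with horizontal asymptote h_inf (illustrative form)."""
        return h_inf * area / (k + area)

    # Hypothetical sample diversities (e.g., Shannon H') at increasing sampled areas
    area = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])   # m^2
    h_obs = np.array([1.1, 1.6, 2.1, 2.6, 2.9, 3.1, 3.2])   # diversity values

    (h_inf, k), _ = curve_fit(diversity_curve, area, h_obs, p0=(3.5, 0.5))

    # 0.95 = A / (k + A)  =>  A = 19 k
    minimal_area = 19.0 * k
    print(f"asymptotic diversity ~ {h_inf:.2f}, minimal area ~ {minimal_area:.2f} m^2")
    ```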

  2. Developmental changes in maternal education and minimal exposure effects on vocabulary in English- and Spanish-learning toddlers.

    PubMed

    Friend, Margaret; DeAnda, Stephanie; Arias-Trejo, Natalia; Poulin-Dubois, Diane; Zesiger, Pascal

    2017-12-01

    The current research follows up on two previous findings: that children with minimal dual-language exposure have smaller receptive vocabularies at 16 months of age and that maternal education is a predictor of vocabulary when the dominant language is English but not when it is Spanish. The current study extends this research to 22-month-olds to assess the developmental effects of minimal exposure and maternal education on direct and parent-report measures of vocabulary size. The effects of minimal exposure on vocabulary size are no longer present at 22 months of age, whereas maternal education effects remain but only for English speakers. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables. (2) The resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
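    For the bootstrapping arm of the comparison above, a percentile-bootstrap CI for a copula parameter can be sketched as follows; the synthetic duration/severity pairs and the Kendall-tau inversion for the Clayton parameter (theta = 2*tau / (1 - tau)) are illustrative, not the study's fitting procedure.

    ```python
    # Sketch: percentile-bootstrap confidence interval for a Clayton copula
    # parameter estimated from (duration, severity) pairs via Kendall's tau.
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(6)
    n = 100                                                # "medium" sample size
    z = rng.normal(size=n)
    duration = np.exp(0.8 * z + rng.normal(0, 0.5, n))     # correlated, skewed pair
    severity = np.exp(0.8 * z + rng.normal(0, 0.5, n))

    def clayton_theta(d, s):
        tau, _ = kendalltau(d, s)
        return 2.0 * tau / (1.0 - tau)                     # standard Clayton relation

    boot = np.empty(2000)
    for b in range(boot.size):
        idx = rng.integers(0, n, size=n)                   # resample pairs with replacement
        boot[b] = clayton_theta(duration[idx], severity[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"theta_hat={clayton_theta(duration, severity):.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
    ```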

  4. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED LABELS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  5. Use of simulation to compare the performance of minimization with stratified blocked randomization.

    PubMed

    Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John

    2009-01-01

    Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before the balance advantage is lost. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
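    A minimal sketch of the kind of allocation rule being simulated above (Pocock-Simon-style minimization with a biased coin): each new patient is assigned to the arm that would minimize total marginal imbalance across the stratification factors with probability p, and to the other arm otherwise. The factors and the value of p are illustrative.

    ```python
    # Sketch: biased-coin minimization over two stratification factors, two arms.
    import random
    from collections import defaultdict

    ARMS = ("A", "B")

    def minimization_assign(counts, patient_factors, p_best=0.8, rng=random):
        """counts[(factor, level, arm)] -> number already assigned to that cell."""
        imbalance = {}
        for arm in ARMS:
            total = 0
            for factor, level in patient_factors.items():
                hypothetical = {a: counts[(factor, level, a)] + (a == arm) for a in ARMS}
                total += max(hypothetical.values()) - min(hypothetical.values())
            imbalance[arm] = total
        best = min(ARMS, key=lambda a: imbalance[a])
        other = ARMS[1 - ARMS.index(best)]
        if imbalance[best] == imbalance[other]:
            chosen = rng.choice(ARMS)                      # tie: pure randomization
        else:
            chosen = best if rng.random() < p_best else other
        for factor, level in patient_factors.items():
            counts[(factor, level, chosen)] += 1
        return chosen

    # Example: two factors (site, sex), 200 simulated patients
    rng = random.Random(7)
    counts = defaultdict(int)
    for _ in range(200):
        patient = {"site": rng.choice(["1", "2", "3"]), "sex": rng.choice(["F", "M"])}
        minimization_assign(counts, patient, p_best=0.8, rng=rng)
    print({a: sum(v for (f, l, arm), v in counts.items() if f == "site" and arm == a)
           for a in ARMS})
    ```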

  6. Silicon microneedle array for minimally invasive human health monitoring

    NASA Astrophysics Data System (ADS)

    Smith, Rosemary L.; Collins, Scott D.; Duy, Janice; Minogue, Timothy D.

    2018-02-01

    A silicon microneedle array with integrated microfluidic channels is presented, which is designed to extract dermal interstitial fluid (ISF) for biochemical analysis. ISF is a cell-free biofluid that is known to contain many of the same constituents as blood plasma, but the scope and dynamics of biomarker similarities are known for only a few components, most notably glucose. Dermal ISF is accessible just below the outer skin layer (epidermis), which can be reached and extracted with minimal sensation and tissue trauma by using a microneedle array. The microneedle arrays presented here are being developed to extract dermal ISF for off-chip profiling of nucleic acid constituents in order to identify potential biomarkers of disease. In order to assess sample volume requirements, preliminary RNA profiling was performed with suction blister ISF. The microneedles are batch fabricated using established silicon technology (low cost), are small in size, and can be integrated with sensors for on-chip analysis. This approach portends a more rapid, less expensive, self-administered assessment of human health than is currently achievable with blood sampling, especially in non-clinical and austere settings. Ultimately, a wearable device for monitoring a person's health in any setting is envisioned.

  7. Feasibility Studies on Pipeline Disposal of Concentrated Copper Tailings Slurry for Waste Minimization

    NASA Astrophysics Data System (ADS)

    Senapati, Pradipta Kumar; Mishra, Barada Kanta

    2017-06-01

    The conventional lean phase copper tailings slurry disposal systems create pollution all around the disposal area through seepage and flooding of waste slurry water. In order to reduce water consumption and minimize pollution, the pipeline disposal of these waste slurries at high solids concentrations may be considered as a viable option. The paper presents the rheological and pipeline flow characteristics of copper tailings samples in the solids concentration range of 65-72 % by weight. The tailings slurry indicated non-Newtonian behaviour at these solids concentrations and the rheological data were best fitted by a Bingham plastic model. The influence of solids concentration on yield stress and plastic viscosity for the copper tailings samples was discussed. Using a high concentration test loop, pipeline experiments were conducted in a 50 mm nominal bore (NB) pipe by varying the pipe flow velocity from 1.5 to 3.5 m/s. A non-Newtonian Bingham plastic pressure drop model predicted the experimental data reasonably well for the concentrated tailings slurry. The pressure drop model was applied to larger pipe sizes, and the operating conditions for pipeline disposal of concentrated copper tailings slurry in a 200 mm NB pipe were discussed with respect to specific power consumption.
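    For laminar pipe flow of a Bingham plastic, the pressure gradient at a target velocity follows from the Buckingham-Reiner equation, which the sketch below solves numerically; the yield stress and plastic viscosity are placeholder values, not the fitted rheology of these tailings, and the published pressure drop model for the test-loop data may differ.

    ```python
    # Sketch: solve the Buckingham-Reiner equation for the pressure gradient of a
    # Bingham plastic in laminar pipe flow at a target mean velocity.
    import math
    from scipy.optimize import brentq

    R = 0.025        # pipe radius, m (50 mm NB)
    L = 1.0          # unit length, so dp is a pressure gradient in Pa/m
    tau_y = 15.0     # Bingham yield stress, Pa (assumed)
    mu_p = 0.05      # plastic viscosity, Pa.s (assumed)
    V = 2.5          # target mean velocity, m/s
    Q_target = V * math.pi * R**2

    def flow_rate(dp):
        """Volumetric flow rate (m^3/s) from the Buckingham-Reiner equation."""
        tau_w = R * dp / (2.0 * L)          # wall shear stress
        if tau_w <= tau_y:
            return 0.0                      # unsheared plug fills the pipe: no flow
        r = tau_y / tau_w
        return (math.pi * R**4 * dp / (8.0 * mu_p * L)) * (1.0 - 4.0 * r / 3.0 + r**4 / 3.0)

    dp = brentq(lambda p: flow_rate(p) - Q_target, 1.0, 1.0e7)
    print(f"pressure gradient ~ {dp / 1000:.1f} kPa/m at {V} m/s")
    ```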

  8. Transmission Electron Microscopy of Vacuum Sensitive, Radiation Sensitive, and Structurally Delicate Materials

    NASA Astrophysics Data System (ADS)

    Levin, Barnaby

    The transmission electron microscope (TEM) is a powerful tool for characterizing the nanoscale and atomic structure of materials, offering insights into their fundamental physical properties. However, TEM characterization requires very thin samples of material to be placed in a high vacuum environment, and exposed to electron radiation. The high vacuum will induce some materials to evaporate or sublimate, preventing them from being accurately characterized, radiation may damage the sample, causing mass loss, or altering its structure, and structurally delicate samples may collapse and break apart when they are thinned for TEM imaging. This dissertation discusses three different projects in which each of these three difficulties pose challenges to TEM characterization of samples. Firstly, we outline strategies for minimizing radiation damage when characterizing materials in TEM at atomic resolution. We consider types of radiation damage, such as vacancy enhanced displacement, that are not included in some previous discussions of beam damage, and we consider how to minimize damage when using new imaging techniques such as annular bright-field scanning TEM. Our methodology emphasizes the general principle that variation of both signal strength and damage cross section must be considered when choosing an experimental electron beam voltage to minimize damage. Secondly, we consider samples containing sulfur, which is prone to sublimation in high vacuum. TEM is routinely used to attempt to characterize the sulfur distribution in lithium-sulfur battery electrodes, but sublimation artifacts can give misleading results. We demonstrate that sulfur sublimation can be suppressed by using cryogenic TEM to characterize sulfur at very low temperatures, or by using the recently developed airSEM to characterize sulfur without exposing it to vacuum. Finally, we discuss the characterization of aging cadmium yellow paint from early 20th century art masterpieces. The binding medium holding paint particles together bends and curls as sample thickness is reduced to 100 nm, making high resolution characterization challenging. We acquire lattice resolution images of the pigment particles through the binder using high voltage zero-loss energy filtered TEM, allowing us to measure the pigment particle size and determine the pigment crystal structure, providing insight into why the paint is aging and how it was synthesized.

  9. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatic pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and density and grain size of the samples were measured. Afterward, the ranges of T1 and T2 for two-step sintering were determined. Effects on microstructure of the different routes, consisting of two-step sintering and conventional sintering, were discussed. The influence of T1 and/or T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, finer microstructure of higher density and smaller grain could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not strongly related to T1. However, density was dependent on T2, and grain size was minimally influenced. Two-step sintering could ensure a sintering body with high density and small grain, which is good for optimizing the microstructure of dental zirconia ceramics.

  10. Waste Minimization Assessment for Multilayered Printed Circuit Board Manufacturing

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manu facturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at s...

  11. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A PAINT MANUFACTURING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  12. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF REFURBISHED RAILCAR ASSEMBLIES

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected ...

  13. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PROTOTYPE PRINTED CIRCUIT BOARDS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  14. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CAN-MANUFACTURING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at ...

  15. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SPEED REDUCTION EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  16. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CUSTOM MOLDED PLASTIC PRODUCTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected ...

  17. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A BUMPER REFINISHING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altabet, Y. Elia; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu; Stillinger, Frank H.

    In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρ_S. The tensile limit at ρ_S is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρ_S is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.

  19. Imaging samples larger than the field of view: the SLS experience

    NASA Astrophysics Data System (ADS)

    Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco

    2017-06-01

    Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV merging volumetric datasets obtained by region-of-interest tomographies in different 3D positions of the sample with a minimal amount of artefacts and with the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.

  20. Methods for Investigating Mercury Speciation, Transport, Methylation, and Bioaccumulation in Watersheds Affected by Historical Mining

    NASA Astrophysics Data System (ADS)

    Alpers, C. N.; Marvin-DiPasquale, M. C.; Fleck, J.; Ackerman, J. T.; Eagles-Smith, C.; Stewart, A. R.; Windham-Myers, L.

    2016-12-01

    Many watersheds in the western U.S. have mercury (Hg) contamination from historical mining of Hg and precious metals (gold and silver), which were concentrated using Hg amalgamation (mid 1800's to early 1900's). Today, specialized sampling and analytical protocols for characterizing Hg and methylmercury (MeHg) in water, sediment, and biota generate high-quality data to inform management of land, water, and biological resources. Collection of vertically and horizontally integrated water samples in flowing streams and use of a Teflon churn splitter or cone splitter ensure that samples and subsamples are representative. Both dissolved and particulate components of Hg species in water are quantified because each responds to different hydrobiogeochemical processes. Suspended particles trapped on pre-combusted (Hg-free) glass- or quartz-fiber filters are analyzed for total mercury (THg), MeHg, and reactive divalent mercury. Filtrates are analyzed for THg and MeHg to approximate the dissolved fraction. The sum of concentrations in particulate and filtrate fractions represents whole water, equivalent to an unfiltered sample. This approach improves upon analysis of filtered and unfiltered samples and computation of particulate concentration by difference; volume filtered is adjusted based on suspended-sediment concentration to minimize particulate non-detects. Information from bed-sediment sampling is enhanced by sieving into multiple size fractions and determining detailed grain-size distribution. Wet sieving ensures particle disaggregation; sieve water is retained and fines are recovered by centrifugation. Speciation analysis by sequential extraction and examination of heavy mineral concentrates by scanning electron microscopy provide additional information regarding Hg mineralogy and geochemistry. Biomagnification of MeHg in food webs is tracked using phytoplankton, zooplankton, aquatic and emergent vegetation, invertebrates, fish, and birds. Analysis of zooplankton in multiple size fractions from multiple depths in reservoirs can provide insight into food-web dynamics. The presentation will highlight application of these methods in several Hg-contaminated watersheds, with emphasis on understanding seasonal variability in designing effective sampling strategies.

  1. Ethics and Animal Numbers: Informal Analyses, Uncertain Sample Sizes, Inefficient Replications, and Type I Errors

    PubMed Central

    2011-01-01

    To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
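    For context on the power analysis mentioned above, the following minimal sketch computes an approximate per-group sample size for a two-group comparison using the standard normal approximation; the function name is invented, and the abstract's caveat applies: the effect size plugged in here is rarely known before the study.

    ```python
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate animals per group for a two-group comparison via the
        normal approximation n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
        where d is the standardized effect size (Cohen's d)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return 2 * ((z_alpha + z_beta) / effect_size) ** 2

    print(round(n_per_group(0.8)))  # roughly 25 per group for a large effect
    print(round(n_per_group(0.5)))  # roughly 63 per group for a medium effect
    ```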

  2. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF OUTDOOR ILLUMINATED SIGNS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  3. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SHEET METAL COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cente...

  4. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF BRAZED ALUMINUM OIL COOLERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  5. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF ALUMINUM CANS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  6. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED CIRCUIT BOARDS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  7. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF IRON CASTINGS AND FABRICATED SHEET METAL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  8. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  9. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-sized manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Ce...

  10. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PENNY BLANKS AND ZINC PRODUCTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  11. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A METAL PARTS COATING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  12. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CORN SYRUP AND CORN STARCH

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  13. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CUTTING AND WELDING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot program to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  14. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SILICON-CONTROLLED RECTIFIERS AND SCHOTTKY RECTIFIERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Ce...

  15. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF REBUILT RAILWAY CARS AND COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium- size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  16. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED PLASTIC BAGS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established ...

  17. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION FOR A MANUFACTURER OF COMPRESSED AIR EQUIPMENT COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  18. Human resource configurations: investigating fit with the organizational context.

    PubMed

    Toh, Soo Min; Morgeson, Frederick P; Campion, Michael A

    2008-07-01

    The present study investigated how key organizational contextual factors relate to bundles of human resource (HR) practices. In a two-phase study of a sample of 661 organizations representing a full range of industries and organizational size, the authors found that organizations use 1 of 5 HR bundles: cost minimizers, contingent motivators, competitive motivators, resource makers, and commitment maximizers. In addition, the authors showed that the organizations that use a given type of HR bundle may be distinguished by the organizational values they pursue and their organizational structure, thus suggesting that HR choices are related to the context within which organizations operate.

  19. Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Cameron, E.; Driver, S. P.

    2009-01-01

    Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes. Namely, “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0 where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible) we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.

  20. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes, including loops in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
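    As a minimal illustration of the gate-combination logic described above (not the CUTSETS code itself, which is written in C and also handles digraphs, cycles, and COD-file conversion), the following sketch computes minimal cut sets for a toy fault tree; the tree and event names are invented.

    ```python
    from itertools import product

    # Toy fault tree: gates map to (type, children); names not in TREE are basic events.
    TREE = {
        "TOP": ("OR", ["G1", "E3"]),
        "G1": ("AND", ["E1", "G2"]),
        "G2": ("OR", ["E2", "E3"]),
    }

    def minimize_sets(cut_sets):
        """Keep only minimal cut sets by discarding supersets of smaller sets."""
        minimal = []
        for cs in sorted(set(cut_sets), key=len):
            if not any(m <= cs for m in minimal):
                minimal.append(cs)
        return minimal

    def cut_sets(node):
        """Recursive top-down solve: OR gates pool their children's cut sets,
        AND gates take unions over the cross-product of children's cut sets."""
        if node not in TREE:  # basic event
            return [frozenset([node])]
        gate, children = TREE[node]
        child_sets = [cut_sets(c) for c in children]
        if gate == "OR":
            combined = [cs for sets in child_sets for cs in sets]
        else:  # AND
            combined = [frozenset().union(*combo) for combo in product(*child_sets)]
        return minimize_sets(combined)

    print(cut_sets("TOP"))  # minimal cut sets: {E3} and {E1, E2}
    ```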

  1. Comprehensive particle characterization of modern gasoline and diesel passenger cars at low ambient temperatures

    NASA Astrophysics Data System (ADS)

    Mathis, Urs; Mohr, Martin; Forss, Anna-Maria

    Particle measurements were performed in the exhaust of five light-duty vehicles (Euro-3) at +23, -7, and -20 °C ambient temperatures. The characterization included measurements of particle number, active surface area, number size distribution, and mass size distribution. We investigated two port-injection spark-ignition (PISI) vehicles, a direct-injection spark-ignition (DISI) vehicle, a compression-ignition (CI) vehicle with diesel particle filter (DPF), and a CI vehicle without DPF. To minimize sampling effects, particles were directly sampled from the tailpipe with a novel porous tube diluter at controlled sampling parameters. The diluted exhaust was split into two branches to measure either all or only non-volatile particles. The effect of ambient temperature on particle emissions was investigated for cold and warmed-up engines. For the gasoline vehicles and the CI vehicle with DPF, the main portion of particle emission was found in the first minutes of the driving cycle at cold engine start. The particle emission of the CI vehicle without DPF was hardly affected by cold engine start. For the PISI vehicles, particle number emissions increased more than proportionally in the diameter size range from 0.1 to 0.3 μm during cold start at low ambient temperature. Based on the particle mass size distribution, the DPF removed smaller particles (dp < 0.5 μm) more efficiently than larger particles (dp > 0.5 μm). No significant effect of ambient temperature was observed when the engine was warmed up. Peak emission of volatile nanoparticles only took place at specific conditions and was poorly repeatable. Nucleation of particles was predominantly observed during or after strong acceleration at high speed and during regeneration of the DPF.

  2. Nickel speciation in several serpentine (ultramafic) topsoils via bulk synchrotron-based techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebecker, Matthew G.; Chaney, Rufus L.; Sparks, Donald L.

    2017-07-01

    Serpentine soils have elevated concentrations of trace metals including nickel, cobalt, and chromium compared to non-serpentine soils. Identifying the nickel-bearing minerals allows for prediction of the potential mobility of nickel. Synchrotron-based techniques can identify the solid-phase chemical forms of nickel with minimal sample treatment. Element concentrations are known to vary among soil particle sizes in serpentine soils. Sonication is a useful method to physically disperse sand, silt and clay particles in soils. Synchrotron-based techniques and sonication were employed to identify nickel species in discrete particle size fractions in several serpentine (ultramafic) topsoils to better understand solid-phase nickel geochemistry. Nickel commonly resided in primary serpentine parent material such as layered-phyllosilicate and chain-inosilicate minerals and was associated with iron oxides. In the clay fractions, nickel was associated with iron oxides and primary serpentine minerals, such as lizardite. Linear combination fitting (LCF) was used to characterize nickel species. Total metal concentration did not correlate with nickel speciation and is not an indicator of the major nickel species in the soil. Differences in soil texture were related to different nickel speciation for several particle size fractionated samples. A discussion on LCF illustrates the importance of choosing standards based not only on statistical methods such as Target Transformation but also on sample mineralogy and particle size. Results from the F-test (Hamilton test), which is an underutilized tool in the literature for LCF in soils, highlight its usefulness for determining the appropriate number of standards to use for LCF. EXAFS shell fitting illustrates that destructive interference, commonly found for light and heavy elements in layered double hydroxides and in phyllosilicates, can also occur in inosilicate minerals, causing similar structural features and leading to false positive results in LCF.
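    The linear combination fitting mentioned above can be illustrated with a small non-negative least-squares sketch; the function and the synthetic spectra below are invented for illustration and omit the energy alignment, fitting windows, and F-test (Hamilton test) model selection used in real LCF workflows.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def lcf(sample_spectrum, standards):
        """Fit a sample spectrum as a non-negative linear combination of the
        standard spectra stored as columns of `standards`; returns fractions
        normalized to sum to one plus the residual norm."""
        weights, residual = nnls(standards, sample_spectrum)
        return weights / weights.sum(), residual

    # Synthetic check: the "sample" is 70% standard A and 30% standard B plus noise.
    x = np.linspace(0.0, 10.0, 200)
    std_a = np.exp(-((x - 4.0) ** 2))
    std_b = np.sin(x) ** 2
    standards = np.column_stack([std_a, std_b])
    sample = 0.7 * std_a + 0.3 * std_b + 0.01 * np.random.default_rng(0).standard_normal(x.size)
    fractions, residual = lcf(sample, standards)
    print(fractions)  # approximately [0.7, 0.3]
    ```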

  3. Development of a modified cortisol extraction procedure for intermediately sized fish not amenable to whole-body or plasma extraction methods.

    PubMed

    Guest, Taylor W; Blaylock, Reginald B; Evans, Andrew N

    2016-02-01

    The corticosteroid hormone cortisol is the central mediator of the teleost stress response. Therefore, the accurate quantification of cortisol in teleost fishes is a vital tool for addressing fundamental questions about an animal's physiological response to environmental stressors. Conventional steroid extraction methods using plasma or whole-body homogenates, however, are inefficient within an intermediate size range of fish that are too small for phlebotomy and too large for whole-body steroid extractions. To assess the potential effects of hatchery-induced stress on survival of fingerling hatchery-reared Spotted Seatrout (Cynoscion nebulosus), we developed a novel extraction procedure for measuring cortisol in intermediately sized fish (50-100 mm in length) that are not amenable to standard cortisol extraction methods. By excising a standardized portion of the caudal peduncle, this tissue extraction procedure allows for a small portion of a larger fish to be sampled for cortisol, while minimizing the potential interference from lipids that may be extracted using whole-body homogenization procedures. Assay precision was comparable to published plasma and whole-body extraction procedures, and cortisol quantification over a wide range of sample dilutions displayed parallelism versus assay standards. Intra-assay %CV was 8.54%, and average recovery of spiked samples was 102%. Also, tissue cortisol levels quantified using this method increase 30 min after handling stress and are significantly correlated with blood values. We conclude that this modified cortisol extraction procedure provides an excellent alternative to plasma and whole-body extraction procedures for intermediately sized fish, and will facilitate the efficient assessment of cortisol in a variety of situations ranging from basic laboratory research to industrial and field-based environmental health applications.

  4. Single exosome detection in serum using microtoroid optical resonators (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Su, Judith

    2016-03-01

    Recently exosomes have attracted interest due to their potential as cancer biomarkers. We report the real time, label-free sensing of single exosomes in serum using microtoroid optical resonators. We use this approach to assay the progression of tumors implanted in mice by specifically detecting low concentrations of tumor-derived exosomes. Our approach measures the adsorption of individual exosomes onto a functionalized silica microtoroid by tracking changes in the optical resonant frequency of the microtoroid. When exosomes land on the microtoroid, they perturb its refractive index in the evanescent field and thus shift its resonance frequency. Through digital frequency locking, we are able to rapidly track these shifts with accuracies of better than 10 attometers (one part in 10^11). Samples taken from tumor-implanted mice from later weeks generated larger frequency shifts than those from earlier weeks. Control samples taken from a mouse with no tumor generated no such increase in signal between subsequent weeks. Analysis of shifts from tumor-implanted mouse samples show a distribution of unitary steps, with the maximum step having a height of ~1.2 fm, corresponding to an exosome size of 44 ± 4.8 nm. This size range corresponds to that found by performing nanoparticle tracking analysis on the same samples. Our results demonstrate development towards a minimally-invasive tumor "biopsy" that eliminates the need to find and access a tumor.

  5. Motion mitigation for lung cancer patients treated with active scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg

    2015-05-15

    Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which was observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor ∼2.6 faster than gating for this scenario. For the small spot size, however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) as well as gating was able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.

  6. Dialysis Extraction for Chromatography

    NASA Technical Reports Server (NTRS)

    Jahnsen, V. J.

    1985-01-01

    Chromatographic-sample pretreatment by dialysis detects traces of organic contaminants in water samples analyzed in the field with minimal analysis equipment and minimal quantities of solvent. Technique also of value wherever aqueous sample and solvent must not make direct contact.

  7. Characterisation and discrimination of various types of lac resin using gas chromatography mass spectrometry techniques with quaternary ammonium reagents.

    PubMed

    Sutherland, K; del Río, J C

    2014-04-18

    A variety of lac resin samples obtained from artists' suppliers, industrial manufacturers, and museum collections were analysed using gas chromatography mass spectrometry (GCMS) and reactive pyrolysis GCMS with quaternary ammonium reagents. These techniques allowed a detailed chemical characterisation of microgram-sized samples, based on the detection and identification of derivatives of the hydroxy aliphatic and cyclic (sesquiterpene) acids that compose the resin. Differences in composition could be related to the nature of the resin, e.g. wax-containing (unrefined), bleached, or aged samples. Furthermore, differences in the relative abundances of aliphatic hydroxyacids appear to be associated with the biological source of the resin. The diagnostic value of newly characterised lac components, including 8-hydroxyacids, is discussed here for the first time. Identification of derivatised components was aided by AMDIS deconvolution software, and discrimination of samples was enhanced by statistical evaluation of data using principal component analysis. The robustness of the analyses, together with the minimal sample size required, makes these very powerful approaches for the characterisation of lac resin in museum objects. The value of such analyses for enhancing the understanding of museum collections is illustrated by two case studies of objects in the collection of the Philadelphia Museum of Art: a restorer's varnish on a painting by Luca Signorelli, and a pictorial inlay in an early nineteenth-century High Chest by George Dyer. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Current trends in treatment of hypertension in Karachi and cost minimization possibilities.

    PubMed

    Hussain, Izhar M; Naqvi, Baqir S; Qasim, Rao M; Ali, Nasir

    2015-01-01

    This study identifies drug usage trends in Stage I hypertensive patients without any compelling indications in Karachi, deviations of current practices from evidence-based antihypertensive therapeutic guidelines, and opportunities for cost minimization. In the present study, conducted during June 2012 to August 2012, two sets of surveys were used. Randomized stratified independent surveys were conducted in doctors and the general population - including patients - using pretested questionnaires. Sample sizes for doctors and the general population were 100 and 400, respectively. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS). The financial impact was also analyzed. On the basis of patients' and doctors' feedback, beta blockers and angiotensin converting enzyme inhibitors were used more frequently than other drugs. Thiazides and low-priced generics were hardly prescribed. Beta blockers were prescribed widely and considered cost effective. This trend increases cost by two to ten times. Feedback showed that therapeutic guidelines were not followed by the doctors practicing in the community and hospitals in Karachi. Thiazide diuretics were hardly used. Beta blockers were widely prescribed. High-priced market leaders or expensive branded generics were commonly prescribed. Therefore, there are great opportunities for cost minimization by using evidence-based, clinically effective, and safe medicines.

  9. Effects of Melt Convection and Solid Transport on Macrosegregation and Grain Structure in Equiaxed Al-Cu Alloys

    NASA Technical Reports Server (NTRS)

    Rerko, Rodney S.; deGroh, Henry C., III; Beckermann, Christoph

    2000-01-01

    Macrosegregation in metal casting can be caused by thermal and solutal melt convection, and the transport of unattached solid crystals resulting from nucleation in the bulk liquid or dendrite fragmentation. To develop a comprehensive numerical model for the casting of alloys, an experimental study has been conducted to generate benchmark data with which such a solidification model could be tested. The objectives were: (1) experimentally study the effects of solid transport and thermosolutal convection on macrosegregation and grain size; and (2) provide a complete set of boundary conditions - temperature data, segregation data, and grain size data - to validate numerical models. Through the control of end cooling and side wall heating, radial temperature gradients in the sample and furnace were minimized. Thus the vertical crucible wall was adiabatic. Samples at room temperature were 24 cc and 95 mm long. The alloys used were Al-1 wt. pct. Cu and Al-10 wt. pct. Cu; the starting point for solidification was isothermal at 710 and 685 °C, respectively. To induce an equiaxed structure, various amounts of the grain refiner TiB2 were added. Samples were either cooled from the top or the bottom. Several trends in the data stand out. In attempting to model these experiments, concentrating on these trends or differences may be beneficial.

  10. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable for NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups each with a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping them frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted heat-activated memory alloy caps [5] if the heating does not affect the sample, or by crimping caps similar to bottle capping. We prefer that cap sealing surfaces be external to the cup rim to prevent sample dust inside the cups interfering with sealing, or contamination of the sample by Teflon seal elements (if adopted). Finally the sample collection rover, or a Fetch rover, selects cups with best-choice samples and loads them into a sample tray, before delivering it to the Earth Return Vehicle (ERV) in the MSR Dragon capsule as described in [1] (Fig 1). This ensures best use of the MSR payload mass allowance. A 3 meter long jointed robot arm is extended from the Dragon capsule's crew hatch, retrieves the sample tray and inserts it into the sample canister payload located on the ERV stage. The robot arm has capacity to obtain grab samples in the event of a rover failure. The sample canister has a robot arm capture casting to enable capture by crewed or robot spacecraft when it returns to Earth orbit.

  11. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF HEATING, VENTILATING, AND AIR CONDITIONING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  12. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF BASEBALL BATS AND GOLF CLUBS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected un...

  13. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF IRON CASTINGS AND FABRICATED SHEET METAL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected univ...

  14. Geographic and host size variations as indicators of Anisakis pegreffii infection in European pilchard (Sardina pilchardus) from the Mediterranean Sea: Food safety implications.

    PubMed

    Bušelić, Ivana; Botić, Antonela; Hrabar, Jerko; Stagličić, Nika; Cipriani, Paolo; Mattiucci, Simonetta; Mladineo, Ivona

    2018-02-02

    European pilchards are traditionally eaten marinated or salted in the Mediterranean countries, often without thermal processing or gutting due to their small size. Since ingestion of live third-stage Anisakis larvae represents a causing agent in the onset of anisakiasis, the aim of our study was to assess the prevalence and intensity of Anisakis infection in European pilchards originating from different Mediterranean regions in a three-year sampling period (2013-2015). A total of 1564 specimens of European pilchard collected from two geographically distinct sampling regions (western Mediterranean and Adriatic Sea) were examined using the UV-Press method, which utilises the fluorescence of frozen anisakids in flattened and subsequently frozen fillets and viscera. A subsample of 67 isolated larvae was identified as A. pegreffii by diagnostic allozyme markers and sequence analyses of the mtDNA cox2 locus. The overall prevalence in pilchards was 12.2% (range 0-44.9% for different sampling points) and mean intensity 1.8. More importantly, we have observed an overall larval prevalence of 1.5% in fillets. The highest prevalence (44.9%) was recorded in pilchards caught in western parts of the Mediterranean. As fish host size was a significant predictor of parasite abundance, it should be highlighted that these pilchards were also the largest (mean total length 173.2 mm); on average >2 cm larger than the rest of the samples. Other isolated nematode species included Hysterothylacium sp. in viscera, showing almost double the A. pegreffii prevalence, at 20.1%. In summary, our study demonstrates that the presence of A. pegreffii in European pilchards from the Mediterranean Sea is highly influenced by both geographic and host size variation. This implies that, before future risk management measures are developed, these variables should be assessed in order to minimize public health concerns. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    PubMed Central

    Morvan, Camille; Maloney, Laurence T.

    2012-01-01

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428

  16. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    NASA Astrophysics Data System (ADS)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the space-heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  17. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
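    As a rough illustration of the deconvolution step discussed above, the following sketch implements plain Richardson-Lucy deconvolution on a synthetic image; it is a textbook version, not the setup-specific processing chain described in the abstract (which applies the deconvolution to gbPC-CT projections before reconstruction).

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=30, eps=1e-12):
        """Plain Richardson-Lucy deconvolution: each iteration multiplies the
        current estimate by the ratio of the data to the re-blurred estimate,
        convolved with the mirrored PSF (the adjoint blur)."""
        estimate = np.full_like(image, image.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode="same")
            estimate *= fftconvolve(image / (reblurred + eps), psf_mirror, mode="same")
        return estimate

    # Blur a random test image with a small Gaussian PSF, then deconvolve it.
    rng = np.random.default_rng(1)
    truth = rng.random((64, 64))
    g = np.arange(-3, 4)
    psf = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / 2.0)
    psf /= psf.sum()
    blurred = fftconvolve(truth, psf, mode="same")
    restored = richardson_lucy(blurred, psf)
    ```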

  18. Non-parametric estimation of population size changes from the site frequency spectrum.

    PubMed

    Waltoft, Berit Lindum; Hobolth, Asger

    2018-06-11

    Changes in population size are a useful quantity for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof for the expression of the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
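    The coalescent forward model linking population size to the expected SFS is not reproduced here, but the two-term objective the abstract describes (goodness of fit plus a smoothness penalty, weighted against each other) can be sketched on generic noisy data with a discrete second-difference penalty; everything below is an invented toy, not the CubSFS implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def penalized_fit(observed, lam):
        """Minimize ||observed - f||^2 + lam * ||second differences of f||^2,
        a discrete version of the two-term objective (goodness of fit plus a
        smoothness penalty); larger lam yields a smoother estimate."""
        def objective(f):
            return np.sum((observed - f) ** 2) + lam * np.sum(np.diff(f, n=2) ** 2)
        return minimize(objective, x0=observed.copy(), method="L-BFGS-B").x

    # Toy data: a smooth trend plus noise, smoothed at two penalty weights.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 50)
    observed = np.exp(-3.0 * t) + 0.05 * rng.standard_normal(t.size)
    rough = penalized_fit(observed, lam=0.1)
    smooth = penalized_fit(observed, lam=100.0)
    ```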

  19. Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.

    PubMed

    Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A

    2018-03-08

    Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low yield libraries. Reamplification of low yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time with minimal modifications.

  20. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., one which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.

  1. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., one which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
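    The classical point estimate that the Bayesian procedure above builds on (an empirical KLE/PCA obtained from an SVD of the centered data matrix) can be sketched as follows; the Bayesian treatment of sampling error via the matrix Bingham posterior and Gibbs sampling is not reproduced here.

    ```python
    import numpy as np

    def sample_kle(samples, n_modes):
        """Empirical KLE/PCA from data: SVD of the centered data matrix
        (rows are realizations, columns are field values) gives orthonormal
        modes, their eigenvalues, and the KL coefficients of each sample."""
        mean = samples.mean(axis=0)
        centered = samples - mean
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        modes = vt[:n_modes]
        eigvals = s[:n_modes] ** 2 / (samples.shape[0] - 1)
        coeffs = centered @ modes.T
        return mean, modes, eigvals, coeffs

    # 40 realizations of a 200-point Brownian-motion-like field (n << dimension).
    rng = np.random.default_rng(3)
    field = np.cumsum(rng.standard_normal((40, 200)) / np.sqrt(200), axis=1)
    mean, modes, eigvals, coeffs = sample_kle(field, n_modes=5)
    print(eigvals)  # rapidly decaying spectrum, estimated from few samples
    ```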

  2. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
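    A minimal sketch of one of the three estimators compared above, the method of moments, is given below under the common parameterization Var(y) = mu + alpha * mu^2; the simulation loop is an invented toy, but it reproduces the qualitative point that the estimate degrades at low sample means and small sample sizes. The weighted-regression and maximum-likelihood estimators are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def mom_dispersion(y):
        """Method-of-moments estimate of the dispersion parameter alpha in the
        Poisson-gamma model Var(y) = mu + alpha * mu^2."""
        mu_hat = y.mean()
        return (y.var(ddof=1) - mu_hat) / mu_hat ** 2

    def simulate_poisson_gamma(mu, alpha, n):
        """Draw counts by mixing a gamma-distributed mean into a Poisson."""
        lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n)
        return rng.poisson(lam)

    # The estimate becomes erratic (often negative, hence unusable) when both
    # the sample mean and the sample size are small.
    for mu, n in [(0.5, 30), (0.5, 300), (5.0, 30), (5.0, 300)]:
        est = [mom_dispersion(simulate_poisson_gamma(mu, 0.5, n)) for _ in range(500)]
        print(f"mu={mu}, n={n}: mean={np.mean(est):.2f}, sd={np.std(est):.2f}")
    ```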

  3. A low cost solution for post-biopsy complications using available RFA generator and coaxial core biopsy needle.

    PubMed

    Azlan, C A; Mohd Nasir, N F; Saifizul, A A; Faizul, M S; Ng, K H; Abdullah, B J J

    2007-12-01

    Percutaneous image-guided needle biopsy is typically performed in highly vascular organs or in tumours with rich macroscopic and microscopic blood supply. The main risks related to this procedure are haemorrhage and implantation of tumour cells in the needle tract after the biopsy needle is withdrawn. From numerous conducted studies, it was found that heating the needle tract using alternating current in the radiofrequency (RF) range has the potential to minimize these effects. However, this solution requires the use of specially designed needles, which would make the procedure relatively expensive and complicated. Thus, we propose a simple solution by using readily available coaxial core biopsy needles connected to a radiofrequency ablation (RFA) generator. In order to do so, we have designed and developed an adapter to interface between these two devices. For evaluation purposes, we used bovine liver as a sample tissue. The experimental procedure was done to study the effect of different parameter settings on the size of coagulation necrosis caused by the RF current heating on the subject. The delivery of the RF energy was varied by changing the values for delivered power, power delivery duration, and insertion depth. The results showed that the size of the coagulation necrosis is affected by all of the parameters tested. In general, the size of the region is enlarged with higher delivery of RF power, longer duration of power delivery, and shallower needle insertion, and becomes relatively constant beyond a certain value. We also found that the proposed solution provides a low-cost and practical way to minimize unwanted post-biopsy effects.

  4. Robotic unclamped "minimal-margin" partial nephrectomy: ongoing refinement of the anatomic zero-ischemia concept.

    PubMed

    Satkunasivam, Raj; Tsai, Sheaumei; Syan, Sumeet; Bernhard, Jean-Christophe; de Castro Abreu, Andre Luis; Chopra, Sameer; Berger, Andre K; Lee, Dennis; Hung, Andrew J; Cai, Jie; Desai, Mihir M; Gill, Inderbir S

    2015-10-01

    Anatomic partial nephrectomy (PN) techniques aim to decrease or eliminate global renal ischemia. To report the technical feasibility of completely unclamped "minimal-margin" robotic PN. We also illustrate the stepwise evolution of anatomic PN surgery with related outcomes data. This study was a retrospective analysis of 179 contemporary patients undergoing anatomic PN at a tertiary academic institution between October 2009 and February 2013. Consecutive consented patients were grouped into three cohorts: group 1, with superselective clamping and developmental-curve experience (n = 70); group 2, with superselective clamping and mature experience (n = 60); and group 3, which had completely unclamped, minimal-margin PN (n = 49). Patients in groups 1 and 2 underwent superselective tumor-specific devascularization, whereas patients in group 3 underwent completely unclamped minimal-margin PN adjacent to the tumor edge, a technique that takes advantage of the radially oriented intrarenal architecture and anatomy. Primary outcomes assessed the technical feasibility of robotic, completely unclamped, minimal-margin PN; short-term changes in estimated glomerular filtration rate (eGFR); and development of new-onset chronic kidney disease (CKD) stage >3. Secondary outcome measures included perioperative variables, 30-d complications, and histopathologic outcomes. Demographic data were similar among groups. For similarly sized tumors (p = 0.13), percentage of kidney preserved was greater (p = 0.047) and margin width was narrower (p = 0.0004) in group 3. In addition, group 3 had less blood loss (200, 225, and 150ml; p = 0.04), lower transfusion rates (21%, 23%, and 4%; p = 0.008), and shorter hospital stay (p = 0.006), whereas operative time and 30-d complication rates were similar. At 1-mo postoperatively, median percentage reduction in eGFR was similar (7.6%, 0%, and 3.0%; p = 0.53); however, new-onset CKD stage >3 occurred less frequently in group 3 (23%, 10%, and 2%; p = 0.003). Study limitations included retrospective analysis, small sample size, and short follow-up. We developed an anatomically based technique of robotic, unclamped, minimal-margin PN. This evolution from selective clamped to unclamped PN may further optimize functional outcomes but requires external validation and longer follow-up. The technical evolution of partial nephrectomy surgery is aimed at eliminating global renal damage from the cessation of blood flow. An unclamped minimal-margin technique is described and may offer renal functional advantage but requires long-term follow-up and validation at other institutions. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  5. Microanalyzer for Biomonitoring of Lead (Pb) in Blood and Urine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yantasee, Wassana; Timchalk, Chuck; Lin, Yuehe

    2007-01-01

    Biomonitoring of lead (Pb) in blood and urine enables quantitative evaluation of human occupational and environmental exposures to Pb. State-of-the-art ICP-MS instruments analyze metals in laboratories, resulting in lengthy turnaround times, and are expensive. In response to the growing need for a metal analyzer for on-site, real-time monitoring of trace metals in individuals, we developed a portable microanalyzer based on flow-injection/adsorptive stripping voltammetry and used it to analyze Pb in rat blood and urine. Fouling of electrodes by proteins often prevents the effective use of electrochemical sensors in biological matrices. Such fouling was minimized through suitable sample pretreatment and the turbulent flow of Pb-containing blood and urine onto the glassy carbon electrode inside the microanalyzer, which resulted in no apparent electrode fouling even when the samples contained 50% urine or 10% blood by volume. There was no matrix effect on the voltammetric Pb signals even when the samples contained 10% blood or 10% urine. The microanalyzer offered a linear concentration range relevant to Pb exposure levels in humans (0-20 ppb in 10%-blood samples, 0-50 ppb in 50%-urine samples). The device had excellent sensitivity and reproducibility; Pb detection limits were 0.54 ppb and 0.42 ppb, and % RSDs were 4.9 and 2.4 in 50%-urine and 10%-blood samples, respectively. It offered high throughput (3 min per sample), economical use of samples (60 µL per measurement), making blood collection less invasive, especially for children, and low reagent consumption (1 µg of Hg per measurement), thus minimizing the health concerns of mercury use. Being miniaturized, the microanalyzer is portable and field-deployable. It therefore has great potential to be the next-generation analyzer for biomonitoring of toxic metals.
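
    The figures of merit quoted above (detection limit, % RSD, calibration range) are conventionally obtained from a calibration line and replicate measurements. The sketch below shows that arithmetic with invented numbers; it is illustrative only and does not reproduce the study's data processing.

    ```python
    import numpy as np

    # Hypothetical calibration data for Pb in a 10%-blood matrix (ppb vs. peak current, nA).
    # Values are illustrative only; they are not taken from the study.
    conc = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])      # ppb
    signal = np.array([1.2, 9.8, 24.5, 48.9, 73.1, 97.6])   # nA

    # Ordinary least-squares calibration line.
    slope, intercept = np.polyfit(conc, signal, 1)

    # Detection limit from replicate low-level measurements: LOD = 3 * s_blank / slope.
    blank_replicates = np.array([1.1, 1.3, 1.0, 1.4, 1.2, 1.1, 1.3])  # nA
    lod = 3 * blank_replicates.std(ddof=1) / slope

    # Relative standard deviation (% RSD) of replicate measurements at one concentration.
    replicates_10ppb = np.array([48.1, 49.5, 47.8, 50.2, 48.7])       # nA
    rsd = 100 * replicates_10ppb.std(ddof=1) / replicates_10ppb.mean()

    print(f"slope = {slope:.2f} nA/ppb, LOD = {lod:.2f} ppb, RSD = {rsd:.1f} %")
    ```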

  6. Novel strategies to construct complex synthetic vectors to produce DNA molecular weight standards.

    PubMed

    Chen, Zhe; Wu, Jianbing; Li, Xiaojuan; Ye, Chunjiang; Wenxing, He

    2009-05-01

    DNA molecular weight standards (DNA markers, nucleic acid ladders) are commonly used in molecular biology laboratories as references to estimate the size of DNA samples during electrophoresis. One method of DNA marker production is digestion of synthetic vectors harboring multiple DNA fragments of known sizes with restriction enzymes. In this article, we describe three novel strategies-sequential DNA fragment ligation, screening of ligation products by polymerase chain reaction (PCR) with end primers, and "small fragment accumulation"-for constructing complex synthetic vectors and minimizing the mass differences between DNA fragments produced by restriction digestion of synthetic vectors. The strategy can be applied to construct various complex synthetic vectors to produce any type of low-range DNA marker of the kind usually available commercially. In addition, the strategy is useful for single-step ligation of multiple DNA fragments in the construction of complex synthetic vectors and for other applications in the molecular biology field.

  7. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling, involved in re-sizing and rotation transformations, is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. This process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm to assess images that have undergone complex transformations. The results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
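
    The abstract does not give the kernel's closed form, so the sketch below assumes a hypothetical Gabor-like kernel with five parameters (amplitude, angular frequency, phase, width, duration) and fits it to a sampled triangular interpolation kernel by gradient-based minimization of a squared-error function, in the spirit of the approach described. It is a sketch under those assumptions, not the paper's model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical 5-parameter kernel: amplitude, angular frequency, phase,
    # standard deviation and duration (support half-width). This form is assumed
    # for illustration; the paper's kernel is not reproduced here.
    def kernel(x, a, w, phi, sigma, T):
        k = a * np.exp(-x**2 / (2 * sigma**2)) * np.cos(w * x + phi)
        return np.where(np.abs(x) <= T, k, 0.0)

    # Target: a sampled triangular (linear-interpolation) kernel to imitate.
    x = np.linspace(-3, 3, 301)
    target = np.clip(1.0 - np.abs(x), 0.0, None)

    # Squared-error objective minimized with a gradient-based method.
    def objective(p):
        return np.sum((kernel(x, *p) - target) ** 2)

    p0 = np.array([1.0, 1.0, 0.0, 1.0, 2.0])
    res = minimize(objective, p0, method="L-BFGS-B")
    print("fitted parameters:", np.round(res.x, 3), "residual:", round(res.fun, 4))
    ```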

  8. Semantic-gap-oriented active learning for multilabel image annotation.

    PubMed

    Tang, Jinhui; Zha, Zheng-Jun; Tao, Dacheng; Chua, Tat-Seng

    2012-04-01

    User interaction is an effective way to handle the semantic gap problem in image annotation. To minimize user effort in the interactions, many active learning methods have been proposed. These methods treat the semantic concepts individually or correlatively. However, they still neglect the key motivation of user feedback: to tackle the semantic gap. The size of the semantic gap of each concept is an important factor that affects the performance of user feedback. Users should devote more effort to concepts with large semantic gaps, and vice versa. In this paper, we propose a semantic-gap-oriented active learning method, which incorporates the semantic gap measure into the information-minimization-based sample selection strategy. The basic learning model used in the active learning framework is an extended multilabel version of the sparse-graph-based semisupervised learning method that incorporates the semantic correlation. Extensive experiments conducted on two benchmark image data sets demonstrated the importance of bringing the semantic gap measure into the active learning process.
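
    A minimal sketch of the selection idea, with hypothetical numbers and a simple entropy-based informativeness term rather than the paper's information-minimization criterion: per-concept uncertainty is weighted by a semantic-gap weight before ranking samples for user feedback.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_samples, n_concepts = 200, 5
    # Hypothetical posterior probabilities from a multilabel classifier.
    probs = rng.uniform(size=(n_samples, n_concepts))
    # Hypothetical per-concept semantic-gap weights (larger gap -> more user effort needed).
    gap_weight = np.array([0.9, 0.2, 0.6, 0.4, 0.8])

    # Binary-entropy uncertainty per sample-concept pair.
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps) + (1 - probs) * np.log(1 - probs + eps))

    # Gap-weighted informativeness: concepts with larger semantic gaps count more.
    score = (entropy * gap_weight).sum(axis=1)

    # Select the top-k samples for user feedback.
    k = 10
    selected = np.argsort(score)[::-1][:k]
    print("samples selected for annotation:", selected)
    ```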

  9. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrated that our method effectively satisfies four essential requirements of dominating sets for practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resilience to network damage. Supported by DARPA, DTRA, and NSF.
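
    As a minimal illustration of the idea (not the authors' code), the sketch below builds a synthetic scale-free network, extracts a maximal independent set with networkx, and verifies that it is both independent and dominating; the graph parameters are arbitrary.

    ```python
    import networkx as nx

    # Scale-free test graph (Barabasi-Albert model); parameters are illustrative.
    G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

    # A maximal independent set: no two members are adjacent, and no vertex can be
    # added without breaking independence -- which makes it a dominating set.
    mis = set(nx.maximal_independent_set(G, seed=42))

    # Verify the two defining properties.
    independent = all(not any(v in mis for v in G[u]) for u in mis)
    dominating = all(u in mis or any(v in mis for v in G[u]) for u in G)

    print(f"MIS size: {len(mis)} of {G.number_of_nodes()} nodes "
          f"(independent={independent}, dominating={dominating})")
    ```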

  10. The WAIS Melt Monitor: An automated ice core melting system for meltwater sample handling and the collection of high resolution microparticle size distribution data

    NASA Astrophysics Data System (ADS)

    Breton, D. J.; Koffman, B. G.; Kreutz, K. J.; Hamilton, G. S.

    2010-12-01

    Paleoclimate data are often extracted from ice cores by careful geochemical analysis of meltwater samples. The analysis of the microparticles found in ice cores can also yield unique clues about atmospheric dust loading and transport, dust provenance, and past environmental conditions. Determination of microparticle concentration, size distribution, and chemical makeup as a function of depth is especially difficult because the particle size measurement either consumes or contaminates the meltwater, preventing further geochemical analysis. Here we describe a microcontroller-based ice core melting system which allows the collection of separate microparticle and chemistry samples from the same depth intervals in the ice core, while logging and accurately depth-tagging real-time electrical conductivity and particle size distribution data. This system was designed specifically to support microparticle analysis of the WAIS Divide WDC06A deep ice core, but many of the subsystems are applicable to more general ice core melting operations. Major system components include: a rotary encoder to measure ice core melt displacement with 0.1 millimeter accuracy; a meltwater tracking system to assign core depths to conductivity, particle, and sample vial data; an optical debubbler level control system to protect the Abakus laser particle counter from damage due to air bubbles; a Rabbit 3700 microcontroller which communicates with a host PC, collects encoder and optical sensor data, and autonomously operates Gilson peristaltic pumps and fraction collectors to provide automatic sample handling; melt monitor control software operating on a standard PC, allowing the user to control and view the status of the system; and data logging software operating on the same PC to collect data from the melting, electrical conductivity, and microparticle measurement systems. Because microparticle samples can easily be contaminated, we use optical air bubble sensors and high resolution ice core density profiles to guide the melting process. The combination of these data allows us to analyze melt head performance, minimize outer-to-inner fraction contamination, and avoid melt head flooding. The WAIS Melt Monitor system allows the collection of real-time, sub-annual microparticle and electrical conductivity data while producing and storing enough sample for traditional Coulter-Counter particle measurements as well as long-term acid leaching of bioactive metals (e.g., Fe, Co, Cd, Cu, Zn) prior to chemical analysis.

  11. [Analysis of the patient safety culture in hospitals of the Spanish National Health System].

    PubMed

    Saturno, P J; Da Silva Gama, Z A; de Oliveira-Sousa, S L; Fonseca, Y A; de Souza-Oliveira, A C; Castillo, Carmen; López, M José; Ramón, Teresa; Carrillo, Andrés; Iranzo, M Dolores; Soria, Victor; Saturno, Pedro J; Parra, Pedro; Gomis, Rafael; Gascón, Juan José; Martinez, José; Arellano, Carmen; Gama, Zenewton A Da Silva; de Oliveira-Sousa, Silvana L; de Souza-Oliveira, Adriana C; Fonseca, Yadira A; Ferreira, Marta Sobral

    2008-12-01

    A safety culture is essential to minimize errors and adverse events. Its measurement is needed to design activities in order to improve it. This paper describes the methods and main results of a study on safety climate in a nation-wide representative sample of public hospitals of the Spanish NHS. The Hospital Survey on Patient Safety Culture questionnaire was distributed to a random sample of health professionals in a representative sample of 24 hospitals, proportionally stratified by hospital size. Results are analyzed to provide a description of safety climate, its strengths and weaknesses. Differences by hospital size, type of health professional, and service are analyzed using ANOVA. A total of 2503 responses are analyzed (response rate: 40%; 93% from professionals with direct patient contact). A total of 50% gave patient safety a score from 6 to 8 (on a 10-point scale); 95% reported < 2 events in the last year. The dimensions "Teamwork within hospital units" (71.8 [1.8]) and "Supervisor/Manager expectations and actions promoting safety" (61.8 [1.7]) have the highest percentage of positive answers. "Staffing", "Teamwork across hospital units", "Overall perceptions of safety" and "Hospital management support for patient safety" could be identified as weaknesses. Significant differences by hospital size, type of professional and service suggest a generally more positive attitude in small hospitals and Pharmacy services, and a more negative one in physicians. Strengths and weaknesses of the safety climate in the hospitals of the Spanish NHS have been identified and are used to design appropriate strategies for improvement.

  12. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
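
    The toy below does not implement the paper's generalized PDFs; it only reproduces the underlying problem: when a template is built from a small Monte Carlo sample, a standard Poisson-likelihood fit of a normalization becomes noticeably noisier and more biased than with a large-sample template. All numbers are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    rng = np.random.default_rng(1)

    # Toy truth: a peak on a falling spectrum, true normalization s_true = 1.
    edges = np.linspace(0, 10, 21)
    centers = 0.5 * (edges[:-1] + edges[1:])
    shape = np.exp(-centers / 3.0) + 0.8 * np.exp(-0.5 * ((centers - 5) / 0.5) ** 2)
    mu_true = 50 * shape / shape.sum() * len(centers)

    def fitted_norm(n_mc):
        """Fit the normalization using a template built from n_mc MC events."""
        mc = rng.choice(centers, size=n_mc, p=mu_true / mu_true.sum())
        template, _ = np.histogram(mc, bins=edges)
        template = template * mu_true.sum() / n_mc      # scale MC to expectation
        data = rng.poisson(mu_true)                     # one pseudo-experiment
        nll = lambda s: -poisson.logpmf(data, np.maximum(s * template, 1e-9)).sum()
        return minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x

    for n_mc in (50, 50_000):
        fits = [fitted_norm(n_mc) for _ in range(200)]
        print(f"n_mc={n_mc:6d}: mean fit = {np.mean(fits):.3f}, spread = {np.std(fits):.3f}")
    ```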

  13. Nanoscale surface modification of glass using a 1064 nm pulsed laser

    NASA Astrophysics Data System (ADS)

    Theppakuttai, Senthil; Chen, Shaochen

    2003-07-01

    We report a method to produce nanopatterns on borosilicate glass with a Nd:yttrium-aluminum-garnet laser (10 ns, 1064 nm), using silica nanospheres. Nonlinear absorption of the enhanced optical field between the spheres and the glass sample is believed to be the primary reason for the creation of nanofeatures on the glass substrate. By shining the laser beam from the backside of the glass sample, scattering effects are minimized and only the direct field enhancement due to the spheres is utilized for surface patterning. To confirm this, calculations based on Mie scattering theory were performed, and the resulting intensity as a function of scattering angle is presented. The nanofeatures obtained by this method are 350 nm in diameter and the distance between them is around 640 nm, which is the same as the size of the spheres used.

  14. Susceptibility of Haemophilus influenzae to chloramphenicol and eight beta-lactam antibiotics.

    PubMed Central

    Thirumoorthi, M C; Kobos, D M; Dajani, A S

    1981-01-01

    We examined the minimal inhibitory concentrations and minimal bactericidal concentrations of chloramphenicol, ampicillin, ticarcillin, cefamandole, cefazolin, cefoxitin, cefotaxime, ceforanide, and moxalactam for 100 isolates of Haemophilus influenzae, 25 of which produced beta-lactamase. Susceptibility was not influenced by the capsular characteristic of the organism. The mean minimal inhibitory concentrations of cefamandole, ticarcillin, and ampicillin for beta-lactamase-producing strains were 3-, 120-, and 400-fold higher than their respective mean minimal inhibitory concentrations for beta-lactamase-negative strains. No such difference was noted for the other antibiotics. We performed time-kill curve studies using chloramphenicol, ampicillin, cefamandole, cefotaxime, and moxalactam at two concentrations of the antimicrobial agents (4 or 20 times the minimal inhibitory concentration) and two inoculum sizes (10(4) or 10(6) colony-forming units per ml). The inoculum size had no appreciable effect on the rate of killing of beta-lactamase-negative strains. The rates at which beta-lactamase-producing strains were killed by chloramphenicol, cefotaxime, and moxalactam were not influenced by the inoculum size. Whereas cefamandole in high concentrations was able to kill an inoculum of 10(6) colony-forming units/ml, it had only a temporary inhibitory effect at low drug concentrations. Methicillin and the beta-lactamase inhibitor CP-45,899 were able to neutralize the inactivation of cefamandole by a large inoculum of beta-lactamase-producing H. influenzae. PMID:6974541

  15. Microfocusing at the PG1 beamline at FLASH

    DOE PAGES

    Dziarzhytski, Siarhei; Gerasimova, Natalia; Goderich, Rene; ...

    2016-01-01

    The Kirkpatrick–Baez (KB) refocusing mirror system installed at the PG1 branch of the plane-grating monochromator beamline at the soft X-ray/XUV free-electron laser in Hamburg (FLASH) is designed to provide tight aberration-free focusing down to 4 µm × 6 µm full width at half-maximum (FWHM) on the sample. Such a focal spot size is mandatory to achieve ultimate resolution and to guarantee best performance of the vacuum-ultraviolet (VUV) off-axis parabolic double-monochromator Raman spectrometer permanently installed at the PG1 beamline as an experimental end-station. The vertical beam size on the sample of the Raman spectrometer, which operates without an entrance slit, defines and limits the energy resolution of the instrument, which has an unprecedented design value of 2 meV for photon energies below 70 eV and about 15 meV for higher energies up to 200 eV. In order to reach the designed focal spot size of 4 µm FWHM (vertically) and to hold the highest spectrometer resolution, special fully motorized in-vacuum manipulators for the KB mirror holders have been developed and the optics have been aligned employing wavefront-sensing techniques as well as ablative imprint analysis. Lastly, aberrations such as astigmatism were minimized. In this article the design and layout of the KB mirror manipulators, the alignment procedure, and microfocus optimization results are presented.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dziarzhytski, Siarhei; Gerasimova, Natalia; Goderich, Rene

    The Kirkpatrick–Baez (KB) refocusing mirror system installed at the PG1 branch of the plane-grating monochromator beamline at the soft X-ray/XUV free-electron laser in Hamburg (FLASH) is designed to provide tight aberration-free focusing down to 4 µm × 6 µm full width at half-maximum (FWHM) on the sample. Such a focal spot size is mandatory to achieve ultimate resolution and to guarantee best performance of the vacuum-ultraviolet (VUV) off-axis parabolic double-monochromator Raman spectrometer permanently installed at the PG1 beamline as an experimental end-station. The vertical beam size on the sample of the Raman spectrometer, which operates without an entrance slit, defines and limits the energy resolution of the instrument, which has an unprecedented design value of 2 meV for photon energies below 70 eV and about 15 meV for higher energies up to 200 eV. In order to reach the designed focal spot size of 4 µm FWHM (vertically) and to hold the highest spectrometer resolution, special fully motorized in-vacuum manipulators for the KB mirror holders have been developed and the optics have been aligned employing wavefront-sensing techniques as well as ablative imprint analysis. Lastly, aberrations such as astigmatism were minimized. In this article the design and layout of the KB mirror manipulators, the alignment procedure, and microfocus optimization results are presented.

  17. FAST: Size-Selective, Clog-Free Isolation of Rare Cancer Cells from Whole Blood at a Liquid-Liquid Interface.

    PubMed

    Kim, Tae-Hyeong; Lim, Minji; Park, Juhee; Oh, Jung Min; Kim, Hyeongeun; Jeong, Hyunjin; Lee, Sun Ju; Park, Hee Chul; Jung, Sungmok; Kim, Byung Chul; Lee, Kyusang; Kim, Mi-Hyun; Park, Do Youn; Kim, Gwang Ha; Cho, Yoon-Kyoung

    2017-01-17

    Circulating tumor cells (CTCs) have great potential to provide minimally invasive ways for the early detection of cancer metastasis and for the response monitoring of various cancer treatments. Despite the clinical importance and progress of CTC-based cancer diagnostics, most of the current methods of enriching CTCs are difficult to implement in general hospital settings due to complex and time-consuming protocols. Among existing technologies, size-based isolation methods provide antibody-independent, relatively simple, and high throughput protocols. However, the clogging issues and lower than desired recovery rates and purity are the key challenges. In this work, inspired by antifouling membranes with liquid-filled pores in nature, clog-free, highly sensitive (95.9 ± 3.1% recovery rate), selective (>2.5 log depletion of white blood cells), rapid (>3 mL/min), and label-free isolation of viable CTCs from whole blood without prior sample treatment is achieved using a stand-alone lab-on-a-disc system equipped with fluid-assisted separation technology (FAST). Numerical simulation and experiments show that this method provides uniform, clog-free, ultrafast cell enrichment with pressure drops much less than in conventional size-based filtration, at 1 kPa. We demonstrate the clinical utility of the point-of-care detection of CTCs with samples taken from 142 patients suffering from breast, stomach, or lung cancer.

  18. From decimeter- to centimeter-sized mobile microrobots: the development of the MINIMAN system

    NASA Astrophysics Data System (ADS)

    Woern, Heinz; Schmoeckel, Ferdinand; Buerkle, Axel; Samitier, Josep; Puig-Vidal, Manel; Johansson, Stefan A. I.; Simu, Urban; Meyer, Joerg-Uwe; Biehl, Margit

    2001-10-01

    Based on small mobile robots, the MINIMAN system provides a platform for micro-manipulation tasks in very different kinds of applications. Three exemplary applications demonstrate the capabilities of the system: the high-precision assembly of an optical system consisting of three millimeter-sized parts, the positioning of single 20-μm cells under the light microscope, and the handling of tiny samples inside the scanning electron microscope are all performed by the same kind of robot. For the different tasks, the robot is equipped with appropriate tools such as micro-pipettes or grippers with force and tactile sensors. To extend the system to multiple robots, it is necessary to further reduce the size of the robots. The above-mentioned robot prototypes employ a slip-stick driving principle. While this design proves to work very well for the described decimeter-sized robots, it is not suitable for further miniaturized robots because of their reduced inertia. Therefore, the developed centimeter-sized robot is driven by multilayered piezoactuators performing defined steps without a slipping phase. To reduce the number of connecting wires, the microrobot has integrated circuits on board; they include high-voltage drivers and a serial communication interface for a minimized number of wires.

  19. A cavitation transition in the energy landscape of simple cohesive liquids and glasses

    NASA Astrophysics Data System (ADS)

    Altabet, Y. Elia; Stillinger, Frank H.; Debenedetti, Pablo G.

    2016-12-01

    In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρS. The tensile limit at ρS is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρS is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.

  20. Effectiveness of radiation processing for elimination of Salmonella Typhimurium from minimally processed pineapple (Ananas comosus Merr.).

    PubMed

    Shashidhar, Ravindranath; Dhokane, Varsha S; Hajare, Sachin N; Sharma, Arun; Bandekar, Jayant R

    2007-04-01

    The microbiological quality of market samples of minimally processed (MP) pineapple was examined. The effectiveness of radiation treatment in eliminating Salmonella Typhimurium from laboratory-inoculated ready-to-eat pineapple slices was also studied. The microbiological quality of minimally processed pineapple samples from the Mumbai market was poor; 8.8% of the samples were positive for Salmonella. The D(10) value (the radiation dose required to reduce a bacterial population by 90%) for S. Typhimurium inoculated in pineapple was 0.242 kGy. Inoculated pack studies in minimally processed pineapple showed that treatment with a 2-kGy dose of gamma radiation could eliminate 5 log CFU/g of S. Typhimurium. The pathogen was not detected in radiation-processed samples for up to 12 d during storage at 4 and 10 degrees C. The processing of market samples with 1 and 2 kGy was effective in improving the microbiological quality of these products.
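
    The reported D(10) value lets the expected log reduction for a given dose be read off directly: each multiple of D(10) corresponds to one log10 reduction. The short sketch below is simple arithmetic based on the figures quoted in the abstract, not the authors' analysis.

    ```python
    # Log reduction predicted from a D10 value: each D10 of dose gives one log10 kill.
    d10_kgy = 0.242          # reported D10 for S. Typhimurium in pineapple (kGy)
    dose_kgy = 2.0           # applied gamma dose (kGy)

    log_reduction = dose_kgy / d10_kgy
    print(f"predicted reduction: {log_reduction:.1f} log10 CFU/g")   # ~8.3 log10

    # Dose needed to eliminate a given inoculum, e.g. 5 log CFU/g:
    required_dose = 5 * d10_kgy
    print(f"dose for a 5-log reduction: {required_dose:.2f} kGy")    # ~1.21 kGy
    ```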

  1. Erosion of an ancient mountain range, the Great Smoky Mountains, North Carolina and Tennessee

    USGS Publications Warehouse

    Matmon, A.; Bierman, P.R.; Larsen, J.; Southworth, S.; Pavich, M.; Finkel, R.; Caffee, M.

    2003-01-01

    Analysis of 10Be and 26Al in bedrock (n=10), colluvium (n=5 including grain size splits), and alluvial sediments (n=59 including grain size splits), coupled with field observations and GIS analysis, suggests that erosion rates in the Great Smoky Mountains are controlled by subsurface bedrock erosion and diffusive slope processes. The results indicate rapid alluvial transport, minimal alluvial storage, and suggest that most of the cosmogenic nuclide inventory in sediments is accumulated while they are eroding from bedrock and traveling down hill slopes. Spatially homogeneous erosion rates of 25-30 mm ky-1 are calculated throughout the Great Smoky Mountains using measured concentrations of cosmogenic 10Be and 26Al in quartz separated from alluvial sediment. 10Be and 26Al concentrations in sediments collected from headwater tributaries that have no upstream samples (n=18) are consistent with an average erosion rate of 28 ± 8 mm ky-1, similar to that of the outlet rivers (n=16, 24 ± 6 mm ky-1), which carry most of the sediment out of the mountain range. Grain-size-specific analysis of 6 alluvial sediment samples shows higher nuclide concentrations in smaller grain sizes than in larger ones. The difference in concentrations arises from the large elevation distribution of the source of the smaller grains compared with the narrow and relatively low source elevation of the large grains. Large sandstone clasts disaggregate into sand-size grains rapidly during weathering and downslope transport; thus, only clasts from the lower parts of slopes reach the streams. 26Al/10Be ratios do not suggest significant burial periods for our samples. However, alluvial samples have lower 26Al/10Be ratios than bedrock and colluvial samples, a trend consistent with a longer integrated cosmic ray exposure history that includes periods of burial during down-slope transport. The results confirm some of the basic ideas embedded in Davis' geographic cycle model, such as the reduction of relief through slope processes, and of Hack's dynamic equilibrium model, such as the similarity of erosion rates across different lithologies. Comparing cosmogenic nuclide data with other measured and calculated erosion rates for the Appalachians, we conclude that rates of erosion, integrated over varying time periods from decades to a hundred million years, are similar, the result of equilibrium between erosion and isostatic uplift in the southern Appalachian Mountains.
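
    For context, basin-averaged erosion rates of this kind are commonly derived from the steady-state relation between nuclide concentration, production rate, decay, and erosion. The sketch below uses that textbook relation with nominal, assumed values for production rate, concentration, attenuation length, and density; these are not the study's measurements.

    ```python
    # Steady-state erosion rate from a cosmogenic 10Be concentration:
    #   N = P / (lambda + rho * eps / LAMBDA)   =>   eps = (P / N - lambda) * LAMBDA / rho
    # All numbers below are nominal and for illustration only.
    P = 5.0           # 10Be production rate, atoms g^-1 yr^-1 (site-scaled, assumed)
    N = 1.2e5         # measured 10Be concentration, atoms g^-1 (assumed)
    lam = 5.0e-7      # 10Be decay constant, yr^-1
    LAMBDA = 160.0    # attenuation length, g cm^-2
    rho = 2.7         # rock density, g cm^-3

    eps_cm_per_yr = (P / N - lam) * LAMBDA / rho
    print(f"erosion rate: {eps_cm_per_yr * 1e4:.1f} mm ky-1")   # ~24 mm ky-1 with these inputs
    ```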

  2. The effect of cyclodextrin on both the agglomeration and the in vitro characteristics of drug loaded and targeted silica nanoparticles

    NASA Astrophysics Data System (ADS)

    Khattabi, Areen M.; Alqdeimat, Diala A.

    2018-02-01

    One of the problems in the use of nanoparticles (NPs) as carriers in drug delivery systems is their agglomeration, which mainly arises from their high surface energy. This results in the formation of NPs of different sizes, leading to differences in their distribution and bioavailability. Surface coating of NPs with certain compounds can be used to prevent or minimize this problem. In this study, the effect of cyclodextrin (CD) on the agglomeration state, and hence on the in vitro characteristics, of drug-loaded and targeted silica NPs was investigated. A sample of NPs was loaded with anticancer agents and then modified sequentially with a long polymer, carboxymethyl-β-cyclodextrin (CM-β-CD), and folic acid (FA). Another sample was modified similarly but without CD. The surface modification was characterized using Fourier transform infrared spectroscopy (FT-IR). The polydispersity (PD) was measured using dynamic light scattering (DLS) and was found to be smaller for CD-modified NPs. The in vitro drug release results showed that the release rates from both samples exhibited a similar pattern for the first 5 hours; however, the rate was faster from CD-modified NPs after 24 hours. The in vitro cell viability assay confirmed that CD-modified NPs were about 30% more toxic to HeLa cells. These findings suggest that CD has a clear effect in minimizing the agglomeration of such modified silica NPs, accelerating their drug release rate, and enhancing their targeting effect.

  3. Searching for microbial protein over-expression in a complex matrix using automated high throughput MS-based proteomics tools.

    PubMed

    Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob

    2013-03-10

    In the discovery of new enzymes genomic and cDNA expression libraries containing thousands of differential clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow for high throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix based on mass spectrometry techniques. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS(2) approach including MS(2) protein identification was developed for microorganisms with a complex protein secretome with a throughput of ∼2000 samples per week. TCA-induced protein precipitation enhanced by addition of bovine serum albumin is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detection of successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. The Shawmere anorthosite and OB-1 as lunar highland regolith simulants

    NASA Astrophysics Data System (ADS)

    Battler, Melissa M.; Spray, John G.

    2009-12-01

    Anorthosite constitutes a major component of the lunar crust and comprises an important, if not dominant, ingredient of the lunar regolith. Given the need for highland regolith simulants in preparation for lunar surface engineering activities, we have selected an appropriate terrestrial anorthosite and performed crushing trials to generate a particle size distribution comparable to Apollo 16 regolith sample 64 500. The root simulant is derived from a granoblastic facies of the Archean Shawmere Complex of the Kapuskasing Structural Zone of Ontario, Canada. The Shawmere exhibits minimal retrogression, is homogeneous and has an average plagioclase composition of An 78 (bytownite). Previous industrial interest in this calcic anorthosite has resulted in quarrying operations, which provide ease of extraction and access for potential large-scale simulant production. A derivative of the Shawmere involves the addition of olivine slag, crushed to yield a particle size distribution similar to that of the agglutinate and glass components of the Apollo sample. This simulant is referred to as OB-1. The Shawmere and OB-1 regolith simulants are lunar highland analogues, conceived to produce geotechnical properties of benefit to designing and testing drilling, excavation and construction equipment for future lunar surface operations.

  5. Process R&D for Particle Size Control of Molybdenum Oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Sujat; Dzwiniel, Trevor; Pupek, Krzysztof

    The primary goal of this study was to produce MoO3 powder with a particle size range of 50 to 200 μm for use in targets for production of the medical isotope 99Mo. Molybdenum metal powder is commercially produced by thermal reduction of oxides in a hydrogen atmosphere. The most common source material is MoO3, which is derived by the thermal decomposition of ammonium heptamolybdate (AHM). However, the particle size of the currently produced MoO3 is too small, resulting in Mo powder that is too fine to properly sinter and press into the desired target. In this study, effects of heating rate, heating temperature, gas type, gas flow rate, and isothermal heating were investigated for the decomposition of AHM. The main conclusions were as follows: lower heating rate (2-10°C/min) minimizes breakdown of aggregates, recrystallized samples with millimeter-sized aggregates are resistant to various heat treatments, extended isothermal heating at >600°C leads to significant sintering, and inert gas and high gas flow rate (up to 2000 ml/min) did not significantly affect particle size distribution or composition. In addition, attempts to recover AHM from an aqueous solution by several methods (spray drying, precipitation, and low temperature crystallization) failed to achieve the desired particle size range of 50 to 200 μm. Further studies are planned.

  6. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
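
    A run-size plan of this kind starts from the Sigma metric of the measurement procedure. The sketch below uses the common definition sigma = (TEa - |bias|) / CV with made-up assay figures; it illustrates only the metric itself, not Parvin's risk model or the C24-Ed4 run-size nomogram.

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma metric = (allowable total error - |bias|) / CV, all in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical assays; TEa, bias and CV values are illustrative only.
    assays = {
        "assay A": dict(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5),   # ~6 sigma
        "assay B": dict(tea_pct=10.0, bias_pct=1.5, cv_pct=1.7),   # ~5 sigma
        "assay C": dict(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0),   # ~4 sigma
    }

    for name, p in assays.items():
        s = sigma_metric(**p)
        # Higher sigma performance generally tolerates less frequent QC (larger runs);
        # the actual run size would come from a patient-risk model such as Parvin's.
        print(f"{name}: sigma = {s:.1f}")
    ```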

  7. A continuous high-throughput bioparticle sorter based on 3D traveling-wave dielectrophoresis.

    PubMed

    Cheng, I-Fang; Froude, Victoria E; Zhu, Yingxi; Chang, Hsueh-Chia; Chang, Hsien-Chang

    2009-11-21

    We present a high throughput (maximum flow rate approximately 10 microl/min or linear velocity approximately 3 mm/s) continuous bio-particle sorter based on 3D traveling-wave dielectrophoresis (twDEP) at an optimum AC frequency of 500 kHz. The high throughput sorting is achieved with a sustained twDEP particle force normal to the continuous through-flow, which is applied over the entire chip by a single 3D electrode array. The design allows continuous fractionation of micron-sized particles into different downstream sub-channels based on differences in their twDEP mobility on both sides of the cross-over. Conventional DEP is integrated upstream to focus the particles into a single levitated queue to allow twDEP sorting by mobility difference and to minimize sedimentation and field-induced lysis. The 3D electrode array design minimizes the offsetting effect of nDEP (negative DEP with particle force towards regions with weak fields) on twDEP such that both forces increase monotonically with voltage to further increase the throughput. Effective focusing and separation of red blood cells from debris-filled heterogeneous samples are demonstrated, as well as size-based separation of poly-dispersed liposome suspensions into two distinct bands at 2.3 to 4.6 microm and 1.5 to 2.7 microm, at the highest throughput recorded in hand-held chips of 6 microl/min.

  8. Sampling of temporal networks: Methods and biases

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
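
    A minimal sketch of one of the strategies discussed, uniform node sampling, applied to a synthetic temporal edge list; it compares a simple statistic (link activity) before and after sampling. The data and parameters are made up and stand in for real contact data.

    ```python
    import random
    from collections import Counter

    random.seed(0)

    # Synthetic temporal network: (time, u, v) contact events among 200 nodes.
    nodes = list(range(200))
    events = [(t, random.choice(nodes), random.choice(nodes)) for t in range(5000)]
    events = [(t, u, v) for t, u, v in events if u != v]

    def link_activity(evts):
        """Number of contact events per (undirected) link."""
        return Counter(frozenset((u, v)) for _, u, v in evts)

    # Uniform node sampling: keep only events whose endpoints are both sampled.
    sampled_nodes = set(random.sample(nodes, 100))
    sub_events = [(t, u, v) for t, u, v in events
                  if u in sampled_nodes and v in sampled_nodes]

    full, sub = link_activity(events), link_activity(sub_events)
    print(f"events kept: {len(sub_events)}/{len(events)}")
    print(f"mean link activity, full: {sum(full.values())/len(full):.2f}, "
          f"sampled: {sum(sub.values())/len(sub):.2f}")
    ```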

  9. Microsatellite genetic distances between oceanic populations of the humpback whale (Megaptera novaeangliae).

    PubMed

    Valsecchi, E; Palsbøll, P; Hale, P; Glockner-Ferrari, D; Ferrari, M; Clapham, P; Larsen, F; Mattila, D; Sears, R; Sigurjonsson, J; Brown, M; Corkeron, P; Amos, B

    1997-04-01

    Mitochondrial DNA haplotypes of humpback whales show strong segregation between oceanic populations and between feeding grounds within oceans, but this highly structured pattern does not exclude the possibility of extensive nuclear gene flow. Here we present allele frequency data for four microsatellite loci typed across samples from four major oceanic regions: the North Atlantic (two mitochondrially distinct populations), the North Pacific, and two widely separated Antarctic regions, East Australia and the Antarctic Peninsula. Allelic diversity is a little greater in the two Antarctic samples, probably indicating historically greater population sizes. Population subdivision was examined using a wide range of measures, including Fst, various alternative forms of Slatkin's Rst, Goldstein and colleagues' delta mu, and a Monte Carlo approximation to Fisher's exact test. The exact test revealed significant heterogeneity in all but one of the pairwise comparisons between geographically adjacent populations, including the comparison between the two North Atlantic populations, suggesting that gene flow between oceans is minimal and that dispersal patterns may sometimes be restricted even in the absence of obvious barriers, such as land masses, warm water belts, and antitropical migration behavior. The only comparison where heterogeneity was not detected was the one between the two Antarctic population samples. It is unclear whether failure to find a difference here reflects gene flow between the regions or merely lack of statistical power arising from the small size of the Antarctic Peninsula sample. Our comparison between measures of population subdivision revealed major discrepancies between methods, with little agreement about which populations were most and least separated. We suggest that unbiased Rst (URst, see Goodman 1995) is currently the most reliable statistic, probably because, unlike the other methods, it allows for unequal sample sizes. However, in view of the fact that these alternative measures often contradict one another, we urge caution in the use of microsatellite data to quantify genetic distance.
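
    As background to the population-subdivision measures compared here, the sketch below computes a basic Wright/Nei-style Fst from single-locus allele frequencies of two hypothetical populations; the study itself uses multilocus estimators (Fst, Rst variants, delta mu, exact tests) on the real microsatellite data.

    ```python
    import numpy as np

    # Hypothetical allele frequencies at one microsatellite locus in two populations.
    p1 = np.array([0.40, 0.30, 0.20, 0.10])
    p2 = np.array([0.25, 0.35, 0.25, 0.15])

    # Expected heterozygosity within each population and in the pooled population.
    hs = np.mean([1 - np.sum(p1**2), 1 - np.sum(p2**2)])
    p_bar = (p1 + p2) / 2
    ht = 1 - np.sum(p_bar**2)

    fst = (ht - hs) / ht
    print(f"Hs = {hs:.3f}, Ht = {ht:.3f}, Fst = {fst:.3f}")
    ```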

  10. Size really does matter: effects of filter fractionation on microbial community structure in a model oxygen minimum zone.

    NASA Astrophysics Data System (ADS)

    Torres Beltran, M.

    2016-02-01

    The Scientific Committee on Oceanic Research (SCOR) Working Group 144 workshop "Microbial Community Responses to Ocean Deoxygenation", held in Vancouver, British Columbia in July 2014, had the primary objective of kick-starting the establishment of a minimal core of technologies, techniques, and standard operating procedures (SOPs) to enable compatible process rate and multi-molecular data (DNA, RNA and protein) collection in marine oxygen minimum zones (OMZs) and other oxygen-starved waters. Experimental activities conducted in Saanich Inlet, a seasonally anoxic fjord on Vancouver Island, British Columbia, were designed to compare and cross-calibrate in situ sampling devices (McLane PPS system) with conventional bottle sampling and incubation methods. Bottle effects on microbial community composition and activity were tested using different filter combinations and sample volumes to compare PPS/IPS (0.4 µm) versus Sterivex (0.22 µm) filtration methods with and without prefilters (2.7 µm). The resulting biomass was processed for small subunit ribosomal RNA gene sequencing across all three domains of life on the 454 platform, followed by downstream community structure analyses. Significant community shifts occurred within and between filter fractions for in situ versus on-ship processed samples. For instance, the relative abundance of several bacterial phyla, including Bacteroidetes, Delta- and Gammaproteobacteria, decreased five-fold on-ship when compared to in situ filtration. Experimental mesocosms showed community structure and activity similar to in situ filtered samples, indicating the need to cross-calibrate incubations to constrain bottle effects. In addition, alpha and beta diversity changed significantly as a function of filter size and volume, as did the operational taxonomic units identified using indicator species analysis for each filter size. Our results provide statistical support that microbial community structure is systematically biased by filter fraction methods and highlight the need for establishing compatible techniques among researchers that facilitate comparative and reproducible science for the whole community.

  11. Pressure-induced transition in the grain boundary of diamond

    NASA Astrophysics Data System (ADS)

    Chen, J.; Tang, L.; Ma, C.; Fan, D.; Yang, B.; Chu, Q.; Yang, W.

    2017-12-01

    The equation of state of diamond powder with different average grain sizes was investigated using in situ synchrotron x-ray diffraction and a diamond anvil cell (DAC). Compression curves were compared for two samples with average grain sizes of 50 nm and 100 nm. The two specimens were pre-pressed into pellets and loaded separately in the sample pressure chamber of the DAC to minimize possible systematic differences between the two samples. Neon gas was used as the pressure medium and ruby spheres as the pressure calibrant. Experiments were conducted at room temperature and high pressures up to 50 GPa. Fitting the compression data over the full pressure range to the third-order Birch-Murnaghan equation of state yields a bulk modulus (K) and pressure derivative (K') of 392 GPa and 5.3 for the 50 nm sample and 398 GPa and 4.5 for the 100 nm sample, respectively. Using a simplified core-shell grain model, this result indicates that the grain boundary has an effective bulk modulus of 54 GPa. This value is similar to that observed for carbon nanotubes [1], validating recent theoretical diamond surface modeling [2]. Differential analysis of the compression curves demonstrates a clear relative compressibility change at a pressure of about 20 GPa. When the compression data below and above this pressure are fitted separately, the effect of grain size on bulk modulus reverses in the pressure range above 20 GPa. This observation indicates a possible transition of the grain boundary structure, likely from sp2 hybridization at the surface [2] towards an sp3-like orbital structure that behaves like the inner crystal. [1] Jie Tang, Lu-Chang Qin, Taizo Sasaki, Masako Yudasaka, Akiyuki Matsushita, and Sumio Iijima, Compressibility and Polygonization of Single-Walled Carbon Nanotubes under Hydrostatic Pressure, Physical Review Letters, 85(9), 1187-1198, 2000. [2] Shaohua Lu, Yanchao Wang, Hanyu Liu, Mao-sheng Miao, and Yanming Ma, Self-assembled ultrathin nanotubes on diamond (100) surface, Nature Communications, DOI: 10.1038/ncomms4666, 2014
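
    The third-order Birch-Murnaghan fit used above can be reproduced in outline as follows; the pressure-volume data here are synthetic stand-ins, not the measured diffraction data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def birch_murnaghan_3(V, V0, K0, K0p):
        """Third-order Birch-Murnaghan equation of state, P(V) in the units of K0."""
        eta = (V0 / V) ** (2.0 / 3.0)
        return 1.5 * K0 * (eta**3.5 - eta**2.5) * (1.0 + 0.75 * (K0p - 4.0) * (eta - 1.0))

    # Synthetic compression data (relative volumes, pressures in GPa) roughly mimicking
    # a stiff diamond-like solid; these numbers are illustrative, not the measured data.
    V0_true, K0_true, K0p_true = 1.0, 395.0, 5.0
    V = np.linspace(1.0, 0.90, 12)
    rng = np.random.default_rng(3)
    P = birch_murnaghan_3(V, V0_true, K0_true, K0p_true) + rng.normal(0, 0.3, V.size)

    popt, pcov = curve_fit(birch_murnaghan_3, V, P, p0=[1.0, 400.0, 4.0])
    perr = np.sqrt(np.diag(pcov))
    print(f"V0 = {popt[0]:.4f}, K0 = {popt[1]:.0f} +/- {perr[1]:.0f} GPa, "
          f"K0' = {popt[2]:.1f} +/- {perr[2]:.1f}")
    ```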

  12. Highly sensitive molecular diagnosis of prostate cancer using surplus material washed off from biopsy needles

    PubMed Central

    Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L

    2011-01-01

    Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
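
    A minimal sketch of the classification step, linear discriminant analysis with cross-validated sensitivity and specificity, using synthetic six-gene expression values in place of the study's qRT-PCR data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    # Synthetic qRT-PCR-style data: 6 genes x 53 "biopsy washes", with tumour samples
    # shifted in expression for a subset of genes. Purely illustrative values.
    n_benign, n_tumour, n_genes = 27, 26, 6
    X_benign = rng.normal(0.0, 1.0, size=(n_benign, n_genes))
    shift = np.array([1.5, -1.0, 0.8, 0.0, 1.2, -0.7])
    X_tumour = rng.normal(0.0, 1.0, size=(n_tumour, n_genes)) + shift
    X = np.vstack([X_benign, X_tumour])
    y = np.array([0] * n_benign + [1] * n_tumour)   # 0 = benign, 1 = tumoural

    # Linear discriminant analysis with cross-validated predictions.
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"accuracy = {(tp + tn) / len(y):.2f}, "
          f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
    ```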

  13. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
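
    The published scoring function is not reproduced here; the sketch below only illustrates the single-order-statistics idea it builds on: values transformed by a correct trial CDF behave like sorted uniforms, whose k-th order statistic follows Beta(k, n+1-k), so a quasi-log-likelihood over these order statistics drops when the trial CDF is wrong.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    data = rng.normal(loc=2.0, scale=1.5, size=500)

    def order_statistic_score(data, trial_cdf):
        """Mean log-density of sorted u-values under their Beta order-statistic laws.

        Under a correct trial CDF the transformed, sorted values u_(k) follow
        Beta(k, n + 1 - k); atypical fluctuations lower this quasi-log-likelihood.
        """
        u = np.sort(trial_cdf(data))
        n = len(u)
        k = np.arange(1, n + 1)
        return np.mean(stats.beta.logpdf(u, k, n + 1 - k))

    good = order_statistic_score(data, lambda x: stats.norm.cdf(x, 2.0, 1.5))
    bad = order_statistic_score(data, lambda x: stats.norm.cdf(x, 0.0, 1.0))
    print(f"score, correct trial CDF: {good:.2f};  wrong trial CDF: {bad:.2f}")
    ```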

  14. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  15. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling.

    PubMed

    Olives, Casey; Pagano, Marcello

    2013-02-01

    Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF's State of the World's Children in 1968-1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968-1989 and 2008) with minimal reductions in sensitivity and negative predictive value. LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance.
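
    For readers unfamiliar with LQAS, the sketch below computes the classical producer and consumer risks for a binomial decision rule and then weights misclassification by a prior on coverage, which is the flavour of the Bayesian extension; the design parameters and the Beta prior are illustrative and are not the paper's B-LQAS formulation.

    ```python
    import numpy as np
    from scipy.stats import binom, beta

    n, d = 19, 13                   # sample size and decision rule: accept if successes >= d
    p_upper, p_lower = 0.80, 0.50   # "acceptable" and "unacceptable" coverage thresholds

    # Classical risks: alpha = P(reject | coverage = p_upper), consumer = P(accept | p_lower).
    alpha = binom.cdf(d - 1, n, p_upper)
    consumer_risk = 1 - binom.cdf(d - 1, n, p_lower)
    print(f"producer risk = {alpha:.3f}, consumer risk = {consumer_risk:.3f}")

    # A Bayesian-flavoured check: average misclassification over a prior on coverage.
    prior = beta(8, 4)               # assumed prior belief about coverage (illustrative)
    p_grid = np.linspace(0.01, 0.99, 981)
    w = prior.pdf(p_grid)
    w /= w.sum()
    accept_prob = 1 - binom.cdf(d - 1, n, p_grid)
    # Misclassification: accept when p < p_lower, or reject when p > p_upper.
    expected_error = np.sum(w * np.where(p_grid < p_lower, accept_prob,
                              np.where(p_grid > p_upper, 1 - accept_prob, 0.0)))
    print(f"prior-weighted misclassification probability = {expected_error:.3f}")
    ```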

  16. Static Grain Growth in Contact Metamorphic Calcite: A Cathodoluminescence Study.

    NASA Astrophysics Data System (ADS)

    Vogt, B.; Heilbronner, R.; Herwegh, M.; Ramseyer, K.

    2009-04-01

    In the Adamello contact aureole, monomineralic Mesozoic limestones were investigated in terms of grain size evolution and compared to the results of numerical modeling performed with the software Elle. The sampled area shows no deformation and therefore represents an appropriate natural laboratory for the study of static grain growth (Herwegh & Berger, 2003). For this purpose, samples were collected at different distances from the contact with the pluton, covering a temperature range from 270 to 630°C. In these marbles, the grain sizes increase with temperature from 5 µm to about 1 cm as one approaches the contact (Herwegh & Berger, 2003). In some samples, photomicrographs show domains of variable cathodoluminescence (CL) intensities, which are interpreted to represent growth zonations. Microstructures show grains that contain cores and, in some samples, even several growth stages. The cores are usually not centered and the zones not concentric. They may be in contact with grain boundaries. These zonation patterns are consistent within a given aggregate but differ among the samples even if they come from the same location. Relative CL intensities depend on the Mn/Fe ratio. We assume that changes in trace amounts of Mn/Fe must have occurred during the grain size evolution, preserving local geochemical trends and their variations with time. Changes in Mn/Fe ratios can be explained either by (a) locally derived fluids (e.g., hydration reactions of sheet-silicate-rich marbles in the vicinity) or (b) by the infiltration of the calcite aggregates by externally derived (magmatic?) fluids. At the present stage, we prefer a regional change in fluid composition (b) because the growth zonations only occur at distances of 750-1250 m from the pluton contact (350-450°C). Closer to the contact, neither zonations nor cores were found. At larger distances, CL intensities differ from grain to grain, revealing diagenetic CL patterns that were incompletely recrystallized by grain growth. The role of infiltration of magmatic fluids is also manifest in the vicinity of dikes, where intense zonation patterns are prominent in the marbles. The software Elle was developed to simulate microstructural evolution in rocks. The numerical simulation, entitled "Grain boundary sweeping", was performed by M. Jessell and is available at http://www.materialsknowledge.org/elle. It displays the grain size evolution and the development of growth zonations during grain boundary migration of a 2D foam structure. This simulation was chosen because the driving force is the minimization of isotropic surface energies. It is compared here to the natural microstructures. At the last stage of the simulation the average grain and core sizes have increased. All grains, even the smallest, show growth zonations. Grains can be divided into two groups: (a) initially larger grains, increasing their grain size and maintaining their core size, and (b) initially smaller grains with decreasing grain and core size. Group (a) grains show large areas swept by grain boundaries in the direction of small grains. Grain boundaries between large grains move more slowly. Their cores do not touch any grain boundaries. Cores of group (b) grains are in contact with the grain boundary network and are on the way to being consumed. In the numerical model and in the natural example similar features can be observed: the cores are not necessarily centered, the zonations are not necessarily concentric, and some of the cores touch the grain boundary network.
In the simulation, grain boundary migration velocity between large grains is smaller than between a large and a small grain. From this we would predict that, given enough time, a well-sorted grain size distribution of increased grain size could be generated. However, since many small grains occur, we infer that this equilibrium has not been reached. Analytical results from the natural samples analyzed so far indicate a relatively well-sorted grain size distribution, suggesting a more mature state of static grain growth. In comparison to the simulation, grain and core boundaries in the marbles are not always straight. For lobate grain boundaries, the surface area has not been minimized with respect to the grain size. An explanation for this might be grain boundary pinning or a local dynamic overprint. Some cores and growth zones in the investigated calcites show a continuous change in luminescence. This is interpreted to be an effect of late diffusion within the grain and/or a continuous change of fluid composition and supply. The absence of zonation in samples close to the contact might be explained by fast grain growth due to high temperatures and/or fast fluid transport. Possibly, this is combined with an enhanced component of volume diffusion. Thus, concentration variations of Mn/Fe are diminished and not visible in the form of growth zonation. Herwegh M, Berger A (2003) Differences in grain growth of calcite: a field-based modeling approach. Contrib. Mineral. Petrol. 145: 600-611
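
    The Elle "Grain boundary sweeping" model itself is not reproduced here. As a rough, hedged illustration of the class of simulation being described (grain coarsening driven by minimization of isotropic boundary energy), the following minimal Monte Carlo Potts-model sketch in Python coarsens a 2D grain structure; the lattice size, number of grain labels and sweep count are arbitrary choices, not parameters of the Elle run.

      import numpy as np

      rng = np.random.default_rng(0)
      N, Q, sweeps = 48, 32, 100        # lattice size, number of grain labels, Monte Carlo sweeps
      grid = rng.integers(0, Q, size=(N, N))

      def neighbor_labels(i, j):
          # 4-connected neighborhood with periodic boundaries
          return [grid[(i - 1) % N, j], grid[(i + 1) % N, j],
                  grid[i, (j - 1) % N], grid[i, (j + 1) % N]]

      def boundary_energy(label, nbrs):
          # isotropic boundary energy: one unit per unlike neighbor
          return sum(1 for n in nbrs if n != label)

      for _ in range(sweeps):
          for _ in range(N * N):
              i, j = rng.integers(0, N, size=2)
              nbrs = neighbor_labels(i, j)
              trial = nbrs[rng.integers(0, len(nbrs))]      # try adopting a neighbor's label
              dE = boundary_energy(trial, nbrs) - boundary_energy(grid[i, j], nbrs)
              if dE <= 0:                                   # accept only energy-lowering or neutral flips
                  grid[i, j] = trial

      print("grains remaining:", len(np.unique(grid)))      # fewer, larger grains as boundaries sweep

    Marking the lattice sites that never change label over the run identifies an analogue of the "cores" discussed above: regions that migrating boundaries have not yet swept.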

  17. Rapid sampling of local minima in protein energy surface and effective reduction through a multi-objective filter

    PubMed Central

    2013-01-01

    Background Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. Results and conclusions We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
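
    As a concrete sketch of the Basin Hopping loop described above (perturbation followed by local minimization, with the perturbation magnitude as the key control), the following Python example uses SciPy's generic basinhopping routine on a toy rugged two-dimensional function standing in for a protein energy surface; the function, step size and iteration count are illustrative assumptions, not the protocol from the paper.

      import numpy as np
      from scipy.optimize import basinhopping

      def rugged(x):
          # toy rugged surface: smooth bowl plus oscillations creating many local minima
          return np.sum(x ** 2) + 2.0 * np.sum(np.sin(5.0 * x) ** 2)

      minima = []
      def record(x, f, accepted):
          # callback after each hop: collect every local minimum visited (the sampled ensemble)
          minima.append((f, x.copy()))

      result = basinhopping(rugged, x0=np.array([3.0, -2.0]),
                            niter=200,
                            stepsize=0.5,                         # perturbation magnitude controls hop distance
                            minimizer_kwargs={"method": "L-BFGS-B"},
                            callback=record, seed=1)

      print("lowest minimum found:", round(result.fun, 4))
      print("local minima collected:", len(minima))

    Shrinking stepsize keeps consecutive minima close together, mirroring the abstract's observation that the perturbation magnitude steers how far the exploration moves between hops.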

  18. Surface Acoustic Wave Nebulisation Mass Spectrometry for the Fast and Highly Sensitive Characterisation of Synthetic Dyes in Textile Samples

    NASA Astrophysics Data System (ADS)

    Astefanei, Alina; van Bommel, Maarten; Corthals, Garry L.

    2017-10-01

    Surface acoustic wave nebulisation (SAWN) mass spectrometry (MS) is a method to generate gaseous ions compatible with direct MS of minute samples at femtomole sensitivity. To perform SAWN, acoustic waves are propagated through a LiNbO3 sampling chip and conducted to the liquid sample, which ultimately leads to the generation of a fine mist containing droplets of nanometre to micrometre diameter. Through fission and evaporation, the droplets undergo a phase change from liquid to gaseous analyte ions in a non-destructive manner. We have developed SAWN technology for the characterisation of organic colourants in textiles. It generates electrospray-ionisation-like ions without altering the analytes during ionisation, as evidenced by their unmodified chemical structure. The sample size is decreased tenfold to 1000-fold compared with currently used liquid chromatography-MS methods, with equal or better sensitivity. This work underscores SAWN-MS as an ideal tool for molecular analysis of art objects as it is non-destructive, is rapid, involves minimally invasive sampling and is more sensitive than current MS-based methods.

  19. Friction Stir Processing of Stainless Steel for Ascertaining Its Superlative Performance in Bioimplant Applications.

    PubMed

    Perumal, G; Ayyagari, A; Chakrabarti, A; Kannan, D; Pati, S; Grewal, H S; Mukherjee, S; Singh, S; Arora, H S

    2017-10-25

    Substrate-cell interactions for a bioimplant are driven by the substrate's surface characteristics. In addition, the performance of an implant and its resistance to degradation are primarily governed by its surface properties. A bioimplant typically degrades by wear and corrosion in the physiological environment, resulting in metallosis. Surface engineering strategies for limiting degradation of implants and enhancing their performance may reduce or eliminate the need for implant removal surgeries and the associated cost. In the current study, we tailored the surface properties of stainless steel using submerged friction stir processing (FSP), a severe plastic deformation technique. FSP resulted in significant microstructural refinement, from a 22 μm grain size for the as-received alloy to a 0.8 μm grain size for the processed sample, with a nearly 1.5-fold increase in hardness. The wear and corrosion behavior of the processed alloy was evaluated in simulated body fluid. The processed sample demonstrated remarkable improvement in both wear and corrosion resistance, which is explained by surface strengthening and the formation of a highly stable passive layer. The methylthiazol tetrazolium assay demonstrated that the processed sample is better at supporting cell attachment and proliferation, with minimal toxicity and hemolysis. The athrombogenic characteristic of the as-received and processed samples was evaluated by fibrinogen adsorption and platelet adhesion via the enzyme-linked immunosorbent assay and lactate dehydrogenase assay, respectively. The processed sample showed less platelet and fibrinogen adhesion compared with the as-received alloy, signifying its high thromboresistance. The current study suggests friction stir processing to be a versatile toolbox for enhancing the performance and reliability of currently used bioimplant materials.

  20. Effective population size and genetic conservation criteria for bull trout

    Treesearch

    Bruce E. Rieman; F. W. Allendorf

    2001-01-01

    Effective population size (Ne) is an important concept in the management of threatened species like bull trout Salvelinus confluentus. General guidelines suggest that effective population sizes of 50 or 500 are essential to minimize inbreeding effects or maintain adaptive genetic variation, respectively....

  1. Method of Minimizing Size of Heat Rejection Systems for Thermoelectric Coolers to Cool Detectors in Space

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2014-01-01

    A thermal design concept of attaching the thermoelectric cooler (TEC) hot side directly to the radiator and maximizing the number of TECs to cool multiple detectors in space is presented. It minimizes the temperature drop between the TECs and the radiator. An ethane constant conductance heat pipe transfers heat from the detectors to a TEC cold plate, to which the cold side of the TECs is attached. This thermal design concept minimizes the size of TEC heat rejection systems. Hence it reduces the problem of accommodating the radiator within a required envelope. It also reduces the mass of the TEC heat rejection system. Thermal testing of a demonstration unit in vacuum verified the thermal performance of the thermal design concept.

  2. Hierarchical complexity and the size limits of life.

    PubMed

    Heim, Noel A; Payne, Jonathan L; Finnegan, Seth; Knope, Matthew L; Kowalewski, Michał; Lyons, S Kathleen; McShea, Daniel W; Novack-Gottshall, Philip M; Smith, Felisa A; Wang, Steve C

    2017-06-28

    Over the past 3.8 billion years, the maximum size of life has increased by approximately 18 orders of magnitude. Much of this increase is associated with two major evolutionary innovations: the evolution of eukaryotes from prokaryotic cells approximately 1.9 billion years ago (Ga), and multicellular life diversifying from unicellular ancestors approximately 0.6 Ga. However, the quantitative relationship between organismal size and structural complexity remains poorly documented. We assessed this relationship using a comprehensive dataset that includes organismal size and level of biological complexity for 11 172 extant genera. We find that the distributions of sizes within complexity levels are unimodal, whereas the aggregate distribution is multimodal. Moreover, both the mean size and the range of sizes occupied increase with each additional level of complexity. Increases in size range are non-symmetric: the maximum organismal size increases more than the minimum. The majority of the observed increase in organismal size over the history of life on the Earth is accounted for by two discrete jumps in complexity rather than evolutionary trends within levels of complexity. Our results provide quantitative support for an evolutionary expansion away from a minimal size constraint and suggest a fundamental rescaling of the constraints on minimal and maximal size as biological complexity increases. © 2017 The Author(s).

  3. Minimally processed vegetable salads: microbial quality evaluation.

    PubMed

    Fröder, Hans; Martins, Cecília Geraldes; De Souza, Katia Leani Oliveira; Landgraf, Mariza; Franco, Bernadette D G M; Destro, Maria Teresa

    2007-05-01

    The increasing demand for fresh fruits and vegetables and for convenience foods is causing an expansion of the market share for minimally processed vegetables. Among the more common pathogenic microorganisms that can be transmitted to humans by these products are Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella. The aim of this study was to evaluate the microbial quality of a selection of minimally processed vegetables. A total of 181 samples of minimally processed leafy salads were collected from retailers in the city of Sao Paulo, Brazil. Counts of total coliforms, fecal coliforms, Enterobacteriaceae, psychrotrophic microorganisms, and Salmonella were conducted for 133 samples. L. monocytogenes was assessed in 181 samples using the BAX System and by plating the enrichment broth onto Palcam and Oxford agars. Suspected Listeria colonies were subjected to classical biochemical tests. Populations of psychrotrophic microorganisms >10⁶ CFU/g were found in 51% of the 133 samples, and Enterobacteriaceae populations between 10⁵ and 10⁶ CFU/g were found in 42% of the samples. Fecal coliform concentrations higher than 10² CFU/g (Brazilian standard) were found in 97 (73%) of the samples, and Salmonella was detected in 4 (3%) of the samples. Two of the Salmonella-positive samples had fecal coliform concentrations below 10² CFU/g. L. monocytogenes was detected in only 1 (0.6%) of the 181 samples examined. This positive sample was simultaneously detected by both methods. The other Listeria species identified by plating were L. welshimeri (one sample of curly lettuce) and L. innocua (2 samples of watercress). The results indicate that minimally processed vegetables had poor microbiological quality, and these products could be a vehicle for pathogens such as Salmonella and L. monocytogenes.

  4. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to survey this kind of population because the response rate is low and its members tend not to respond honestly when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the influence of the initial seeds on the chain-referral sampling of RDS tends to diminish as the sample gets bigger, and the sample becomes stabilized as the waves progress. Therefore, it shows that the final sample information can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well.
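
    The Markov-chain reasoning behind RDS (seed independence once enough waves have passed) can be illustrated with a deliberately simplified simulation: a single-coupon referral chain modeled as a random walk on a synthetic social network. Real RDS uses multiple coupons, recruitment without replacement and degree-weighted estimators, so the Python sketch below, with all of its parameters, is only an assumption-laden illustration.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.watts_strogatz_graph(n=2000, k=10, p=0.1, seed=0)   # stand-in social network
      trait = {v: random.random() < 0.3 for v in G}              # hidden trait, true prevalence 0.30

      def referral_chain(seed_node, waves):
          # simplified chain referral: each respondent refers one random network neighbor
          sample, current = [], seed_node
          for _ in range(waves):
              current = random.choice(list(G.neighbors(current)))
              sample.append(current)
          return sample

      for seed_node in (0, 500, 1500):                           # three very different convenience seeds
          chain = referral_chain(seed_node, waves=600)
          late_waves = chain[200:]                               # discard early waves before stabilization
          prevalence = sum(trait[v] for v in late_waves) / len(late_waves)
          print("seed", seed_node, "-> estimated prevalence", round(prevalence, 3))

    The three estimates land near the true prevalence despite starting from unrelated seeds, which is the stabilization property the abstract describes.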

  5. ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng

    In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for the online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.
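
    The APS optimizer itself is machine-specific; as a hedged sketch of the general approach (a genetic algorithm turning 59 skew-quadrupole knobs to shrink an objective), the Python example below minimizes a smooth surrogate function standing in for the measured vertical beam size. The population size, mutation schedule and the surrogate itself are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n_knobs = 59                                     # one knob per skew-quadrupole family
      target = rng.normal(0.0, 1.0, n_knobs)           # hypothetical optimal currents

      def beam_size(currents):
          # surrogate objective; in the real ring this is a beam-size measurement, not a formula
          return float(np.sum((currents - target) ** 2)) + 1.0

      pop = rng.normal(0.0, 0.1, (40, n_knobs))        # start near zero-current skew quadrupoles
      for generation in range(1000):
          sigma = 0.3 * 0.995 ** generation            # slowly shrinking mutation step
          fitness = np.array([beam_size(p) for p in pop])
          parents = pop[np.argsort(fitness)[:10]]      # truncation selection: keep the 10 best settings
          children = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(0, len(parents), 2)]
              mask = rng.random(n_knobs) < 0.5         # uniform crossover between two parents
              children.append(np.where(mask, a, b) + rng.normal(0.0, sigma, n_knobs))
          pop = np.vstack([parents, children])

      print("best surrogate beam size:", round(min(beam_size(p) for p in pop), 4))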

  6. Flow through PCR module of BioBriefcase

    NASA Astrophysics Data System (ADS)

    Arroyo, E.; Wheeler, E. K.; Shediac, R.; Hindson, B.; Nasarabadi, S.; Vrankovich, G.; Bell, P.; Bailey, C.; Sheppod, T.; Christian, A. T.

    2005-11-01

    The BioBriefcase is an integrated briefcase-sized aerosol collection and analysis system for autonomous monitoring of the environment, which is currently being jointly developed by Lawrence Livermore and Sandia National Laboratories. This poster presents results from the polymerase chain reaction (PCR) module of the system. The DNA must be purified after exiting the aerosol collector to prevent inhibition of the enzymatic reaction. Traditional solid-phase extraction results in a large loss of sample. In this flow-through system, we perform sample purification, concentration and amplification in one reactor, which minimizes the loss of material. The sample from the aerosol collector is mixed with a denaturation solution prior to flowing through a capillary packed with silica beads. The DNA adheres to the silica beads allowing the environmental contaminants to be flushed to waste while effectively concentrating the DNA on the silica matrix. The adhered DNA is amplified while on the surface of the silica beads, resulting in a lower limit of detection than an equivalent eluted sample. Thus, this system is beneficial since more DNA is available for amplification, less reagents are utilized, and contamination risks are reduced.

  7. Comparison of quartz crystallographic preferred orientations identified with optical fabric analysis, electron backscatter and neutron diffraction techniques.

    PubMed

    Hunter, N J R; Wilson, C J L; Luzin, V

    2017-02-01

    Three techniques are used to measure crystallographic preferred orientations (CPO) in a naturally deformed quartz mylonite: transmitted light cross-polarized microscopy using an automated fabric analyser, electron backscatter diffraction (EBSD) and neutron diffraction. Pole figure densities attributable to crystal-plastic deformation are variably recognizable across the techniques, particularly between fabric analyser and diffraction instruments. Although fabric analyser techniques offer rapid acquisition with minimal sample preparation, difficulties may exist when gathering orientation data parallel with the incident beam. Overall, we have found that EBSD and fabric analyser techniques are best suited for studying CPO distributions at the grain scale, where individual orientations can be linked to their source grain or nearest neighbours. Neutron diffraction serves as the best qualitative and quantitative means of estimating the bulk CPO, due to its three-dimensional data acquisition, greater sample area coverage, and larger sample size. However, a number of sampling methods can be applied to FA and EBSD data to make similar approximations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  8. The utility of point count surveys to predict wildlife interactions with wind energy facilities: An example focused on golden eagles

    USGS Publications Warehouse

    Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd

    2018-01-01

    Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km²) and the sampling type (random, systematic or stratified) and was greatest on 90 km² plots. However, use of random sampling resulted in the lowest error due to sampling within intermediate-sized plots. In addition, sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner in which point counts are laid out on the landscape.
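
    The abstract describes the "error due to sampling" metric only qualitatively; the Python sketch below gives one plausible, simplified reading of it: scale plot-level use up by the surveyed area fraction and compare with true footprint use. Plot radius, plot count, footprint size and the clustered use pattern are all assumptions made for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      side = np.sqrt(90.0)                              # 90 km² square footprint (one size from the study)

      # hypothetical 15-min telemetry fixes, clustered to mimic uneven eagle use of the footprint
      hotspots = rng.uniform(0, side, size=(5, 2))
      fixes = hotspots[rng.integers(0, 5, 4000)] + rng.normal(0, 1.5, (4000, 2))
      in_footprint = np.all((fixes >= 0) & (fixes <= side), axis=1)

      def error_due_to_sampling(plot_centers, radius=0.8):
          # proportion of fixes in the plots, scaled by the surveyed area fraction, versus footprint use
          d = np.linalg.norm(fixes[:, None, :] - plot_centers[None, :, :], axis=2)
          in_plots = np.any(d <= radius, axis=1) & in_footprint
          surveyed_fraction = len(plot_centers) * np.pi * radius ** 2 / side ** 2
          estimated_use = in_plots.mean() / surveyed_fraction
          actual_use = in_footprint.mean()
          return abs(estimated_use - actual_use) / actual_use

      random_plots = rng.uniform(1.0, side - 1.0, size=(12, 2))
      xs = np.linspace(1.5, side - 1.5, 4)
      systematic_plots = np.array([(x, y) for x in xs for y in xs[:3]])   # 12 plots on a grid
      print("random design:    ", round(error_due_to_sampling(random_plots), 3))
      print("systematic design:", round(error_due_to_sampling(systematic_plots), 3))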

  9. Integrating scales of seagrass monitoring to meet conservation needs

    USGS Publications Warehouse

    Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.

    2012-01-01

    We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.

  10. Current trends in treatment of obesity in Karachi and possibilities of cost minimization.

    PubMed

    Hussain, Mirza Izhar; Naqvi, Baqir Shyum

    2015-03-01

    Our study identifies drug usage trends in overweight and obese patients without any compelling indications in Karachi, looks for deviations of current practice from evidence-based antihypertensive therapeutic guidelines, and identifies not only cost-minimization opportunities but also communication strategies to improve patients' awareness and compliance so as to achieve the therapeutic goal. In the present study, two sets of surveys were used. Randomized stratified independent surveys were conducted among hospital doctors and family physicians (general practitioners), using pretested questionnaires. The sample size was 100. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS). Opportunities for cost minimization were also analyzed. On the basis of the doctors' feedback, preference is given to non-pharmacologic management of obesity. Mass media campaigns were recommended to increase patients' awareness, and patient education, along with strengthening family support systems, was recommended to improve patients' compliance with doctors' advice. Local therapeutic guidelines for weight reduction were not found. Feedback showed that global therapeutic guidelines were followed by the doctors practicing in the community and hospitals in Karachi. However, high-price branded drugs were used instead of low-priced generic therapeutic equivalents. Patient education is required for better awareness and improved compliance. The doctors were found to prefer brand leaders instead of low-cost options. This trend increases the cost of therapy by 0.59 to 4.17 times. Therefore, there are great opportunities for cost minimization by using evidence-based, clinically effective and safe medicines.

  11. Signal or noise? Separating grain size-dependent Nd isotope variability from provenance shifts in Indus delta sediments, Pakistan

    NASA Astrophysics Data System (ADS)

    Jonell, T. N.; Li, Y.; Blusztajn, J.; Giosan, L.; Clift, P. D.

    2017-12-01

    Rare earth element (REE) radioisotope systems, such as neodymium (Nd), have been traditionally used as powerful tracers of source provenance, chemical weathering intensity, and sedimentary processes over geologic timescales. More recently, the effects of physical fractionation (hydraulic sorting) of sediments during transport have called into question the utility of Nd isotopes as a provenance tool. Is source terrane Nd provenance resolvable if sediment transport strongly induces noise? Can grain-size sorting effects be quantified? This study works to address such questions by utilizing grain size analysis, trace element geochemistry, and Nd isotope geochemistry of bulk and grain-size fractions (<63μm, 63-125 μm, 125-250 μm) from the Indus delta of Pakistan. Here we evaluate how grain size effects drive Nd isotope variability and further resolve the total uncertainties associated with Nd isotope compositions of bulk sediments. Results from the Indus delta indicate bulk sediment ɛNd compositions are most similar to the <63 µm fraction as a result of strong mineralogical control on bulk compositions by silt- to clay-sized monazite and/or allanite. Replicate analyses determine that the best reproducibility (± 0.15 ɛNd points) is observed in the 125-250 µm fraction. The bulk and finest fractions display the worst reproducibility (±0.3 ɛNd points). Standard deviations (2σ) indicate that bulk sediment uncertainties are no more than ±1.0 ɛNd points. This argues that excursions of ≥1.0 ɛNd points in any bulk Indus delta sediments must in part reflect an external shift in provenance irrespective of sample composition, grain size, and grain size distribution. Sample standard deviations (2s) estimate that any terrigenous bulk sediment composition should vary no greater than ±1.1 ɛNd points if provenance remains constant. Findings from this study indicate that although there are grain-size dependent Nd isotope effects, they are minimal in the Indus delta such that resolvable provenance-driven trends can be identified in bulk sediment ɛNd compositions over the last 20 k.y., and that overall provenance trends remain consistent with previous findings.

  12. Prevalence and level of Listeria monocytogenes and other Listeria sp. in ready-to-eat minimally processed and refrigerated vegetables.

    PubMed

    Kovačević, Mira; Burazin, Jelena; Pavlović, Hrvoje; Kopjar, Mirela; Piližota, Vlasta

    2013-04-01

    Minimally processed and refrigerated vegetables can be contaminated with Listeria species, including Listeria monocytogenes, due to extensive handling during processing or by cross contamination from the processing environment. The objective of this study was to examine the microbiological quality of ready-to-eat minimally processed and refrigerated vegetables from supermarkets in Osijek, Croatia. 100 samples of ready-to-eat vegetables collected from different supermarkets in Osijek, Croatia, were analyzed for the presence of Listeria species and Listeria monocytogenes. The collected samples were cut iceberg lettuce (24 samples), other leafy vegetables (11 samples), delicatessen salads (23 samples), cabbage salads (19 samples), and salads from mixed (17 samples) and root vegetables (6 samples). Listeria species were found in 20 samples (20%) and Listeria monocytogenes was detected in only 1 sample (1%) of cut red cabbage (less than 100 CFU/g). According to Croatian and EU microbiological criteria these results are satisfactory. However, the presence of Listeria species and Listeria monocytogenes indicates poor hygienic quality. The study showed that these products are often improperly labeled, since 24% of the analyzed samples lacked information about shelf life, and 60% of samples lacked information about storage conditions. With regard to these facts, disruption of the cold chain combined with use after the expiration date is a probable scenario. Therefore, the microbiological risk for consumers of ready-to-eat minimally processed and refrigerated vegetables is not completely eliminated.

  13. Link between deviations from Murray's Law and occurrence of low wall shear stress regions in the left coronary artery.

    PubMed

    Doutel, E; Pinto, S I S; Campos, J B L M; Miranda, J M

    2016-08-07

    Murray developed two laws for the geometry of bifurcations in the circulatory system. Based on the principle of energy minimization, Murray found restrictions on the relation between the diameters of the branches and also on the angles between them. It is known that bifurcations are prone to the development of atherosclerosis in regions associated with low wall shear stress (WSS) and a high oscillatory shear index (OSI). These indicators (size of low WSS regions, size of high OSI regions and size of high helicity regions) were evaluated in this work. All of them were normalized by the size of the outflow branches. The relation between Murray's laws and the size of low WSS regions was analysed in detail. It was found that the main factor leading to large regions of low WSS is the so-called expansion ratio, the ratio between the cross-sectional areas of the outflow branches and the cross-sectional area of the main branch. Large regions of low WSS appear for high expansion ratios. Furthermore, the size of low WSS regions is independent of the ratio between the diameters of the outflow branches. Since the expansion ratio in bifurcations following Murray's law is kept in a small range (between 1 and 1.25), all of them have regions of low WSS with similar size. However, the expansion ratio is not small enough to completely prevent regions with low WSS values and, therefore, Murray's law does not lead to atherosclerosis minimization. A study on the effect of the angulation of the bifurcation suggests that Murray's law for the angles does not minimize the size of low WSS regions. Copyright © 2016 Elsevier Ltd. All rights reserved.
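
    As a worked illustration of why Murray-law bifurcations cluster in that narrow range of expansion ratios (the notation below is introduced here for clarity, not taken from the paper), Murray's law for the diameters reads

        d_0^3 = d_1^3 + d_2^3,

    while the expansion ratio compares the summed outflow cross sections with the inflow cross section,

        \mathrm{ER} = \frac{A_1 + A_2}{A_0} = \frac{d_1^2 + d_2^2}{d_0^2}.

    For a symmetric bifurcation, d_1 = d_2 = d gives d = 2^{-1/3} d_0 and therefore ER = 2 d^2 / d_0^2 = 2^{1/3} ≈ 1.26, while a strongly asymmetric bifurcation (d_2 → 0) gives ER → 1. Any bifurcation obeying Murray's law therefore has an expansion ratio between 1 and about 1.26, consistent with the narrow range quoted above.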

  14. The Patient-Oriented Eczema Measure in young children: responsiveness and minimal clinically important difference.

    PubMed

    Gaunt, D M; Metcalfe, C; Ridd, M

    2016-11-01

    The Patient-Oriented Eczema Measure (POEM) has been recommended as the core patient-reported outcome measure for trials of eczema treatments. Using data from the Choice of Moisturiser for Eczema Treatment randomized feasibility study, we assess the responsiveness to change and determine the minimal clinically important difference (MCID) of the POEM in young children with eczema. Responsiveness to change by repeated administrations of the POEM was investigated in relation to change recalled using the Parent Global Assessment (PGA) measure. Five methods of determining the MCID of the POEM were employed; three anchor-based methods using PGA as the anchor: the within-patient score change, between-patient score change and sensitivity and specificity method, and two distribution-based methods: effect size estimate and the one half standard deviation of the baseline distribution of POEM scores. Successive POEM scores were found to be responsive to change in eczema severity. The MCID of the POEM change score, in relation to a slight improvement in eczema severity as recalled by parents on the PGA, estimated by the within-patient score change (4.27), the between-patient score change (2.89) and the sensitivity and specificity method (3.00) was similar to the one half standard deviation of the POEM baseline scores (2.94) and the effect size estimate (2.50). The Patient-Oriented Eczema Measure as applied to young children is responsive to change, and the MCID is around 3. This study will encourage the use of POEM and aid in determining sample size for future randomized controlled trials of treatments for eczema in young children. © 2016 The Authors. Allergy Published by John Wiley & Sons Ltd.
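
    For reference, the distribution-based estimates above follow directly from the spread of baseline scores (a hedged restatement of the standard formula, not an additional result from the trial): the half-standard-deviation rule is

        \mathrm{MCID}_{0.5\,\mathrm{SD}} = 0.5 \times \mathrm{SD}_{\text{baseline}},

    so the reported value of 2.94 implies a baseline POEM standard deviation of roughly 5.9 points, and the anchor-based and effect-size estimates (2.50-4.27) bracket the same region, which is why an MCID of about 3 is a reasonable summary.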

  15. Trace element contamination in feather and tissue samples from Anna’s hummingbirds

    USGS Publications Warehouse

    Mikoni, Nicole A.; Poppenga, Robert H.; Ackerman, Joshua T.; Foley, Janet E.; Hazlehurst, Jenny; Purdin, Güthrum; Aston, Linda; Hargrave, Sabine; Jelks, Karen; Tell, Lisa A.

    2017-01-01

    Trace element contamination (17 elements; Be, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Ba, Hg, Tl, and Pb) of live (feather samples only) and deceased (feather and tissue samples) Anna's hummingbirds (Calypte anna) was evaluated. Samples were analyzed using inductively coupled plasma-mass spectrometry (ICP-MS; 17 elements) and atomic absorption spectrophotometry (Hg only). The mean plus one standard deviation (SD) was considered the benchmark, and concentrations above the mean + 1 SD were considered elevated above normal. Contour feathers were sampled from live birds of varying age and sex at several California locations. In order to reduce thermal impacts, only a minimal number of feathers was taken from live birds; therefore, a novel method was developed for the preparation of low-mass feather samples for ICP-MS analysis. The study found that the novel feather preparation method enabled small-mass feather samples to be analyzed for trace elements using ICP-MS. For feather samples from live birds, all trace elements, with the exception of beryllium, had concentrations above the mean + 1 SD. Important risk factors for elevated trace element concentrations in feathers of live birds were age for iron, zinc, and arsenic, and location for iron, manganese, zinc, and selenium. For samples from deceased birds, ICP-MS results from body and tail feathers were correlated for Fe, Zn, and Pb, and feather concentrations were correlated with renal (Fe, Zn, Pb) or hepatic (Hg) tissue concentrations. Results for the samples from deceased birds analyzed by atomic absorption spectrophotometry further supported the ICP-MS findings, with a strong correlation between mercury concentrations in feather and tissue (pectoral muscle) samples. These results support the idea that sampling feathers from live free-ranging hummingbirds can be a useful, non-lethal method for evaluating trace element exposure and provides a sampling alternative, since the birds' small body size limits traditional sampling of blood and tissues. The results from this study provide a benchmark for the distribution of trace element concentrations in feather and tissue samples from hummingbirds and suggest a reference point for identifying concentrations that exceed normal. Lastly, pollinating avian species are minimally represented in the literature as bioindicators for environmental trace element contamination. Given that trace elements can move through food chains by a variety of routes, our study indicates that hummingbirds are possible bioindicators of environmental trace element contamination.

  16. Multifunctional Water Sensors for pH, ORP, and Conductivity Using Only Microfabricated Platinum Electrodes

    PubMed Central

    Lin, Wen-Chi; Brondum, Klaus; Monroe, Charles W.; Burns, Mark A.

    2017-01-01

    Monitoring of the pH, oxidation-reduction-potential (ORP), and conductivity of aqueous samples is typically performed using multiple sensors. To minimize the size and cost of these sensors for practical applications, we have investigated the use of a single sensor constructed with only bare platinum electrodes deposited on a glass substrate. The sensor can measure pH from 4 to 10 while simultaneously measuring ORP from 150 to 800 mV. The device can also measure conductivity up to 8000 μS/cm in the range of 10 °C to 50 °C, and all these measurements can be made even if the water samples contain common ions found in residential water. The sensor is inexpensive (i.e., ~$0.10/unit) and has a sensing area below 1 mm2, suggesting that the unit is cost-efficient, robust, and widely applicable, including in microfluidic systems. PMID:28753913

  17. ADVANCING SITE CHARACTERIZATION AND MONITORING ...

    EPA Pesticide Factsheets

    There is no abstract available for this product. If further information is requested, please refer to the bibliographic citation and contact the person listed under the Contact field. The overall objective of this task is to provide the Agency with improved state-of-the-science guidance, strategies, and techniques to more accurately and effectively collect environmental samples. Under this umbrella objective, research is being conducted to: (a) reduce/minimize the loss of VOCs during sample collection, handling, and preservation, (b) collect undisturbed surface sediments so that the effects of recent depositional events (e.g., flooding or dredging) can clearly be delineated as to their influence on the contaminant concentrations present downstream (or where the sediments are deposited), and (c) determine a method to effectively and efficiently separate asbestos in soils from the rest of the soil matrix while maintaining the integrity (i.e., no fiber size reduction) of the asbestos fibers.

  18. Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2012-09-01

    It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
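
    The two best-performing options in the abstract are straightforward to apply; the Python sketch below shows both on simulated lognormal (hence nonnormal) data: the rank-based inverse normal ("rankit") transformation followed by a Pearson correlation, and a permutation test of the raw Pearson r. The simulated data and permutation count are illustrative only.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      x = rng.lognormal(size=40)                            # skewed, nonnormal predictor
      y = 0.5 * x + rng.lognormal(size=40)                  # related, also nonnormal

      def rankit(v):
          # rank-based inverse normal transformation: map ranks to normal quantiles
          ranks = stats.rankdata(v)
          return stats.norm.ppf((ranks - 0.5) / len(v))

      r_raw = stats.pearsonr(x, y)[0]
      r_rankit = stats.pearsonr(rankit(x), rankit(y))[0]    # Pearson after normalizing the shape

      # permutation test of the raw Pearson r, the option favored for small, extremely nonnormal samples
      perm = np.array([stats.pearsonr(rng.permutation(x), y)[0] for _ in range(5000)])
      p_perm = (np.sum(np.abs(perm) >= abs(r_raw)) + 1) / (len(perm) + 1)

      print("raw r:", round(r_raw, 3), " rankit r:", round(r_rankit, 3), " permutation p:", round(p_perm, 4))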

  19. [Human milk for neonatal pain relief during ophthalmoscopy].

    PubMed

    Ribeiro, Laiane Medeiros; Castral, Thaíla Corrêa; Montanholi, Liciane Langona; Daré, Mariana Firmino; Silva, Aline Carolina de Araújo; Antonini, Sonir Roberto Rauber; Scochi, Carmen Gracinda Silvan

    2013-10-01

    Ophthalmoscopy performed for the early diagnosis of retinopathy of prematurity (ROP) is painful for preterm infants, thus necessitating interventions for minimizing pain. The present study aimed to establish the effectiveness of human milk, compared with sucrose, for pain relief in premature infants subjected to ophthalmoscopy for the early diagnosis of ROP. This investigation was a pilot, quasi-experimental study conducted with 14 premature infants admitted to the neonatal intensive care unit (NICU) of a university hospital. Comparison between the groups did not yield a statistically significant difference relative to the crying time, salivary cortisol, or heart rate (HR). Human milk appears to be as effective as sucrose in relieving acute pain associated with ophthalmoscopy. The study's limitations included its small sample size and lack of randomization. Experimental investigations with greater sample power should be performed to reinforce the evidence found in the present study.

  20. A compact, fast UV photometer for measurement of ozone from research aircraft

    NASA Astrophysics Data System (ADS)

    Gao, R. S.; Ballard, J.; Watts, L. A.; Thornberry, T. D.; Ciciora, S. J.; McLaughlin, R. J.; Fahey, D. W.

    2012-09-01

    In situ measurements of atmospheric ozone (O3) are performed routinely from many research aircraft platforms. The most common technique depends on the strong absorption of ultraviolet (UV) light by ozone. As atmospheric science advances to the widespread use of unmanned aircraft systems (UASs), there is an increasing requirement for minimizing instrument space, weight, and power while maintaining instrument accuracy, precision and time response. The design and use of a new, dual-beam, UV photometer instrument for in situ O3 measurements is described. A polarization optical-isolator configuration is utilized to fold the UV beam inside the absorption cells, yielding a 60-cm absorption length with a 30-cm cell. The instrument has a fast sampling rate (2 Hz at <200 hPa, 1 Hz at 200-500 hPa, and 0.5 Hz at ≥ 500 hPa), high accuracy (3% excluding operation in the 300-450 hPa range, where the accuracy may be degraded to about 5%), and excellent precision (1.1 × 10¹⁰ O3 molecules cm⁻³ at 2 Hz, which corresponds to 3.0 ppb at 200 K and 100 hPa, or 0.41 ppb at 273 K and 1013 hPa). The size (36 l), weight (18 kg), and power (50-200 W) make the instrument suitable for many UASs and other airborne platforms. Inlet and exhaust configurations are also described for ambient sampling in the troposphere and lower stratosphere (1000-50 hPa) that control the sample flow rate to maximize time response while minimizing loss of precision due to induced turbulence in the sample cell. In-flight and laboratory intercomparisons with existing O3 instruments show that measurement accuracy is maintained in flight.
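
    The quoted precision figures follow from the Beer-Lambert law applied over the folded 60-cm path. The short Python example below retrieves an O3 number density from a dual-beam transmission reading; the cross-section is the value commonly used for ozone photometry near 254 nm, and the transmission value is purely illustrative.

      import math

      sigma = 1.147e-17            # O3 absorption cross-section near 254 nm, cm² per molecule (commonly used value)
      L = 60.0                     # folded absorption path length, cm
      transmission = 0.99993       # illustrative measured I/I0

      n_o3 = -math.log(transmission) / (sigma * L)     # Beer-Lambert: I/I0 = exp(-sigma * n * L)
      print(f"{n_o3:.2e} molecules cm^-3")             # ~1.0e11, i.e. roughly 4 ppb at 273 K and 1013 hPa

    Inverting the same relation shows why sub-ppb precision demands resolving transmission changes of order one part in 10⁵, which is what the dual-beam, folded-path design is for.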

  1. Minimal Custom Pack Design and Wide-Awake Hand Surgery: Reducing Waste and Spending in the Orthopedic Operating Room.

    PubMed

    Thiel, Cassandra L; Fiorin Carvalho, Rafaela; Hess, Lindsay; Tighe, Joelle; Laurence, Vincent; Bilec, Melissa M; Baratz, Mark

    2017-11-01

    The US health care sector has substantial financial and environmental footprints. As literature continues to study the differences between wide-awake hand surgery (WAHS) and the more traditional hand surgery with sedation & local anesthesia, we sought to explore the opportunities to enhance the sustainability of WAHS through analysis of the respective costs and waste generation of the 2 techniques. We created a "minimal" custom pack of disposable surgical supplies expressly for small hand surgery procedures and then measured the waste from 178 small hand surgeries performed using either the "minimal pack" or the "standard pack," depending on physician pack choice. Patients were also asked to complete a postoperative survey on their experience. Data were analyzed using 1- and 2-way ANOVAs, 2-sample t tests, and Fisher exact tests. As expected, WAHS with the minimal pack produced 0.3 kg (13%) less waste and cost $125 (55%) less in supplies per case than sedation & local with the standard pack. Pack size was found to be the driving factor in waste generation. Patients who underwent WAHS reported slightly greater pain and anxiety levels during their surgery, but also reported greater satisfaction with their anesthetic choice, which could be tied to the enthusiasm of the physician performing WAHS. Surgical waste and spending can be reduced by minimizing the materials brought into the operating room in disposable packs. WAHS, as a nascent technique, may provide an opportunity to drive sustainability by paring back what is considered necessary in these packs. Moreover, despite some initial anxiety, many patients report greater satisfaction with WAHS. All told, our study suggests a potentially broader role for WAHS, with its concomitant emphases on patient satisfaction and the efficient use of time and resources.

  2. Impact of specimen adequacy on the assessment of renal allograft biopsy specimens.

    PubMed

    Cimen, S; Geldenhuys, L; Guler, S; Imamoglu, A; Molinari, M

    2016-01-01

    The Banff classification was introduced to achieve uniformity in the assessment of renal allograft biopsies. The primary aim of this study was to evaluate the impact of specimen adequacy on the Banff classification. All renal allograft biopsies obtained between July 2010 and June 2012 for suspicion of acute rejection were included. Pre-biopsy clinical data on suspected diagnosis and time from renal transplantation were provided to a nephropathologist who was blinded to the original pathological report. Second pathological readings were compared with the original to assess agreement stratified by specimen adequacy. Cohen's kappa test and Fisher's exact test were used for statistical analyses. Forty-nine specimens were reviewed. Among these specimens, 81.6% were classified as adequate, 6.12% as minimal, and 12.24% as unsatisfactory. The agreement analysis between the first and second readings revealed a kappa value of 0.97. Full agreement between readings was found in 75% of the adequate specimens, and in 66.7% and 50% of the minimal and unsatisfactory specimens, respectively. There was no agreement between readings in 5% of the adequate specimens and 16.7% of the unsatisfactory specimens. For the entire sample, full agreement was found in 71.4%, partial agreement in 20.4%, and no agreement in 8.2% of the specimens. Statistical analysis using Fisher's exact test yielded a P value above 0.25, showing that, probably owing to the small sample size, the results were not statistically significant. Specimen adequacy may be a determinant of diagnostic agreement in renal allograft specimen assessment. While additional studies including larger case numbers are required to further delineate the impact of specimen adequacy on the reliability of histopathological assessments, specimen quality must be considered during clinical decision making when dealing with biopsy reports based on minimal or unsatisfactory specimens.

  3. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  4. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  5. Preparation of highly multiplexed small RNA sequencing libraries.

    PubMed

    Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos

    2017-08-01

    MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3' untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as has the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation.

  6. A High-Throughput Method for Direct Detection of Therapeutic Oligonucleotide-Induced Gene Silencing In Vivo

    PubMed Central

    Coles, Andrew H.; Osborn, Maire F.; Alterman, Julia F.; Turanov, Anton A.; Godinho, Bruno M.D.C.; Kennington, Lori; Chase, Kathryn; Aronin, Neil

    2016-01-01

    Preclinical development of RNA interference (RNAi)-based therapeutics requires a rapid, accurate, and robust method of simultaneously quantifying mRNA knockdown in hundreds of samples. The most well-established method to achieve this is quantitative real-time polymerase chain reaction (qRT-PCR), a labor-intensive methodology that requires sample purification, which increases the potential to introduce additional bias. Here, we describe that the QuantiGene® branched DNA (bDNA) assay linked to a 96-well Qiagen TissueLyser II is a quick and reproducible alternative to qRT-PCR for quantitative analysis of mRNA expression in vivo directly from tissue biopsies. The bDNA assay is a high-throughput, plate-based, luminescence technique, capable of directly measuring mRNA levels from tissue lysates derived from various biological samples. We have performed a systematic evaluation of this technique for in vivo detection of RNAi-based silencing. We show that similar quality data is obtained from purified RNA and tissue lysates. In general, we observe low intra- and inter-animal variability (around 10% for control samples), and high intermediate precision. This allows minimization of sample size for evaluation of oligonucleotide efficacy in vivo. PMID:26595721

  7. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
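
    The core of the proposed correction is fitting a learning curve to cross-validation error as a function of training-set size and extrapolating it; the Python sketch below shows only that inverse-power-law (IPL) fitting step on made-up error rates, not the full bias-correction procedure or the MLbias package.

      import numpy as np
      from scipy.optimize import curve_fit

      def ipl(n, a, b, c):
          # inverse power law learning curve: a is the asymptotic error, b and c set the decay
          return a + b * n ** (-c)

      n_train = np.array([20.0, 30.0, 40.0, 50.0, 60.0])          # training-set sizes
      cv_error = np.array([0.34, 0.29, 0.26, 0.245, 0.235])       # hypothetical cross-validation error rates

      (a, b, c), _ = curve_fit(ipl, n_train, cv_error, p0=(0.2, 1.0, 0.5), maxfev=10000)
      print("asymptotic error estimate:", round(a, 3))
      print("extrapolated error at n = 120:", round(ipl(120.0, a, b, c), 3))

    The extrapolation is the practical feature highlighted in the abstract: it indicates whether recruiting more samples is likely to improve the classifier.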

  8. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb

    2018-06-01

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
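
    As a small illustration of the modelling idea (not of the full design or its inferential properties), the Python sketch below simulates a seven-arm trial of roughly 500 patients and selects a first-degree fractional polynomial for the duration-response curve from the conventional power set. The true curve, arm durations and outcome model are all assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(6)
      durations = np.array([8, 10, 12, 14, 16, 18, 20])       # 7 equidistant arms (weeks)
      true_response = 0.95 - 1.8 / durations                  # hypothetical duration-response (cure probability)
      n_per_arm = 500 // len(durations)
      observed = rng.binomial(n_per_arm, true_response) / n_per_arm

      def fp1_design(d, p):
          # first-degree fractional polynomial basis: intercept plus d**p, with p = 0 meaning log(d)
          x = np.log(d) if p == 0 else d.astype(float) ** p
          return np.column_stack([np.ones_like(x), x])

      best = None
      for p in (-2, -1, -0.5, 0, 0.5, 1, 2, 3):                # conventional FP power set
          X = fp1_design(durations, p)
          beta, _, _, _ = np.linalg.lstsq(X, observed, rcond=None)
          rss = float(np.sum((observed - X @ beta) ** 2))
          if best is None or rss < best[0]:
              best = (rss, p, beta)

      rss, p, beta = best
      at_12_weeks = (fp1_design(np.array([12.0]), p) @ beta)[0]
      print("selected FP power:", p, "| estimated response at 12 weeks:", round(at_12_weeks, 3))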

  9. Fractionating power and outlet stream polydispersity in asymmetrical flow field-flow fractionation. Part I: isocratic operation.

    PubMed

    Williams, P Stephen

    2016-05-01

    Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.

  10. Parallel Optical Random Access Memory (PORAM)

    NASA Technical Reports Server (NTRS)

    Alphonse, G. A.

    1989-01-01

    It is shown that the need to minimize component count, power, and size and to maximize packing density requires a parallel optical random access memory to be designed in a two-level hierarchy: a modular level and an interconnect level. Three module designs are proposed, in order of increasing research and development requirements. The first uses state-of-the-art components, including individually addressed laser diode arrays, acousto-optic (AO) deflectors and a magneto-optic (MO) storage medium, and is aimed at moderate size, moderate power, and high packing density. The next design level uses an electron-trapping (ET) medium to reduce optical power requirements. The third design uses a beam-steering grating surface emitter (GSE) array to further reduce size and minimize the number of components.

  11. Crystal growth and annealing for minimized residual stress

    DOEpatents

    Gianoulakis, Steven E.

    2002-01-01

    A method and apparatus for producing crystals that minimizes birefringence even at large crystal sizes, and is suitable for production of CaF2 crystals. The method of the present invention comprises annealing a crystal by maintaining a minimal temperature gradient in the crystal while slowly reducing the bulk temperature of the crystal. An apparatus according to the present invention includes a thermal control system added to a crystal growth and annealing apparatus, wherein the thermal control system allows a temperature gradient during crystal growth but minimizes the temperature gradient during crystal annealing.

  12. Electron spectroscopy for chemical analysis: Sample analysis

    NASA Technical Reports Server (NTRS)

    Carter, W. B.

    1989-01-01

    Electron spectroscopy for chemical analysis (ESCA) of samples exposed to atomic oxygen was performed on an SSL-100/206 Small Spot Spectrometer. All data were taken with the use of a low-voltage electron flood gun and a charge neutralization screen to minimize charging effects on the data. The X-ray spot size and electron flood gun voltage used are recorded on the individual spectra, as are the instrumental resolutions. Two types of spectra were obtained for each specimen: (1) general surveys, and (2) high resolution spectra. The two types of data reduction performed are: (1) semiquantitative compositional analysis, and (2) peak fitting. The materials analyzed are: (1) Kapton 4, 5, and 6, (2) HDPE 19, 20, and 21, and (3) PVDF 4, 5, and 6.

  13. Mapping risk of Nipah virus transmission across Asia and across Bangladesh.

    PubMed

    Peterson, A Townsend

    2015-03-01

    Nipah virus is a highly pathogenic but poorly known paramyxovirus from South and Southeast Asia. In spite of the risks that it poses to human health, the geography and ecology of its occurrence remain little understood; the virus is basically known from Bangladesh and peninsular Malaysia, and little in between. In this contribution, I use documented occurrences of the virus to develop ecological niche-based maps summarizing its likely broader occurrence. Although rangewide maps with significant predictive ability could not be developed, reflecting the minimal sample sizes available, maps within Bangladesh were quite successful in identifying areas in which the virus is predictably present and likely transmitted. © 2013 APJPH.

  14. Ultrasound detection of simulated intra-ocular foreign bodies by minimally trained personnel.

    PubMed

    Sargsyan, Ashot E; Dulchavsky, Alexandria G; Adams, James; Melton, Shannon; Hamilton, Douglas R; Dulchavsky, Scott A

    2008-01-01

    To test the ability of non-expert ultrasound operators of divergent backgrounds to detect the presence, size, location, and composition of foreign bodies in an ocular model. High school students (N = 10) and NASA astronauts (N = 4) completed a brief ultrasound training session which focused on basic ultrasound principles and the detection of foreign bodies. The operators used portable ultrasound devices to detect foreign objects of varying location, size (0.5-2 mm), and material (glass, plastic, metal) in a gelatinous ocular model. Operator findings were compared to known foreign object parameters and ultrasound experts (N = 2) to determine accuracy across and between groups. Ultrasound had high sensitivity (astronauts 85%, students 87%, and experts 100%) and specificity (astronauts 81%, students 83%, and experts 95%) for the detection of foreign bodies. All user groups were able to accurately detect the presence of foreign bodies in this model (astronauts 84%, students 81%, and experts 97%). Astronaut and student sensitivity results for material (64% vs. 48%), size (60% vs. 46%), and position (77% vs. 64%) were not statistically different. Experts' results for material (85%), size (90%), and position (98%) were higher; however, the small sample size precluded statistical conclusions. Ultrasound can be used by operators with varying training to detect the presence, location, and composition of intraocular foreign bodies with high sensitivity, specificity, and accuracy.

  15. Dark respiration rate increases with plant size in saplings of three temperate tree species despite decreasing tissue nitrogen and nonstructural carbohydrates.

    PubMed

    Machado, José-Luis; Reich, Peter B

    2006-07-01

    In shaded environments, minimizing dark respiration during growth could be an important aspect of maintaining a positive whole-plant net carbon balance. Changes with plant size in both biomass distribution to different tissue types and mass-specific respiration rates (R(d)) of those tissues would have an impact on whole-plant respiration. In this paper, we evaluated size-related variation in R(d), biomass distribution, and nitrogen (N) and total nonstructural carbohydrate (TNC) concentrations of leaves, stems and roots of three cold-temperate tree species (Abies balsamea (L.) Mill, Acer rubrum L. and Pinus strobus L.) in a forest understory. We sampled individuals varying in age (6 to 24 years old) and in size (from 2 to 500 g dry mass), and growing across a range of irradiances (from 1 to 13% of full sun) in northern Minnesota, USA. Within each species, we found small changes in R(d), N and TNC when comparing plants growing across this range of light availability. Consistent with our hypotheses, as plants grew larger, whole-plant N and TNC concentrations in all species declined as a result of a combination of changes in tissue N and shifts in biomass distribution patterns. However, contrary to our hypotheses, whole-plant and tissue R(d) increased with plant size in the three species.

  16. Tyrannosaurus en pointe: allometry minimized rotational inertia of large carnivorous dinosaurs.

    PubMed Central

    Henderson, Donald M; Snively, Eric

    2004-01-01

    Theropod dinosaurs attained the largest body sizes among terrestrial predators, and were also unique in being exclusively bipedal. With only two limbs for propulsion and balance, theropods would have been greatly constrained in their locomotor performance at large body size. Using three-dimensional restorations of the axial bodies and limbs of 12 theropod dinosaurs, and determining their rotational inertias (RIs) about a vertical axis, we show that these animals expressed a pattern of phyletic size increase that minimized the increase in RI associated with increases in body size. By contrast, the RI of six quadrupedal, carnivorous archosaurs exhibited changes in body proportions that were closer to those predicted by isometry. Correlations of low RI with high agility in lizards suggest that large theropods, with low relative RI, could engage in activities requiring higher agility than would be possible with isometric scaling. PMID:15101419

  17. Human Factors Evaluation of Surgeons' Working Positions for Gynecologic Minimal Access Surgery.

    PubMed

    Hignett, Sue; Gyi, Diane; Calkins, Lisa; Jones, Laura; Moss, Esther

    To investigate work-related musculoskeletal disorders (WRMSDs) in gynaecological minimal access surgery (MAS), including surgery for bariatric (plus-size) patients. The study used a mixed-methods design (Canadian Task Force classification III) at a teaching hospital in the United Kingdom, combining a survey, observations (anthropometry, postural analysis), and interviews. WRMSDs were present in 63% of the survey respondents (n = 67). The pilot study (n = 11) identified contributory factors, including workplace layout, equipment design, and preference of port use (relative to patient size). Statistically significant differences in WRMSD-related posture risks were found within groups (average-size mannequin and plus-size mannequin) but not between patient-size groups, suggesting that port preference may be driven by surgeon preference (and experience) rather than by patient size. Some of the challenges identified in this project need new engineering solutions to provide the flexibility to support the surgeon's choice of operating approach (open, laparoscopic, or robotic) with a workplace that supports adaptation to the task, the surgeon, and the patient. Copyright © 2017 American Association of Gynecologic Laparoscopists. Published by Elsevier Inc. All rights reserved.

  18. Toward Monitoring Parkinson's Through Analysis of Static Handwriting Samples: A Quantitative Analytical Framework.

    PubMed

    Zhi, Naiqian; Jaeger, Beverly Kris; Gouldstone, Andrew; Sipahi, Rifat; Frank, Samuel

    2017-03-01

    Detection of changes in micrographia as a manifestation of symptomatic progression or therapeutic response in Parkinson's disease (PD) is challenging as such changes can be subtle. A computerized toolkit based on quantitative analysis of handwriting samples would be valuable as it could supplement and support clinical assessments, help monitor micrographia, and link it to PD. Such a toolkit would be especially useful if it could detect subtle yet relevant changes in handwriting morphology, thus enhancing resolution of the detection procedure. This would be made possible by developing a set of metrics sensitive enough to detect and discern micrographia with specificity. Several metrics that are sensitive to the characteristics of micrographia were developed, with minimal sensitivity to confounding handwriting artifacts. These metrics capture character size-reduction, ink utilization, and pixel density within a writing sample from left to right. They are used here to "score" handwritten signatures of 12 different individuals corresponding to healthy and symptomatic PD conditions, and sample control signatures that had been artificially reduced in size for comparison purposes. Moreover, metric analyses of samples from ten of the 12 individuals for which clinical diagnosis time is known show considerable informative variations when applied to static signature samples obtained before and after diagnosis. In particular, a measure called pixel density variation showed statistically significant differences between two comparison groups of remote signature recordings: earlier versus recent, based on independent and paired t-test analyses on a total of 40 signature samples. The quantitative framework developed here has the potential to be used in future controlled experiments to study micrographia and links to PD from various aspects, including monitoring and assessment of applied interventions and treatments. The inherent value in this methodology is further enhanced by its reliance solely on static signatures, not requiring dynamic sampling with specialized equipment.
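
    The sketch below illustrates one way a left-to-right pixel-density profile could be computed from a binarized signature image, with a simple variance-style summary. The function names, binning scheme and summary statistic are assumptions for illustration, not the paper's exact metric definitions.

```python
# A minimal illustrative sketch (not the paper's exact metrics) of a
# left-to-right pixel-density profile for a binarized static handwriting image,
# plus one possible "pixel density variation" summary.
import numpy as np

def pixel_density_profile(binary_img, n_bins=10):
    """Fraction of ink pixels in each of n_bins vertical strips, left to right.

    binary_img: 2-D array with 1 for ink pixels and 0 for background.
    """
    h, w = binary_img.shape
    edges = np.linspace(0, w, n_bins + 1, dtype=int)
    return np.array([binary_img[:, a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

def pixel_density_variation(binary_img, n_bins=10):
    """One possible scalar summary: standard deviation of the strip densities."""
    return pixel_density_profile(binary_img, n_bins).std()

# Hypothetical usage with a random stand-in "signature" image.
rng = np.random.default_rng(0)
img = (rng.random((120, 600)) < 0.05).astype(int)
print(pixel_density_profile(img).round(4), pixel_density_variation(img))
```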

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
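
    The sketch below illustrates the compressed-sensing workflow on a synthetic sparse signal using plain orthogonal matching pursuit; it stands in for, and is much simpler than, the authors' improved stagewise OMP and tree-wavelet sampling. All sizes and the random sampling matrix are assumptions.

```python
# A minimal sketch of the compressed-sensing idea described above, using plain
# orthogonal matching pursuit (OMP) rather than the authors' stagewise variant:
# sample a sparse signal with a random matrix, then reconstruct it greedily.
import numpy as np

def omp(Phi, y, n_nonzero):
    """Greedy OMP reconstruction of x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-solve least squares on the selected support and update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                              # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sampling matrix
y = Phi @ x                                       # compressed measurements

x_rec = omp(Phi, y, k)
print("relative reconstruction error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```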

  20. THE SECRETION OF DIGESTIVE ENZYMES AND CAECAL SIZE ARE DETERMINED BY DIETARY PROTEIN IN THE CRICKET Gryllus bimaculatus.

    PubMed

    Woodring, Joseph; Weidlich, Sandy

    2016-11-01

    In Gryllus bimaculatus, the size of the caecum decreases in the latter half of each instar to a stable minimal size, with a steady minimal rate of digestive enzyme secretion, until feeding resumes after ecdysis. The higher the percent protein in the newly ingested food, the faster and larger the caecum grows, and as a consequence the higher the secretion rates of trypsin and amylase. When hard-boiled eggs (40% protein) are eaten, the caecum is 2× larger, trypsin secretion is almost 3× greater, and amylase secretion is 2.5× greater than when the same amount of apples (1.5% protein) is fed. Only dietary protein increases amylase secretion, whereas dietary carbohydrates have no effect on amylase secretion. The minimal caecal size and secretion rate must be supported by utilization of hemolymph amino acids, but the growth of the caecum and the increasing enzyme secretion after the molt depend upon an amino acid source in the lumen. This simple regulation of digestive enzyme secretion is ideal for animals that must stop feeding in order to molt. This basic control system does not preclude additional regulation mechanisms, such as prandial regulation, which is also indicated for G. bimaculatus, or even paramonal regulation. © 2016 Wiley Periodicals, Inc.

  1. Elemental composition of Arctic soils and aerosols in Ny-Ålesund measured using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Kim, Gibaek; Yoon, Young-Jun; Kim, Hyun-A.; Cho, Hee-joo; Park, Kihong

    2017-08-01

    Two laser-induced breakdown spectroscopy (LIBS) systems (soil LIBS and aerosol LIBS) were used to determine the elemental composition of soils and of ambient aerosols smaller than 2.5 μm in Ny-Ålesund, Svalbard (the world's most northerly human settlement). For soil LIBS measurements, matrix effects on the LIBS response, such as those from moisture content, soil grain size, and the surrounding gas, were minimized. When Ar gas was supplied onto the soil sample surfaces, a significant enhancement in LIBS emission lines was observed. Arctic soil samples were collected at 10 locations, and various elements (Al, Ba, C, Ca, Cu, Fe, H, K, Mg, Mn, N, Na, O, Pb, and Si) were detected in soils. The elemental distribution in arctic soils was clearly distinguishable from those in urban and abandoned mining soils in Korea. Moreover, the concentrations of most anthropogenic metals were fairly low, and localized sources in extremely close proximity accounted for the elevated level of Cu in the soil samples from Ny-Ålesund. Fewer elements were detected in aerosols (C, Ca, H, K, Mg, Na, and O) than in soils. The elements in aerosols mainly originate from minerals and sea salts. The elemental distribution in aerosols was also clearly distinguishable from that in soils, suggesting that the resuspension of local soil particles into aerosols by wind erosion was minimal. The particle number concentration (RSD = 71%) and the elemental content of aerosols (RSD = 25%) varied substantially from day to day, possibly due to fluctuating air masses and meteorological conditions.

  2. Preliminary Assessment/Site Inspection Work Plan for Granite Mountain Radio Relay System

    DTIC Science & Technology

    1994-09-01

    represent field conditions, and (3) sampling results are repeatable. ... 1.5.2 Sample Handling. Sample ... procedures specified in Section 2.1.3. Samples collected from shallow depths will be obtained by submerging a stainless-steel, Teflon, or glass ... submerged in a manner that minimizes agitation of sediment and the water sample. If a seep or spring has minimal discharge flow, gravel, boulders, and soil

  3. The prevalence of domestic violence within different socio-economic classes in Central Trinidad.

    PubMed

    Nagassar, R P; Rawlins, J M; Sampson, N R; Zackerali, J; Chankadyal, K; Ramasir, C; Boodram, R

    2010-01-01

    Domestic violence is a medical and social issue that often leads to negative consequences for society. This paper examines the association between the prevalence of domestic violence and the different socio-economic classes in Central Trinidad. The paper also explores the major perceived causes of physical abuse in Central Trinidad. Participants were selected using a two-stage stratified sampling method within the Couva district. Households, each contributing one participant, were stratified into different socio-economic classes (SES classes); each stratum's share of the sample was set in proportion to its share of the sampling frame, and members were then randomly selected within each stratum. The sampling method attempted to balance and thereby minimize racial, age, and cultural biases and confounding factors. The participant chosen had to be older than 16 years of age, female and a resident of the household. If more than one female was at home, the most senior was interviewed. The study found a statistically significant relationship between verbal abuse (p = 0.0017), physical abuse (p = 0.0012) and financial abuse (p = 0.001) and socio-economic class. For all the socio-economic classes considered, the highest prevalence of domestic violence occurred amongst the working class and lower middle socio-economic classes. The two most prominent perceived causes of the physical violence were drug and alcohol abuse (37%) and communication differences (16.3%). The power of the study was 0.78 and the all-strata prevalence of domestic violence was 41%. Domestic violence was reported within all socio-economic class groupings but it was most prevalent within the working class and lower middle socio-economic classes. The major perceived cause of domestic violence was alcohol/drug abuse.
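
    The sketch below shows proportional allocation for a stratified sample of the kind described above; the stratum counts, class labels and total sample size are hypothetical, chosen only to make the arithmetic concrete.

```python
# A minimal sketch (hypothetical numbers) of proportional allocation for a
# stratified sample: each stratum's share of the sample matches its share of
# the sampling frame, and members are then drawn at random within each stratum.
frame = {"upper": 300, "upper middle": 700, "lower middle": 1500,
         "working": 2000, "lower": 500}          # households per SES class (hypothetical)
total_sample = 400

frame_total = sum(frame.values())
allocation = {k: round(total_sample * v / frame_total) for k, v in frame.items()}
# Rounding may shift the total by a household or two; adjust the largest stratum if needed.
print(allocation)   # households to select at random within each stratum
```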

  4. Microbiological performance of dairy processing plants is influenced by scale of production and the implemented food safety management system: a case study.

    PubMed

    Opiyo, Beatrice Atieno; Wangoh, John; Njage, Patrick Murigu Kamau

    2013-06-01

    The effects of existing food safety management systems and size of the production facility on microbiological quality in the dairy industry in Kenya were studied. A microbial assessment scheme was used to evaluate 14 dairies in Nairobi and its environs, and their performance was compared based on their size and on whether they were implementing hazard analysis critical control point (HACCP) systems and International Organization for Standardization (ISO) 22000 recommendations. Environmental samples from critical sampling locations, i.e., workers' hands and food contact surfaces, and from end products were analyzed for microbial quality, including hygiene indicators and pathogens. Microbial safety level profiles (MSLPs) were constructed from the microbiological data to obtain an overview of contamination. The maximum MSLP score for environmental samples was 18 (six microbiological parameters, each with a maximum MSLP score of 3) and that for end products was 15 (five microbiological parameters). Three dairies (two large scale and one medium scale; 21% of total) achieved the maximum MSLP scores of 18 for environmental samples and 15 for the end product. Escherichia coli was detected on food contact surfaces in three dairies, all of which were small scale dairies, and the microorganism was also present in end product samples from two of these dairies, an indication of cross-contamination. Microbial quality was poorest in small scale dairies. Most operations in these dairies were manual, with minimal system documentation. Noncompliance with hygienic practices such as hand washing and cleaning and disinfection procedures, which is common in small dairies, directly affects the microbial quality of the end products. Dairies implementing HACCP systems or ISO 22000 recommendations achieved maximum MSLP scores and hence produced safer products.

  5. The influence of secondary processing on the structural relaxation dynamics of fluticasone propionate.

    PubMed

    Depasquale, Roberto; Lee, Sau L; Saluja, Bhawana; Shur, Jagdeep; Price, Robert

    2015-06-01

    This study investigated the structural relaxation of micronized fluticasone propionate (FP) under different lagering conditions and its influence on aerodynamic particle size distribution (APSD) of binary and tertiary carrier-based dry powder inhaler (DPI) formulations. Micronized FP was lagered under low humidity (LH 25°C, 33% RH [relative humidity]), high humidity (HH 25°C, 75% RH) for 30, 60, and 90 days, respectively, and high temperature (HT 60°C, 44% RH) for 14 days. Physicochemical, surface interfacial properties via cohesive-adhesive balance (CAB) measurements and amorphous disorder levels of the FP samples were characterized. Particle size, surface area, and rugosity suggested minimal morphological changes of the lagered FP samples, with the exception of the 90-day HH (HH90) sample. HH90 FP samples appeared to undergo surface reconstruction with a reduction in surface rugosity. LH and HH lagering reduced the levels of amorphous content over 90-day exposure, which influenced the CAB measurements with lactose monohydrate and salmeterol xinafoate (SX). CAB analysis suggested that LH and HH lagering led to different interfacial interactions with lactose monohydrate but an increasing adhesive affinity with SX. HT lagering led to no detectable levels of the amorphous disorder, resulting in an increase in the adhesive interaction with lactose monohydrate. APSD analysis suggested that the fine particle mass of FP and SX was affected by the lagering of the FP. In conclusion, environmental conditions during the lagering of FP may have a profound effect on physicochemical and interfacial properties as well as product performance of binary and tertiary carrier-based DPI formulations.

  6. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    PubMed Central

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. Methods We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF’s State of the World’s Children in 1968–1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968–1989 and 2008) with minimal reductions in sensitivity and negative predictive value. Conclusions LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance. PMID:23378151
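
    As background to the Bayesian extension, the sketch below computes the operating characteristic of a classical LQAS design (sample size n, decision rule d) from the binomial distribution. The design and coverage thresholds shown are a commonly quoted illustration, not values taken from this paper.

```python
# A minimal sketch of a classical LQAS operating characteristic (not the full
# Bayesian-LQAS procedure): with sample size n and decision rule d, an area is
# classified "acceptable" when at least d of n sampled children are vaccinated.
from scipy.stats import binom

def prob_accept(p, n, d):
    """P(classify acceptable | true coverage p) = P(X >= d), X ~ Binomial(n, p)."""
    return binom.sf(d - 1, n, p)

n, d = 19, 13                      # an often-quoted LQAS design (illustrative)
p_upper, p_lower = 0.80, 0.50      # hypothetical upper/lower coverage thresholds

alpha = 1 - prob_accept(p_upper, n, d)   # risk of failing a truly adequate area
beta = prob_accept(p_lower, n, d)        # risk of passing a truly inadequate area
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```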

  7. Use of Bayesian Decision Analysis to Minimize Harm in Patient-Centered Randomized Clinical Trials in Oncology.

    PubMed

    Montazerhodjat, Vahid; Chaudhuri, Shomesh E; Sargent, Daniel J; Lo, Andrew W

    2017-09-14

    Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences.
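
    The sketch below is a deliberately simplified toy version of the idea: grid-search over (alpha, sample size) for the pair that minimizes prior-weighted expected harm to in-trial and post-trial patients. All numerical inputs are hypothetical and the harm model is far cruder than the authors' BDA framework; it only illustrates the shape of the optimization.

```python
# A heavily simplified toy sketch of Bayesian decision analysis for trial design
# (not the authors' model): choose (alpha, n per arm) to minimize expected harm
# under the null and alternative hypotheses, weighted by a prior.
import numpy as np
from scipy.stats import norm

N_post = 100_000      # hypothetical post-trial patient population
p_alt = 0.3           # hypothetical prior probability the therapy works
effect = 0.3          # standardized effect size under the alternative
harm_exposed = 1.0    # relative harm per patient exposed to an ineffective therapy
harm_denied = 1.0     # relative harm per patient denied an effective therapy

def expected_harm(alpha, n_per_arm):
    z_a = norm.ppf(1 - alpha)                              # one-sided test
    power = norm.cdf(effect * np.sqrt(n_per_arm / 2) - z_a)
    # Null: treated arm is exposed in-trial; a false positive exposes the population.
    harm_null = n_per_arm * harm_exposed + alpha * N_post * harm_exposed
    # Alternative: control arm forgoes benefit; a false negative denies the population.
    harm_alt = n_per_arm * harm_denied + (1 - power) * N_post * harm_denied
    return (1 - p_alt) * harm_null + p_alt * harm_alt

grid = [(a, n) for a in (0.005, 0.025, 0.05, 0.1, 0.2) for n in range(50, 1001, 50)]
best = min(grid, key=lambda g: expected_harm(*g))
print("toy BDA-optimal (alpha, n per arm):", best)
```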

  8. Use of Bayesian Decision Analysis to Minimize Harm in Patient-Centered Randomized Clinical Trials in Oncology

    PubMed Central

    Montazerhodjat, Vahid; Chaudhuri, Shomesh E.; Sargent, Daniel J.

    2017-01-01

    Importance Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? Objective To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. Data Sources We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. Study Selection The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). Data Extraction and Synthesis The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Main Outcomes and Measures Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. Results We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Conclusions and Relevance Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences. PMID:28418507

  9. Visualization of three pathways for macromolecule transport across cultured endothelium and their modification by flow.

    PubMed

    Ghim, Mean; Alpresa, Paola; Yang, Sung-Wook; Braakman, Sietse T; Gray, Stephen G; Sherwin, Spencer J; van Reeuwijk, Maarten; Weinberg, Peter D

    2017-11-01

    Transport of macromolecules across vascular endothelium and its modification by fluid mechanical forces are important for normal tissue function and in the development of atherosclerosis. However, the routes by which macromolecules cross endothelium, the hemodynamic stresses that maintain endothelial physiology or trigger disease, and the dependence of transendothelial transport on hemodynamic stresses are controversial. We visualized pathways for macromolecule transport and determined the effect on these pathways of different types of flow. Endothelial monolayers were cultured under static conditions or on an orbital shaker producing different flow profiles in different parts of the wells. Fluorescent tracers that bound to the substrate after crossing the endothelium were used to identify transport pathways. Maps of tracer distribution were compared with numerical simulations of flow to determine effects of different shear stress metrics on permeability. Albumin-sized tracers dominantly crossed the cultured endothelium via junctions between neighboring cells, high-density lipoprotein-sized tracers crossed at tricellular junctions, and low-density lipoprotein-sized tracers crossed through cells. Cells aligned close to the angle that minimized shear stresses across their long axis. The rate of paracellular transport under flow correlated with the magnitude of these minimized transverse stresses, whereas transport across cells was uniformly reduced by all types of flow. These results contradict the long-standing two-pore theory of solute transport across microvessel walls and the consensus view that endothelial cells align with the mean shear vector. They suggest that endothelial cells minimize transverse shear, supporting its postulated proatherogenic role. Preliminary data show that similar tracer techniques are practicable in vivo. NEW & NOTEWORTHY Solutes of increasing size crossed cultured endothelium through intercellular junctions, through tricellular junctions, or transcellularly. Cells aligned to minimize the shear stress acting across their long axis. Paracellular transport correlated with the level of this minimized shear, but transcellular transport was reduced uniformly by flow regardless of the shear profile. Copyright © 2017 the American Physiological Society.

  10. Hack's relation and optimal channel networks: The elongation of river basins as a consequence of energy minimization

    NASA Astrophysics Data System (ADS)

    Ijjasz-Vasquez, Ede J.; Bras, Rafael L.; Rodriguez-Iturbe, Ignacio

    1993-08-01

    As pointed out by Hack (1957), river basins tend to become longer and narrower as their size increases. This work shows that this property may be partially regarded as a consequence of competition and minimization of energy expenditure in river basins.

  11. Pupillary and Heart Rate Reactivity in Children with Minimal Brain Dysfunction

    ERIC Educational Resources Information Center

    Zahn, Theodore P.; And Others

    1978-01-01

    In an attempt to replicate and extend previous findings on autonomic arousal and responsivity in children with minimal brain dysfunction (MBD), pupil size, heart rate, skin conductance, and skin temperature were recorded from 32 MBD and 45 control children (6-13 years old). (Author/CL)

  12. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF LOCKING DEVICES (EPA/600/S-95/013)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  13. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF FOOD SERVICE EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  14. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize data file size are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…

  15. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF BOURBON WHISKEY (EPA/600/S-95/010

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  16. Spray spectrum modifications through changes in airspeed to minimize drift

    USDA-ARS?s Scientific Manuscript database

    Management of droplet size is one of the key components to minimizing spray drift, which can be accomplished in-flight by changing airspeed. Studies were conducted measuring spray droplet spectra parameters across airspeeds ranging from 100-140 mph (in 5 mph increments). In general the volume medi...

  17. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF POWER SUPPLIES (EPA/600/S-95/025)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  18. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF METAL FASTENERS (EPA/600/S-95/016)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  19. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION FOR A MANUFACTURER OF METAL FASTENERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  20. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF REBUILT INDUSTRIAL CRANKSHAFTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  1. A semi-automated Raman micro-spectroscopy method for morphological and chemical characterizations of microplastic litter.

    PubMed

    L, Frère; I, Paul-Pont; J, Moreau; P, Soudant; C, Lambert; A, Huvet; E, Rinnert

    2016-12-15

    Every step of microplastic analysis (collection, extraction and characterization) is time-consuming, representing an obstacle to the implementation of large scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of a large quantity of microplastic in a time-effective way with minimal machine operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of particles was performed in <3h. The method was then applied to a larger environmental sample (n=962 particles). The identification rate was 75% and significantly decreased as a function of particle size. Microplastics represented 71% of the identified particles and significant size differences were observed: polystyrene was mainly found in the 2-5mm range (59%), polyethylene in the 1-2mm range (40%) and polypropylene in the 0.335-1mm range (42%). Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew

    PubMed Central

    Der, Ricky; Plotkin, Joshua B.

    2014-01-01

    We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
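
    For orientation, the classical Wright–Fisher (Kingman coalescent) baseline against which the Λ-processes are compared has the familiar expected number of segregating sites; this is the standard textbook result, not one of the paper's Λ-process formulas.

```latex
% Standard Wright--Fisher (Kingman coalescent) baseline: expected number of
% segregating sites S_n in a sample of size n with scaled mutation rate \theta.
\[
  \mathbb{E}[S_n] \;=\; \theta \sum_{i=1}^{n-1} \frac{1}{i}
  \;\sim\; \theta \ln n \qquad (n \to \infty).
\]
```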

  3. Estimating pore and cement volumes in thin section

    USGS Publications Warehouse

    Halley, R.B.

    1978-01-01

    Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility + or - 3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.

  4. The Role of Grain Size on Neutron Irradiation Response of Nanocrystalline Copper

    PubMed Central

    Mohamed, Walid; Miller, Brandon; Porter, Douglas; Murty, Korukonda

    2016-01-01

    The role of grain size on the developed microstructure and mechanical properties of neutron irradiated nanocrystalline copper was investigated by comparing the radiation response of material to the conventional micrograined counterpart. Nanocrystalline (nc) and micrograined (MG) copper samples were subjected to a range of neutron exposure levels from 0.0034 to 2 dpa. At all damage levels, the response of MG-copper was governed by radiation hardening manifested by an increase in strength with accompanying ductility loss. Conversely, the response of nc-copper to neutron irradiation exhibited a dependence on the damage level. At low damage levels, grain growth was the primary response, with radiation hardening and embrittlement becoming the dominant responses with increasing damage levels. Annealing experiments revealed that grain growth in nc-copper is composed of both thermally-activated and irradiation-induced components. Tensile tests revealed minimal change in the source hardening component of the yield stress in MG-copper, while the source hardening component was found to decrease with increasing radiation exposure in nc-copper. PMID:28773270

  5. Performance Evaluation of Telemedicine System based on multicasting over Heterogeneous Network.

    PubMed

    Yun, H Y; Yoo, S K; Kim, D K; Rim Kim, Sung

    2005-01-01

    For appropriate diagnosis, medical data such as high-quality images of the patient's affected area, vital signs, patient information, and teleconferencing data for communication between specialists must be transmitted. After the patient and specialists connect to the center, the sender acquires patient data and transmits it to the center through the TCP/IP protocol. Data transmitted to the center is copied from the transmission buffer according to the number of listeners and retransmitted to each connected specialist. During transmission of medical data over a network, delay and loss occur with changes in buffer size, packet size, number of users, and type of network. Because ADSL carries the greatest risk of delay, the buffer size should first be set to 1 Mbyte to minimize transmission delay, and each packet size should be set according to the MTU size to maximize network efficiency. Also, the number of listeners should be limited to fewer than six. In the experiments, data transmission proceeded smoothly on all commonly used networks (ADSL, VDSL, WLAN, and LAN), but the possibility of delay was greatest on ADSL, which has the most constrained bandwidth. To minimize the possibility of delay, adjustments such as buffer size, number of receivers, and packet size are needed.
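
    A minimal sketch of the two tuning knobs discussed above (a 1 Mbyte send buffer and MTU-sized application writes) using the standard socket API; the endpoint address is a placeholder and this is not the paper's telemedicine system.

```python
# A minimal sketch of configuring a TCP sender socket with a 1 MB send buffer
# and MTU-sized application writes, the two tuning knobs discussed above.
import socket

MTU_PAYLOAD = 1460             # typical TCP payload for a 1500-byte Ethernet MTU
SEND_BUFFER = 1 * 1024 * 1024  # 1 Mbyte send buffer, as recommended above

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SEND_BUFFER)

# Hypothetical endpoint of the relay center; replace with the real address.
# sock.connect(("center.example.org", 5000))

def send_in_mtu_chunks(sock, data: bytes):
    """Write data in MTU-sized chunks so each write maps onto roughly one packet."""
    for i in range(0, len(data), MTU_PAYLOAD):
        sock.sendall(data[i:i + MTU_PAYLOAD])
```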

  6. A novel approach for the development of tiered use biological criteria for rivers and streams in an ecologically diverse landscape.

    PubMed

    Bouchard, R William; Niemela, Scott; Genet, John A; Yoder, Chris O; Sandberg, John; Chirhart, Joel W; Feist, Mike; Lundeen, Benjamin; Helwig, Dan

    2016-03-01

    Water resource protection goals for aquatic life are often general and can result in under-protection of some high quality water bodies and unattainable expectations for other water bodies. More refined aquatic life goals known as tiered aquatic life uses (TALUs) provide a framework to designate uses by setting protective goals for high quality water bodies and establishing attainable goals for water bodies altered by legally authorized legacy activities (e.g., channelization). Development of biological criteria or biocriteria typically requires identification of a set of least- or minimally-impacted reference sites that are used to establish a baseline from which goals are derived. Under a more refined system of stream types and aquatic life use goals, an adequate set of reference sites is needed to account for the natural variability of aquatic communities (e.g., landscape differences, thermal regime, and stream size). To develop sufficient datasets, Minnesota employed a reference condition approach in combination with an approach based on characterizing a stream's response to anthropogenic disturbance through development of a Biological Condition Gradient (BCG). These two approaches allowed for the creation of ecologically meaningful and consistent biocriteria within a more refined stream typology and solved issues related to small sample sizes and poor representation of minimally- or least-disturbed conditions for some stream types. Implementation of TALU biocriteria for Minnesota streams and rivers will result in consistent and protective goals that address fundamental differences among waters in terms of their potential for restoration.

  7. Determining the minimal clinically important difference criteria for the Multidimensional Fatigue Inventory in a radiotherapy population.

    PubMed

    Purcell, Amanda; Fleming, Jennifer; Bennett, Sally; Burmeister, Bryan; Haines, Terry

    2010-03-01

    The Multidimensional Fatigue Inventory (MFI) is a commonly used cancer-related fatigue assessment tool. Unlike other fatigue assessments, there are no published minimal clinically important difference (MCID) criteria for the MFI in cancer populations. MCID criteria determine the smallest change in scores that can be regarded as important, allowing clinicians and researchers to interpret the meaning of changes in patient's fatigue scores. This research aims to improve the clinical utility of the MFI by establishing MCID criteria for the MFI sub-scales in a radiotherapy population. Two hundred ten patients undergoing radiotherapy were recruited to a single-centre prospective cohort study. Patients were assessed at three time points, at the start of radiotherapy, the end of radiotherapy and 6 weeks after radiotherapy completion. Assessment consisted of four clinically relevant constructs: (1) treatment impact on fatigue, (2) health-related quality of life, (3) performance status and (4) occupational productivity. These constructs were used as external or anchor-based measures to determine MCIDs for each sub-scale of the MFI. Multiple MCIDs were identified, each from a different perspective based on the constructs cited. Researchers seeking to use a generic MCID may wish to use a two-point reference for each MFI sub-scale as it was consistent across the pre- and post-radiotherapy comparison and occupational productivity anchors. MCIDs validated in this study allow better interpretation of changes in MFI sub-scale scores and allow effect size calculations for determining sample size in future studies.
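
    As an example of the kind of sample-size calculation the MCID enables, the sketch below uses a standard two-group formula with the MCID as the target difference; the standard deviation is hypothetical, not a value reported in this study.

```python
# A minimal sketch of using an MCID as the target difference in a standard
# two-group sample-size calculation (illustrative; the SD value is hypothetical).
from scipy.stats import norm

def n_per_group(mcid, sd, alpha=0.05, power=0.80):
    """Two-sided two-sample comparison of means with equal group sizes."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / mcid) ** 2

# Example: detect a 2-point change on an MFI sub-scale, assuming SD = 4.
print(round(n_per_group(mcid=2, sd=4)))   # roughly 63 per group
```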

  8. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
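
    The sketch below computes Wald SPRT decision lines for presence/absence sampling using the parameter values reported above (lower boundary 0.05, upper boundary 0.15, alpha = beta = 0.1); it is a generic illustration of the standard boundary formulas, not the authors' resampling software.

```python
# A minimal sketch of Wald's sequential probability ratio test boundaries for
# presence/absence (binomial) sampling, using the parameters reported above.
import math

def sprt_lines(p0, p1, alpha, beta):
    """Return slope s and intercepts (h0, h1) of the SPRT decision lines."""
    k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    s = math.log((1 - p0) / (1 - p1)) / k
    h1 = math.log((1 - beta) / alpha) / k   # treat when count >= s*n + h1
    h0 = math.log((1 - alpha) / beta) / k   # stop (no treatment) when count <= s*n - h0
    return s, h0, h1

s, h0, h1 = sprt_lines(0.05, 0.15, 0.10, 0.10)
for n in (10, 20, 30, 40, 50):              # plants inspected so far
    lo, hi = s * n - h0, s * n + h1
    print(f"n={n:3d}: keep sampling while {lo:5.2f} < infested plants < {hi:5.2f}")
```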

  9. Enhanced techniques for the measurement of ultra-low level (pg and fg) actinide analysis by ICP-MS for forensic and geologic applications

    NASA Astrophysics Data System (ADS)

    Pollington, A. D.; Kinman, W.; Hanson, S. K.

    2014-12-01

    Recent advances in mass spectrometry have led to an improved ability to measure high precision isotope ratios at increasingly low analyte concentrations. Combining techniques for enhanced ionization with better counting of small ion beams, we routinely measure isotope ratios on 100's of pg uranium samples and ≤10 pg plutonium samples, with relative standard deviations of 1‰ on major isotope ratios and 10‰ on minor ratios achievable. With slightly larger samples (≤1 ng total U), these precisions can be as low as 0.01‰ (10 ppm) and 1‰, respectively. These techniques can be applied both to nuclear forensics questions where only a small amount of sample is available, and to geologic questions such as U-Pb or U-Th disequilibrium geochronology on single small crystals or on microsampled domains from within a heterogeneous sample. The analytical setup is a Cetac Aridus II desolvating nebulizer interfaced with a ThermoScientific Neptune Plus equipped with a jet-type sample cone and x-type skimmer cone. The combination of the desolvating nebulizer with the enhanced cone setup leads to an increase in sensitivity on the order of 10x that of a standard glass spray chamber (~1000 V/ppm U). The Neptune Plus is equipped with 9 Faraday cups and 5 electron multipliers (two behind RPQ energy filters for improved abundance sensitivity). This allows for the simultaneous collection of all isotopes of either U or Pu with a combination of Faraday cups (e.g., 235U and 238U) and electron multipliers (e.g., 234U and 236U), with other configurations also available (e.g., 235U and 238U can instead be measured on electron multipliers for small samples). As sample sizes become small, contributions from environmental blanks, as well as interfering species, become increasingly important concerns. In this study, we will present data on efforts to minimize the contribution of environmental U using scaled-down chemical procedures, as well as the effect of polyatomic species on the precision and accuracy of actinide isotope measurements and the procedures that can be applied to minimize interferences.

  10. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages: it is a non-destructive technique, the pre-treatments needed for samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows several constituents or parameters to be analysed simultaneously from the same spectrum once it is obtained. A necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is to repopulate the library and then recalibrate the models. In this work we studied the changes in prediction accuracy resulting from the successive addition of repopulation samples. In general, calibrations with a high number of samples and high diversity are desired, but we hypothesized that calibrations with fewer samples (smaller size) would absorb the spectral characteristics of the target site more easily. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we therefore also studied the effect of calibration size on the accuracy of predictions from the repopulated models. In this study we used those spectra of our library that contained soil Kjeldahl nitrogen (NKj) data (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of library samples were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes. We used partial least squares regression and leave-one-out cross-validation as methods of calibration. Two methods were used to select the different quantities (model sizes) of samples: (1) based on characteristics of spectra (BCS) and (2) based on NKj values of samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, so that 20 samples were added in total. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content of those target-site samples not included in the repopulation. To measure the accuracy of the predictions, r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, only small differences were found between results obtained with BCS and BVS models. We observed that repopulation of the models increased the r2 of the predictions in sites 1 and 3. Repopulation caused little change in the r2 of the predictions in sites 2 and 4, possibly due to the high initial values (r2 > 0.90 using non-repopulated models).
As a consequence of repopulation, the RMSEP decreased in all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4, and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged in order to describe the main patterns. The r2 of predictions obtained with larger models was not higher than that obtained with smaller models. After repopulation, the RMSEP of predictions using the smaller models (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with the larger ones (75 and 100%), indicating that small models can easily integrate the variability of the soils from the target site. The results suggest that small calibrations could be repopulated and thereby "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are at odds with the idea of global models. These results could encourage the wider adoption of this technique, because very large databases do not seem to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors thank "Bancaja-UMH" for financial support of the project "NIRPROS".
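
    For illustration, the following minimal sketch (Python with scikit-learn, using synthetic data in place of the NIR spectra; nothing here reflects the authors' actual library, sites or parameter choices) shows the kind of repopulation experiment described above: fit a PLS calibration on a library subset, add target-site samples two at a time, recalibrate, and track the leave-one-out cross-validation error.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(0)
      X_lib, y_lib = rng.normal(size=(150, 50)), rng.normal(size=150)    # stand-in library spectra / NKj values
      X_site, y_site = rng.normal(size=(40, 50)), rng.normal(size=40)    # stand-in target-site samples

      def loo_rmsep(X, y, n_components=5):
          """Leave-one-out cross-validated RMSEP of a PLS calibration."""
          pred = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=LeaveOneOut())
          return float(np.sqrt(np.mean((pred.ravel() - y) ** 2)))

      # Start from a "small" calibration (here 20% of the stand-in library) and repopulate it
      X_cal, y_cal = X_lib[:30].copy(), y_lib[:30].copy()
      for step in range(10):                      # add 2 target-site samples per step, 20 in total
          new = slice(2 * step, 2 * step + 2)
          X_cal = np.vstack([X_cal, X_site[new]])
          y_cal = np.concatenate([y_cal, y_site[new]])
          print(f"step {step + 1}: n = {len(y_cal)}, LOO RMSEP = {loo_rmsep(X_cal, y_cal):.3f}")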

  11. Squamate hatchling size and the evolutionary causes of negative offspring size allometry.

    PubMed

    Meiri, S; Feldman, A; Kratochvíl, L

    2015-02-01

    Although fecundity selection is ubiquitous, in an overwhelming majority of animal lineages small species produce a smaller number of offspring per clutch. In this context, eggs, hatchlings and neonates are absolutely larger, but smaller relative to adult body size, in larger species. The evolutionary causes of this widespread phenomenon have not been fully explored. The negative offspring size allometry can result from processes limiting maximal egg/offspring size, forcing larger species to produce relatively smaller offspring ('upper limit'), or from a limit on minimal egg/offspring size, forcing smaller species to produce relatively larger offspring ('lower limit'). Several reptile lineages have invariant clutch sizes, with females always laying either one or two eggs per clutch. These lineages offer an interesting perspective on the general evolutionary forces driving negative offspring size allometry, because an important selective factor, fecundity selection within a single clutch, is eliminated. Under the upper limit hypotheses, large offspring should be selected against in lineages with invariant clutch sizes as well, and these lineages should therefore exhibit the same, or shallower, offspring size allometry as lineages with variable clutch size. On the other hand, the lower limit hypotheses would allow lineages with invariant clutch sizes to have steeper offspring size allometries. Using an extensive data set on the hatchling and female sizes of >1800 species of squamates, we document that negative offspring size allometry is widespread in lizards and snakes with variable clutch sizes and that some lineages with invariant clutch sizes have unusually steep offspring size allometries. These findings suggest that the negative offspring size allometry is driven by a constraint on minimal offspring size, which scales with negative allometry. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  12. Demographic history of an elusive carnivore: using museums to inform management

    PubMed Central

    Holbrook, Joseph D; DeYoung, Randy W; Tewes, Michael E; Young, John H

    2012-01-01

    Elusive carnivores present a challenge to managers because traditional survey methods are not suitable. We applied a genetic approach using museum specimens to examine how historical and recent conditions influenced the demographic history of Puma concolor in western and southern Texas, USA. We used 10 microsatellite loci and indexed population trends by estimating historical and recent genetic diversity, genetic differentiation and effective population size. Mountain lions in southern Texas exhibited a 9% decline in genetic diversity, whereas diversity remained stable in western Texas. Genetic differentiation between western and southern Texas was minimal historically (FST = 0.04, P < 0.01), but increased 2–2.5 times in our recent sample. An index of genetic drift for southern Texas was seven to eight times that of western Texas, presumably contributing to the current differentiation between western and southern Texas. Furthermore, southern Texas exhibited a >50% temporal decline in effective population size, whereas western Texas showed no change. Our results illustrate that population declines and genetic drift have occurred in southern Texas, likely because of contemporary habitat loss and predator control. Population monitoring may be needed to ensure the persistence of mountain lions in the southern Texas region. This study highlights the utility of sampling museum collections to examine demographic histories and inform wildlife management. PMID:23028402

  13. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
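
    As a concrete illustration of the combination step, the sketch below implements the independent-case Lancaster procedure: each p-value is mapped to a chi-square quantile whose degrees of freedom act as its weight, and the sum is referred to a chi-square distribution with the total degrees of freedom. The gene-level p-values and weights are hypothetical, and the correlation adjustment used in the paper is not shown.

      import numpy as np
      from scipy import stats

      def lancaster_combine(p_values, weights):
          """Combine independent p-values with the Lancaster procedure.

          Each p-value is transformed to a chi-square quantile whose degrees of
          freedom equal its weight; the sum is compared to a chi-square
          distribution with the total degrees of freedom.
          """
          p = np.asarray(p_values, dtype=float)
          w = np.asarray(weights, dtype=float)
          t = np.sum(stats.chi2.ppf(1.0 - p, df=w))
          return stats.chi2.sf(t, df=w.sum())

      # Hypothetical gene-level p-values for one pathway, weighted e.g. by gene size
      print(lancaster_combine([0.01, 0.20, 0.03], weights=[2, 2, 4]))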

  14. Testing Pneumonia Vaccines in the Elderly: Determining a Case Definition for Pneumococcal Pneumonia in the Absence of a Gold Standard.

    PubMed

    Jokinen, Jukka; Snellman, Marja; Palmu, Arto A; Saukkoriipi, Annika; Verlant, Vincent; Pascal, Thierry; Devaster, Jeanne-Marie; Hausdorff, William P; Kilpi, Terhi M

    2018-06-01

    Clinical assessments of vaccines to prevent pneumococcal community-acquired pneumonia (CAP) require sensitive and specific case definitions, but there is no gold standard diagnostic test. To develop a new case definition suitable for vaccine efficacy studies, we applied latent class analysis (LCA) to the results from 7 diagnostic tests for pneumococcal etiology on clinical specimens from 323 elderly persons with radiologically confirmed pneumonia enrolled in the Finnish Community-Acquired Pneumonia Epidemiology study during 2005-2007. Compared with the conventional use of LCA, which is mainly to determine sensitivities and specificities of different tests, we instead used LCA as an appropriate instrument to predict the probability of pneumococcal etiology for each CAP case based on individual test profiles, and we used the predictions to minimize the sample size that would be needed for a vaccine efficacy trial. When compared with the conventional laboratory criteria of encapsulated pneumococci in culture, in blood culture or high-quality sputum culture, or urine antigen positivity, our optimized case definition for pneumococcal CAP resulted in a trial sample size that was almost 20,000 subjects smaller. We believe that the novel application of LCA detailed here to determine a case definition for pneumococcal CAP could also be similarly applied to other diseases without a gold standard.
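
    The following toy sketch (with invented sensitivities, specificities and prevalence; not the fitted values from the study) shows how a two-class latent class model with conditional independence can be turned into a per-patient probability of pneumococcal etiology from an individual test profile, which is the quantity the authors use to build their case definition.

      import numpy as np

      prevalence = 0.30                                            # hypothetical P(pneumococcal CAP)
      sens = np.array([0.60, 0.50, 0.70, 0.40, 0.55, 0.65, 0.45])  # hypothetical per-test sensitivities
      spec = np.array([0.95, 0.90, 0.92, 0.97, 0.90, 0.93, 0.96])  # hypothetical per-test specificities

      def posterior_probability(profile):
          """Probability of pneumococcal etiology given a 0/1 result for each of the 7 tests."""
          x = np.asarray(profile)
          like_pos = np.prod(np.where(x == 1, sens, 1.0 - sens))   # P(profile | pneumococcal)
          like_neg = np.prod(np.where(x == 1, 1.0 - spec, spec))   # P(profile | not pneumococcal)
          num = prevalence * like_pos
          return num / (num + (1.0 - prevalence) * like_neg)

      print(posterior_probability([1, 0, 1, 0, 0, 1, 0]))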

  15. A framework for measurement and harmonization of pediatric multiple sclerosis etiologic research studies: The Pediatric MS Tool-Kit.

    PubMed

    Magalhaes, Sandra; Banwell, Brenda; Bar-Or, Amit; Fortier, Isabel; Hanwell, Heather E; Lim, Ming; Matt, Georg E; Neuteboom, Rinze F; O'Riordan, David L; Schneider, Paul K; Pugliatti, Maura; Shatenstein, Bryna; Tansey, Catherine M; Wassmer, Evangeline; Wolfson, Christina

    2018-06-01

    While studying the etiology of multiple sclerosis (MS) in children has several methodological advantages over studying etiology in adults, studies are limited by small sample sizes. Using a rigorous methodological process, we developed the Pediatric MS Tool-Kit, a measurement framework that includes a minimal set of core variables to assess etiological risk factors. We solicited input from the International Pediatric MS Study Group to select three risk factors: environmental tobacco smoke (ETS) exposure, sun exposure, and vitamin D intake. To develop the Tool-Kit, we used a Delphi study involving a working group of epidemiologists, neurologists, and content experts from North America and Europe. The Tool-Kit includes six core variables to measure ETS, six to measure sun exposure, and six to measure vitamin D intake. The Tool-Kit can be accessed online (www.maelstrom-research.org/mica/network/tool-kit). The goals of the Tool-Kit are to enhance exposure measurement in newly designed pediatric MS studies and the comparability of results across studies, and, in the longer term, to facilitate harmonization of studies, a methodological approach that can be used to circumvent issues of small sample sizes. We believe the Tool-Kit will prove to be a valuable resource to guide pediatric MS researchers in developing study-specific questionnaires.

  16. Limited genomic consequences of mixed mating in the recently derived sister species pair, Collinsia concolor and Collinsia parryi.

    PubMed

    Salcedo, A; Kalisz, S; Wright, S I

    2014-07-01

    Highly selfing species often show reduced effective population sizes and reduced selection efficacy. Whether mixed mating species, which produce both self and outcross progeny, show similar patterns of diversity and selection remains less clear. Examination of patterns of molecular evolution and levels of diversity in species with mixed mating systems can be particularly useful for investigating the relative importance of linked selection and demographic effects on diversity and the efficacy of selection, as the effects of linked selection should be minimal in mixed mating populations, although severe bottlenecks tied to founder events could still be frequent. To begin to address this gap, we assembled and analysed the transcriptomes of individuals from a recently diverged mixed mating sister species pair in the self-compatible genus Collinsia. The de novo assembly of 52 and 37 Mbp C. concolor and C. parryi transcriptomes resulted in ~40 000 and ~55 000 contigs, respectively, both with an average contig size of ~945. We observed a high ratio of shared polymorphisms to fixed differences in the species pair and minimal differences between species in the ratio of synonymous to replacement substitutions or codon usage bias, implying comparable effective population sizes throughout species divergence. Our results suggest that differences in effective population size and selection efficacy in mixed mating taxa shortly after their divergence may be minimal and are likely influenced by fluctuating mating systems and population sizes. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  17. Minimally invasive 'step-up approach' versus maximal necrosectomy in patients with acute necrotising pancreatitis (PANTER trial): design and rationale of a randomised controlled multicenter trial [ISRCTN13975868].

    PubMed

    Besselink, Marc G H; van Santvoort, Hjalmar C; Nieuwenhuijs, Vincent B; Boermeester, Marja A; Bollen, Thomas L; Buskens, Erik; Dejong, Cornelis H C; van Eijck, Casper H J; van Goor, Harry; Hofker, Sijbrand S; Lameris, Johan S; van Leeuwen, Maarten S; Ploeg, Rutger J; van Ramshorst, Bert; Schaapherder, Alexander F M; Cuesta, Miguel A; Consten, Esther C J; Gouma, Dirk J; van der Harst, Erwin; Hesselink, Eric J; Houdijk, Lex P J; Karsten, Tom M; van Laarhoven, Cees J H M; Pierie, Jean-Pierre E N; Rosman, Camiel; Bilgen, Ernst Jan Spillenaar; Timmer, Robin; van der Tweel, Ingeborg; de Wit, Ralph J; Witteman, Ben J M; Gooszen, Hein G

    2006-04-11

    The initial treatment of acute necrotizing pancreatitis is conservative. Intervention is indicated in patients with (suspected) infected necrotizing pancreatitis. In the Netherlands, the standard intervention is necrosectomy by laparotomy followed by continuous postoperative lavage (CPL). In recent years several minimally invasive strategies have been introduced. So far, these strategies have never been compared in a randomised controlled trial. The PANTER study (PAncreatitis, Necrosectomy versus sTEp up appRoach) was conceived to yield the evidence needed for a considered policy decision. 88 patients with (suspected) infected necrotizing pancreatitis will be randomly allocated to either group A) minimally invasive 'step-up approach' starting with drainage followed, if necessary, by videoscopic assisted retroperitoneal debridement (VARD) or group B) maximal necrosectomy by laparotomy. Both procedures are followed by CPL. Patients will be recruited from 20 hospitals, including all Dutch university medical centres, over a 3-year period. The primary endpoint is the proportion of patients suffering from postoperative major morbidity and mortality. Secondary endpoints are complications, new onset sepsis, length of hospital and intensive care stay, quality of life and total (direct and indirect) costs. To demonstrate that the 'step-up approach' can reduce the major morbidity and mortality rate from 45 to 16%, with 80% power at 5% alpha, a total sample size of 88 patients was calculated. The PANTER-study is a randomised controlled trial that will provide evidence on the merits of a minimally invasive 'step-up approach' in patients with (suspected) infected necrotizing pancreatitis.

  18. Minimally invasive 'step-up approach' versus maximal necrosectomy in patients with acute necrotising pancreatitis (PANTER trial): design and rationale of a randomised controlled multicenter trial [ISRCTN38327949

    PubMed Central

    Besselink, Marc GH; van Santvoort, Hjalmar C; Nieuwenhuijs, Vincent B; Boermeester, Marja A; Bollen, Thomas L; Buskens, Erik; Dejong, Cornelis HC; van Eijck, Casper HJ; van Goor, Harry; Hofker, Sijbrand S; Lameris, Johan S; van Leeuwen, Maarten S; Ploeg, Rutger J; van Ramshorst, Bert; Schaapherder, Alexander FM; Cuesta, Miguel A; Consten, Esther CJ; Gouma, Dirk J; van der Harst, Erwin; Hesselink, Eric J; Houdijk, Lex PJ; Karsten, Tom M; van Laarhoven, Cees JHM; Pierie, Jean-Pierre EN; Rosman, Camiel; Bilgen, Ernst Jan Spillenaar; Timmer, Robin; van der Tweel, Ingeborg; de Wit, Ralph J; Witteman, Ben JM; Gooszen, Hein G

    2006-01-01

    Background The initial treatment of acute necrotizing pancreatitis is conservative. Intervention is indicated in patients with (suspected) infected necrotizing pancreatitis. In the Netherlands, the standard intervention is necrosectomy by laparotomy followed by continuous postoperative lavage (CPL). In recent years several minimally invasive strategies have been introduced. So far, these strategies have never been compared in a randomised controlled trial. The PANTER study (PAncreatitis, Necrosectomy versus sTEp up appRoach) was conceived to yield the evidence needed for a considered policy decision. Methods/design 88 patients with (suspected) infected necrotizing pancreatitis will be randomly allocated to either group A) minimally invasive 'step-up approach' starting with drainage followed, if necessary, by videoscopic assisted retroperitoneal debridement (VARD) or group B) maximal necrosectomy by laparotomy. Both procedures are followed by CPL. Patients will be recruited from 20 hospitals, including all Dutch university medical centres, over a 3-year period. The primary endpoint is the proportion of patients suffering from postoperative major morbidity and mortality. Secondary endpoints are complications, new onset sepsis, length of hospital and intensive care stay, quality of life and total (direct and indirect) costs. To demonstrate that the 'step-up approach' can reduce the major morbidity and mortality rate from 45 to 16%, with 80% power at 5% alpha, a total sample size of 88 patients was calculated. Discussion The PANTER-study is a randomised controlled trial that will provide evidence on the merits of a minimally invasive 'step-up approach' in patients with (suspected) infected necrotizing pancreatitis. PMID:16606471
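
    The design assumptions stated in both versions of this record (45% versus 16% major morbidity/mortality, 80% power, two-sided alpha of 0.05) can be checked with the standard two-proportion sample-size formula, sketched below. The trial's figure of 88 may additionally reflect continuity corrections or allowance for drop-outs, which are not modelled here.

      from scipy import stats

      def n_per_group(p1, p2, alpha=0.05, power=0.80):
          """Normal-approximation sample size per arm for comparing two proportions."""
          z_a = stats.norm.ppf(1.0 - alpha / 2.0)
          z_b = stats.norm.ppf(power)
          p_bar = (p1 + p2) / 2.0
          num = (z_a * (2.0 * p_bar * (1.0 - p_bar)) ** 0.5
                 + z_b * (p1 * (1.0 - p1) + p2 * (1.0 - p2)) ** 0.5) ** 2
          return num / (p1 - p2) ** 2

      print(2 * n_per_group(0.45, 0.16))   # ~77 in total before any corrections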

  19. A method to determine the acoustic reflection and absorption coefficients of porous media by using modal dispersion in a waveguide.

    PubMed

    Prisutova, Jevgenija; Horoshenkov, Kirill; Groby, Jean-Philippe; Brouard, Bruno

    2014-12-01

    The measurement of acoustic material characteristics using a standard impedance tube method is generally limited to the plane wave regime below the tube cut-on frequency. This implies that the size of the tube and, consequently, the size of the material specimen must remain smaller than half of the wavelength. This paper presents a method that enables the extension of the frequency range beyond the plane wave regime by at least a factor of 3, so that the size of the material specimen can be much larger than the wavelength. The proposed method is based on measuring the sound pressure at different axial locations and applying a spatial Fourier transform. A normal mode decomposition approach is used together with an optimization algorithm to minimize the discrepancy between the measured and predicted sound pressure spectra. This allows the frequency- and angle-dependent reflection and absorption coefficients of the material specimen to be calculated in an extended frequency range. The method has been tested successfully on samples of melamine foam and wood fiber. The measured data are in close agreement with predictions from the equivalent fluid model for the acoustical properties of porous media.
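
    A toy sketch of the core signal-processing step described above: sample the pressure at regularly spaced axial positions and take a spatial Fourier transform, whose peaks sit at the axial wavenumbers of the propagating modes. Two synthetic modes stand in for measured data, and the subsequent model-fitting/optimization stage is not attempted.

      import numpy as np

      n, length = 256, 1.0                                       # number of axial samples, array length (m)
      x = np.arange(n) * (length / n)                            # axial measurement positions
      k1, k2 = 8 * 2 * np.pi / length, 15 * 2 * np.pi / length   # hypothetical modal wavenumbers (rad/m)
      pressure = 1.0 * np.exp(1j * k1 * x) + 0.4 * np.exp(1j * k2 * x)   # synthetic two-mode field

      spectrum = np.fft.fft(pressure)
      k_axis = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
      modes = np.sort(k_axis[np.argsort(np.abs(spectrum))[-2:]])
      print(modes)                                               # recovers ~50.3 and ~94.2 rad/m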

  20. Pyrolytic boron nitride coatings on ceramic yarns and fabrication of insulations

    NASA Technical Reports Server (NTRS)

    Moore, Arthur W.

    1992-01-01

    Pyrolytic boron nitride (PBN) was deposited on Nicalon NL 202 silicon carbide yarns at 1000 to 1200 C with the goal of improving the resistance of the Nicalon to deterioration in an aerodynamic environment at temperatures up to 1000 C. For continuous coating, the yarns were fed through the deposition chamber of a pilot plant sized CVD furnace at a rate of about 2 feet per minute. PBN coatings were obtained by reacting boron trichloride and ammonia gases inside the deposition chamber. Most of the PBN coatings were made at around 1080 C to minimize thermal degradation of the Nicalon. Pressures were typically below 0.1 Torr. The coated yarns were characterized by weight per unit length, tensile strength and modulus, scanning electron microscopy, and scanning Auger microscopy. The PBN coated Nicalon was woven into cloth, but was not entirely satisfactory as a high temperature sizing. Several 13 in. square pieces of Nicalon cloth were coated with PBN in a batch process in a factory sized deposition furnace. Samples of cloth made from the PBN coated Nicalon were sewn into thermal insulation panels, whose performance is being compared with that of panels made using uncoated Nicalon.

  1. Pre-Whaling Genetic Diversity and Population Ecology in Eastern Pacific Gray Whales: Insights from Ancient DNA and Stable Isotopes

    PubMed Central

    Alter, S. Elizabeth; Newsome, Seth D.; Palumbi, Stephen R.

    2012-01-01

    Commercial whaling decimated many whale populations, including the eastern Pacific gray whale, but little is known about how population dynamics or ecology differed prior to these removals. Of particular interest is the possibility of a large population decline prior to whaling, as such a decline could explain the ∼5-fold difference between genetic estimates of prior abundance and estimates based on historical records. We analyzed genetic (mitochondrial control region) and isotopic information from modern and prehistoric gray whales using serial coalescent simulations and Bayesian skyline analyses to test for a pre-whaling decline and to examine prehistoric genetic diversity, population dynamics and ecology. Simulations demonstrate that significant genetic differences observed between ancient and modern samples could be caused by a large, recent population bottleneck, roughly concurrent with commercial whaling. Stable isotopes show minimal differences between modern and ancient gray whale foraging ecology. Using rejection-based Approximate Bayesian Computation, we estimate the size of the population bottleneck at its minimum abundance and the pre-bottleneck abundance. Our results agree with previous genetic studies suggesting the historical size of the eastern gray whale population was roughly three to five times its current size. PMID:22590499
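
    The rejection-based Approximate Bayesian Computation step mentioned above can be illustrated with the toy sketch below: draw candidate bottleneck sizes from a prior, simulate a summary statistic, and keep draws whose simulated value falls close to the observed one. The "simulator" here is a deliberately simple drift approximation with invented numbers, not the coalescent machinery or data used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      ancestral_het = 0.75        # invented "ancient" heterozygosity
      observed_het = 0.70         # invented "modern" heterozygosity
      generations = 20            # invented bottleneck duration

      def simulate_heterozygosity(ne):
          """Expected heterozygosity after `generations` of drift at effective size ne."""
          return ancestral_het * (1.0 - 1.0 / (2.0 * ne)) ** generations

      prior_draws = rng.uniform(50, 5000, size=100_000)            # prior on bottleneck Ne
      sims = simulate_heterozygosity(prior_draws)
      accepted = prior_draws[np.abs(sims - observed_het) < 0.005]  # rejection step
      print(f"posterior median Ne ~ {np.median(accepted):.0f} from {accepted.size} accepted draws")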

  2. Real-Time Measurement of Electronic Cigarette Aerosol Size Distribution and Metals Content Analysis.

    PubMed

    Mikheev, Vladimir B; Brinkman, Marielle C; Granville, Courtney A; Gordon, Sydney M; Clark, Pamela I

    2016-09-01

    Electronic cigarette (e-cigarette) use is increasing worldwide and is highest among both daily and nondaily smokers. E-cigarettes are perceived as a healthier alternative to combustible tobacco products, but their health risk factors have not yet been established; one gap is the lack of data on the aerosol size distribution generated by e-cigarettes. We applied a real-time, high-resolution aerosol differential mobility spectrometer to monitor the evolution of aerosol size and concentration during puff development. Particles generated by e-cigarettes were immediately delivered for analysis with minimal dilution and therefore with minimal sample distortion, which is critically important given the highly dynamic aerosol/vapor mixture inherent to e-cigarette emissions. E-cigarette aerosols normally exhibit a bimodal particle size distribution: nanoparticles (11-25 nm count median diameter) and submicron particles (96-175 nm count median diameter). Each mode has comparable number concentrations (10^7-10^8 particles/cm^3). "Dry puff" tests conducted with no e-cigarette liquid (e-liquid) present in the e-cigarette tank demonstrated that under these conditions only nanoparticles were generated. Analysis of the bulk aerosol collected on the filter showed that e-cigarette emissions contained a variety of metals. E-cigarette aerosol size distribution is different from that of combustible tobacco smoke. E-cigarettes generate high concentrations of nanoparticles, and their chemical content requires further investigation. Despite the small mass of nanoparticles, their toxicological impact could be significant. Toxic chemicals that are attached to the small nanoparticles may have greater adverse health effects than when attached to larger submicron particles. The e-cigarette aerosol size distribution is different from that of combustible tobacco smoke and typically exhibits a bimodal behavior with comparable number concentrations of nanoparticles and submicron particles. While vaping, the user inhales, along with submicron particles, a nano-aerosol consisting of nanoparticles with attached chemicals whose effects have not been fully investigated. The presence of high concentrations of nanoparticles requires nanotoxicological consideration in order to assess the potential health impact of e-cigarettes. The toxicological impact of inhaled nanoparticles could be significant, though not necessarily similar to the biomarkers typical of combustible tobacco smoke. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Real-Time Measurement of Electronic Cigarette Aerosol Size Distribution and Metals Content Analysis

    PubMed Central

    Brinkman, Marielle C.; Granville, Courtney A.; Gordon, Sydney M.; Clark, Pamela I.

    2016-01-01

    Introduction: Electronic cigarette (e-cigarette) use is increasing worldwide and is highest among both daily and nondaily smokers. E-cigarettes are perceived as a healthier alternative to combustible tobacco products, but their health risk factors have not yet been established; one gap is the lack of data on the aerosol size distribution generated by e-cigarettes. Methods: We applied a real-time, high-resolution aerosol differential mobility spectrometer to monitor the evolution of aerosol size and concentration during puff development. Particles generated by e-cigarettes were immediately delivered for analysis with minimal dilution and therefore with minimal sample distortion, which is critically important given the highly dynamic aerosol/vapor mixture inherent to e-cigarette emissions. Results: E-cigarette aerosols normally exhibit a bimodal particle size distribution: nanoparticles (11–25 nm count median diameter) and submicron particles (96–175 nm count median diameter). Each mode has comparable number concentrations (10^7–10^8 particles/cm^3). “Dry puff” tests conducted with no e-cigarette liquid (e-liquid) present in the e-cigarette tank demonstrated that under these conditions only nanoparticles were generated. Analysis of the bulk aerosol collected on the filter showed that e-cigarette emissions contained a variety of metals. Conclusions: E-cigarette aerosol size distribution is different from that of combustible tobacco smoke. E-cigarettes generate high concentrations of nanoparticles, and their chemical content requires further investigation. Despite the small mass of nanoparticles, their toxicological impact could be significant. Toxic chemicals that are attached to the small nanoparticles may have greater adverse health effects than when attached to larger submicron particles. Implications: The e-cigarette aerosol size distribution is different from that of combustible tobacco smoke and typically exhibits a bimodal behavior with comparable number concentrations of nanoparticles and submicron particles. While vaping, the user inhales, along with submicron particles, a nano-aerosol consisting of nanoparticles with attached chemicals whose effects have not been fully investigated. The presence of high concentrations of nanoparticles requires nanotoxicological consideration in order to assess the potential health impact of e-cigarettes. The toxicological impact of inhaled nanoparticles could be significant, though not necessarily similar to the biomarkers typical of combustible tobacco smoke. PMID:27146638

  4. Advanced Design of Dumbbell-shaped Genetic Minimal Vectors Improves Non-coding and Coding RNA Expression.

    PubMed

    Jiang, Xiaoou; Yu, Han; Teo, Cui Rong; Tan, Genim Siu Xian; Goh, Sok Chin; Patel, Parasvi; Chua, Yiqiang Kevin; Hameed, Nasirah Banu Sahul; Bertoletti, Antonio; Patzel, Volker

    2016-09-01

    Dumbbell-shaped DNA minimal vectors lacking nontherapeutic genes and bacterial sequences are considered a stable, safe alternative to viral, nonviral, and naked plasmid-based gene-transfer systems. We investigated novel molecular features of dumbbell vectors aiming to reduce vector size and to improve the expression of noncoding or coding RNA. We minimized small hairpin RNA (shRNA)- or microRNA (miRNA)-expressing dumbbell vectors in size down to 130 bp, generating the smallest genetic expression vectors reported. This was achieved by using a minimal H1 promoter with an integrated transcriptional terminator, transcribing the RNA hairpin structure around the dumbbell loop. Such vectors were generated with high conversion yields using a novel protocol. Minimized shRNA-expressing dumbbells showed accelerated kinetics of delivery and transcription, leading to enhanced gene silencing in human tissue culture cells. In primary human T cells, minimized miRNA-expressing dumbbells revealed higher stability and triggered stronger target gene suppression as compared with plasmids and miRNA mimics. Dumbbell-driven gene expression was enhanced up to 56- or 160-fold by implementation of an intron and the SV40 enhancer compared with control dumbbells or plasmids. Advanced dumbbell vectors may represent one option to close the gap between the durable expression achievable with integrating viral vectors and the short-term effects triggered by naked RNA.

  5. Planning for the Impacts of Highway Relief Routes on Small- and Medium-Size Communities

    DOT National Transportation Integrated Search

    2001-03-01

    This report explores possible strategies for minimizing the negative impacts and maximizing the positive impacts of highway relief routes on small- and medium-size communities in Texas. Planning strategies are identified through a literature search...

  6. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy, an action-intensive movie, Aliens, a luminance-intensive (black-and-white) movie, Road To Utopia, and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly-correlated Gaussian process is used to model the individual frame sizes. Scene-change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell-loss penalty. A patent for this scheme is pending.
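
    A toy sketch of the model's ingredients as described above: a fixed I, B, P pattern within a 15-frame GOP, per-frame-type mean sizes with correlated Gaussian variation, and gamma-distributed scene-change interarrivals. All numeric parameters are invented for illustration; the paper's fitted values are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      gop_pattern = list("IBBPBBPBBPBBPBB")                   # 15-frame group of pictures
      mean_bits = {"I": 400_000, "P": 150_000, "B": 60_000}   # invented mean frame sizes (bits)
      std_bits = {"I": 60_000, "P": 30_000, "B": 15_000}
      rho = 0.7                                               # lag-1 correlation of size deviations

      def simulate_frames(n_gops):
          sizes, z = [], 0.0
          for _ in range(n_gops):
              for ftype in gop_pattern:
                  z = rho * z + np.sqrt(1.0 - rho ** 2) * rng.normal()   # AR(1) deviation shared across frames
                  sizes.append(max(1_000.0, mean_bits[ftype] + std_bits[ftype] * z))
          return np.array(sizes)

      frame_sizes = simulate_frames(100)
      scene_gaps = rng.gamma(shape=2.0, scale=40.0, size=20)  # frames between scene changes
      print(frame_sizes.mean() * 30 / 1e6, "Mbps at 30 frames/s")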

  7. Estimated Mid-Infrared (200-2000 cm-1) Optical Constants of Some Silica Polymorphs

    NASA Astrophysics Data System (ADS)

    Glotch, Timothy; Rossman, G. R.; Michalski, J. R.

    2006-09-01

    We use Lorentz-Lorenz dispersion analysis to model the mid-infrared (200-2000 cm-1) optical constants of opal-A, opal-CT, and tridymite. These minerals, which are all polymorphs of silica (SiO2), are potentially important in the analysis of thermal emission spectra acquired by the Mars Global Surveyor Thermal Emission Spectrometer (MGS-TES) and Mars Exploration Rover Mini-TES instruments in orbit and on the surface of Mars as well as emission spectra acquired by telescopes of planetary disks and dust and debris clouds in young solar systems. Mineral samples were crushed, washed, and sieved and emissivity spectra of the >100 μm size fraction were acquired at Arizona State University's emissivity spectroscopy laboratory. Therefore, the spectra and optical constants are representative of all crystal orientations. Ideally, emissivity or reflectance measurements of single polished crystals or fine powders pressed to compact disks are used for the determination of mid-infrared optical constants. Measurements of these types of surfaces eliminate or minimize multiple reflections, providing a specular surface. Our measurements, however, likely produce a reasonable approximation of specular emissivity or reflectance, as the minimum particle size is greater than the maximum wavelength of light measured. Future work will include measurement of pressed disks of powdered samples in emission and reflection, and when possible, small single crystals under an IR reflectance microscope, which will allow us to assess the variability of spectra and optical constants under different sample preparation and measurement conditions.
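
    The dispersion-analysis step can be illustrated with a minimal sketch of a classical Lorentz oscillator model: a sum of damped oscillators gives the complex dielectric function, from which the optical constants n and k follow. The oscillator parameters below are placeholders, not fitted values for opal-A, opal-CT or tridymite.

      import numpy as np

      def lorentz_nk(nu, eps_inf, oscillators):
          """nu: wavenumbers (cm^-1); oscillators: iterable of (strength, center, damping)."""
          eps = np.full_like(nu, eps_inf, dtype=complex)
          for s, nu0, gamma in oscillators:
              eps += s * nu0 ** 2 / (nu0 ** 2 - nu ** 2 - 1j * gamma * nu)
          n_complex = np.sqrt(eps)
          return n_complex.real, n_complex.imag               # optical constants n and k

      nu = np.linspace(200.0, 2000.0, 1801)
      n, k = lorentz_nk(nu, eps_inf=2.1, oscillators=[(0.7, 1090.0, 60.0), (0.2, 470.0, 40.0)])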

  8. Delayed reward discounting and addictive behavior: a meta-analysis.

    PubMed

    MacKillop, James; Amlung, Michael T; Few, Lauren R; Ray, Lara A; Sweet, Lawrence H; Munafò, Marcus R

    2011-08-01

    Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). From the total comparisons identified, a small magnitude effect was evident (d= .15; p< .00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d= .58; p< .00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d= .61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed.
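
    The pooling step in a meta-analysis of this kind can be sketched with a DerSimonian-Laird random-effects model, as below; the per-study effect sizes and variances are invented for illustration and are not the comparisons analysed in this review.

      import numpy as np

      d = np.array([0.7, 0.4, 0.9, 0.2, 0.6])        # invented per-study effect sizes (Cohen's d)
      v = np.array([0.05, 0.02, 0.08, 0.01, 0.04])   # invented within-study variances

      w_fixed = 1.0 / v
      d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)
      q = np.sum(w_fixed * (d - d_fixed) ** 2)                         # Cochran's Q (heterogeneity)
      c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
      tau2 = max(0.0, (q - (len(d) - 1)) / c)                          # between-study variance
      w_re = 1.0 / (v + tau2)                                          # random-effects weights
      d_pooled = np.sum(w_re * d) / np.sum(w_re)
      se = np.sqrt(1.0 / np.sum(w_re))
      print(f"pooled d = {d_pooled:.2f} (95% CI {d_pooled - 1.96 * se:.2f} to {d_pooled + 1.96 * se:.2f})")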

  9. Delayed reward discounting and addictive behavior: a meta-analysis

    PubMed Central

    Amlung, Michael T.; Few, Lauren R.; Ray, Lara A.; Sweet, Lawrence H.; Munafò, Marcus R.

    2011-01-01

    Rationale Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. Objectives The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Methods Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). Results From the total comparisons identified, a small magnitude effect was evident (d=.15; p<.00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d=.58; p<.00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d=.61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. Conclusions These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed. PMID:21373791

  10. Forensic analysis of laser printed ink by X-ray fluorescence and laser-excited plume fluorescence.

    PubMed

    Chu, Po-Chun; Cai, Bruno Yue; Tsoi, Yeuk Ki; Yuen, Ronald; Leung, Kelvin S Y; Cheung, Nai-Ho

    2013-05-07

    We demonstrated a minimally destructive two-tier approach for multielement forensic analysis of laser-printed ink. The printed document was first screened using a portable X-ray fluorescence (XRF) probe. If the results were not conclusive, a laser microprobe was then deployed. The laser probe was based on a two-pulse scheme: the first laser pulse ablated a thin layer of the printed ink; the second laser pulse at 193 nm induced multiple analytes in the desorbed ink to fluoresce. We analyzed four brands of black toners. The toners were printed on paper in the form of patches or letters or overprinted on another ink. The XRF probe could sort the four brands if the printed letters were larger than font size 20. It could not tell the printing sequence in the case of overprints. The laser probe was more discriminatory; it could sort the toner brands and reveal the overprint sequence regardless of font size, while the sampled area was not visibly different from neighboring areas even under the microscope. In terms of general analytical performance, the laser probe featured a lateral resolution of tens of micrometers, a depth resolution of tens to hundreds of nanometers, and atto-mole mass detection limits. It could handle samples of arbitrary size and shape, was air compatible, and required no sample pretreatment. It will prove useful whenever high-resolution and high-sensitivity 3D elemental mapping is required.

  11. Mechanisms and kinetics of granulated sewage sludge combustion.

    PubMed

    Kijo-Kleczkowska, Agnieszka; Środa, Katarzyna; Kosowska-Golachowska, Monika; Musiał, Tomasz; Wolski, Krzysztof

    2015-12-01

    This paper investigates sewage sludge disposal methods with particular emphasis on combustion as the priority disposal method. Sewage sludge incineration is an attractive option because it minimizes odour, significantly reduces the volume of the starting material and thermally destroys organic and toxic components of the off pads. Additionally, the ashes could potentially be utilized. Currently, as many as 11 plants use sewage sludge as fuel in Poland; thus, this technology must be further developed in Poland while considering the benefits of co-combustion with other fuels. This paper presents the results of experimental studies aimed at determining the mechanisms (defining the fuel combustion region by studying the effects of process parameters, including the size of the fuel sample, temperature in the combustion chamber and air velocity, on combustion) and kinetics (measurement of fuel temperature and mass changes) of fuel combustion in an air stream under different thermal conditions and flow rates. The combustion of the sludge samples in an air flow at temperatures between 800 and 900°C is a kinetic-diffusion process governed by the sample size, the temperature of its environment, and the air velocity. The adopted process parameters had significant impacts on the ignition time and temperature of the volatiles, the combustion time of the volatiles, the time to reach the maximum fuel-surface temperature, that maximum temperature, the char combustion time, and the total process time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and the misfit under-estimated when using the adjusted sample size function. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
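
    The two strategies compared in the abstract can be illustrated with the toy sketch below: a simple linear rescaling of the chi-square statistic to a smaller nominal sample size versus recomputing the statistic on an actual random subsample. The data-generating model and the particular adjustment function are arbitrary choices for illustration and may differ from those used in the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      N, n_target = 21_000, 5_000
      expected_p = np.array([0.25, 0.25, 0.25, 0.25])                # model-implied proportions
      observed = rng.multinomial(N, [0.26, 0.25, 0.25, 0.24])        # toy observed cell counts

      chi2_full = stats.chisquare(observed, N * expected_p).statistic
      chi2_adjusted = chi2_full * (n_target - 1) / (N - 1)           # rescale statistic to the target n

      # Random-sample approach: draw n_target cases from the observed data and re-test
      cases = np.repeat(np.arange(4), observed)
      sub_counts = np.bincount(rng.choice(cases, size=n_target, replace=False), minlength=4)
      chi2_subsample = stats.chisquare(sub_counts, n_target * expected_p).statistic

      print(chi2_full, chi2_adjusted, chi2_subsample)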

  13. Effects of sampling methods on the quantity and quality of dissolved organic matter in sediment pore waters as revealed by absorption and fluorescence spectroscopy.

    PubMed

    Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin

    2015-10-01

    Despite literature evidence suggesting the importance of sampling methods on the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrixes coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with the sediments from minimal to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies and also that caution should be taken in the comparison of data collected with different sampling methods.

  14. Endobronchial ultrasound-guided transbronchial needle aspiration for staging of lung cancer: a concise review.

    PubMed

    Aziz, Fahad

    2012-09-01

    Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) offers a minimally invasive alternative to mediastinoscopy, with additional access to the hilar nodes, a better safety profile, and comparable sensitivity, and it removes the costs and hazards of theatre time and general anesthesia, although the negative predictive value of mediastinoscopy (and its sample size) is greater. EBUS-TBNA also obtains larger samples than conventional TBNA, has superior performance and is theoretically safer, allowing real-time sampling under direct vision. It can also have predictive value both in the sonographic appearance of the nodes and in their histological characteristics. EBUS-TBNA is therefore indicated for NSCLC staging, diagnosis of lung cancer when there is no endobronchial lesion, and diagnosis of both benign (especially tuberculosis and sarcoidosis) and malignant mediastinal lesions. The procedure is different from flexible bronchoscopy, takes longer, and requires more training. EBUS-TBNA is more expensive than conventional TBNA but can save costs by reducing the number of more costly mediastinoscopies. In the future, endobronchial ultrasound may have applications in airways disease and pulmonary vascular disease.

  15. Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections

    NASA Astrophysics Data System (ADS)

    Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.

    2009-03-01

    As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, Electric Discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the response from the EDM notches and grown fatigue cracks.

  16. Effect of Machining Parameters on Oxidation Behavior of Mild Steel

    NASA Astrophysics Data System (ADS)

    Majumdar, P.; Shekhar, S.; Mondal, K.

    2015-01-01

    This study aims to find a correlation between machining parameters, the resultant microstructure, and the isothermal oxidation behavior of lathe-machined mild steel in the temperature range of 660-710 °C. The tool rake angles "α" used were +20°, 0°, and -20°, and the cutting speeds used were 41, 232, and 541 mm/s. Under isothermal conditions, non-machined and machined mild steel samples follow parabolic oxidation kinetics with activation energies of 181 and ~400 kJ/mol, respectively. Exaggerated grain growth of the machined surface was observed, whereas the center part of the machined sample showed minimal grain growth during oxidation at higher temperatures. Grain growth on the surface was attributed to the relief, during high-temperature oxidation, of strain energy that had accumulated in the sub-surface region of the machined surface during machining. It was also observed that the characteristic surface oxide controlled the oxidation behavior of the machined samples. This study clearly demonstrates the effect of equivalent strain, roughness, and grain size due to machining, and of subsequent grain growth, on the oxidation behavior of the mild steel.
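
    The parabolic rate law referred to above can be written down in a short sketch; only the activation energy value comes from the text, while the pre-exponential factor, temperature and exposure times are placeholders.

      import numpy as np

      R = 8.314  # gas constant, J mol^-1 K^-1

      def parabolic_thickness(t_s, T_K, k0, Q):
          """Oxide growth from x^2 = k_p * t with an Arrhenius rate constant k_p = k0 * exp(-Q / (R T))."""
          kp = k0 * np.exp(-Q / (R * T_K))
          return np.sqrt(kp * t_s)

      t = np.linspace(0.0, 4 * 3600.0, 200)                              # 4 h of isothermal exposure (s)
      x = parabolic_thickness(t, T_K=273.0 + 690.0, k0=1e-4, Q=181e3)    # placeholder k0; Q from the text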

  17. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF STAINLESS STEEL PIPES AND FITTINGS (EPA/600/S-95/017)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  18. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF FOOD SERVICE EQUIPMENT (EPA/600/S-95/026)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  19. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF GEAR CASES FOR OUTBOARD MOTORS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  20. Analyzing indirect secondary electron contrast of unstained bacteriophage T4 based on SEM images and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp

    2009-03-06

    The indirect secondary electron contrast (ISEC) condition of the scanning electron microscopy (SEM) produces high contrast detection with minimal damage of unstained biological samples mounted under a thin carbon film. The high contrast image is created by a secondary electron signal produced under the carbon film by a low acceleration voltage. Here, we show that ISEC condition is clearly able to detect unstained bacteriophage T4 under a thin carbon film (10-15 nm) by using high-resolution field emission (FE) SEM. The results show that FE-SEM provides higher resolution than thermionic emission SEM. Furthermore, we investigated the scattered electron area within the carbon film under ISEC conditions using Monte Carlo simulation. The simulations indicated that the image resolution difference is related to the scattering width in the carbon film and the electron beam spot size. Using ISEC conditions on unstained virus samples would produce low electronic damage, because the electron beam does not directly irradiate the sample. In addition to the routine analysis, this method can be utilized for structural analysis of various biological samples like viruses, bacteria, and protein complexes.

  1. Helioseismology of pre-emerging active regions. III. Statistical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, G.; Leka, K. D.; Braun, D. C.

    The subsurface properties of active regions (ARs) prior to their appearance at the solar surface may shed light on the process of AR formation. Helioseismic holography has been applied to samples taken from two populations of regions on the Sun (pre-emergence and without emergence), each sample having over 100 members, that were selected to minimize systematic bias, as described in Paper I. Paper II showed that there are statistically significant signatures in the average helioseismic properties that precede the formation of an AR. This paper describes a more detailed analysis of the samples of pre-emergence regions and regions without emergence based on discriminant analysis. The property that is best able to distinguish the populations is found to be the surface magnetic field, even a day before the emergence time. However, after accounting for the correlations between the surface field and the quantities derived from helioseismology, there is still evidence of a helioseismic precursor to AR emergence that is present for at least a day prior to emergence, although the analysis presented cannot definitively determine the subsurface properties prior to emergence due to the small sample sizes.

  2. Atomic force microscopy contact, tapping, and jumping modes for imaging biological samples in liquids

    NASA Astrophysics Data System (ADS)

    Moreno-Herrero, F.; Colchero, J.; Gómez-Herrero, J.; Baró, A. M.

    2004-03-01

    The capabilities of the atomic force microscope for imaging biomolecules under physiological conditions have been systematically investigated. Contact, dynamic, and jumping modes have been applied to four different biological systems: DNA, purple membrane, Alzheimer paired helical filaments, and the bacteriophage φ29. These samples were selected to cover a wide variety of biological systems in terms of size and substrate contact area, which makes them very appropriate for the type of comparative study carried out in the present work. Although dynamic mode atomic force microscopy is clearly the best choice for imaging soft samples in air, in liquids there is no single leading technique. In liquids, the most appropriate imaging mode depends on the sample characteristics and preparation methods. Contact or dynamic modes are the best choices for imaging molecular assemblies arranged as crystals, such as the purple membrane. In this case, the advantage of image acquisition speed predominates over the disadvantage of high lateral or normal forces. For imaging individual macromolecules, which are weakly bonded to the substrate, lateral and normal forces are the relevant factors, and hence the jumping mode, an imaging mode which minimizes lateral and normal forces, is preferable to other imaging modes.

  3. Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
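
    The comparison can be reproduced in miniature with the sketch below, which fits a single constant rate to simulated low-count Poisson data by minimizing either the Cash C statistic or Pearson's chi-square; the chi-square fit is biased upward in this regime, which is the effect the abstract describes. The simulated data and optimizer choice are illustrative only.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      counts = rng.poisson(1.5, size=50)                     # simulated low-count bins

      def cash_c(rate):
          # Cash (1979): C = 2 * sum(model - data * ln(model)), data-only terms dropped
          model = np.full_like(counts, rate, dtype=float)
          return 2.0 * np.sum(model - counts * np.log(model))

      def chi_square(rate):
          model = np.full_like(counts, rate, dtype=float)
          return np.sum((counts - model) ** 2 / model)       # Pearson chi-square (model variance)

      fit_c = minimize_scalar(cash_c, bounds=(0.01, 10.0), method="bounded")
      fit_chi = minimize_scalar(chi_square, bounds=(0.01, 10.0), method="bounded")
      print(f"C-statistic estimate {fit_c.x:.2f}, chi-square estimate {fit_chi.x:.2f}")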

  4. Direct and sensitive detection of foodborne pathogens within fresh produce samples using a field-deployable handheld device.

    PubMed

    You, David J; Geshell, Kenneth J; Yoon, Jeong-Yeol

    2011-10-15

    Direct and sensitive detection of foodborne pathogens in fresh produce samples was accomplished using a handheld lab-on-a-chip device requiring few or no sample processing and enrichment steps, providing near-real-time detection with a truly field-deployable device. The detection of Escherichia coli K12 and O157:H7 in iceberg lettuce was achieved utilizing optimized Mie light scatter parameters with a latex particle immunoagglutination assay. The system exhibited good sensitivity, with a limit of detection of 10 CFU mL-1 and an assay time of <6 min. Minimal pretreatment, with no detrimental effects on assay sensitivity and reproducibility, was accomplished with a simple and cost-effective KimWipes filter and disposable syringe. Mie simulations were used to determine the optimal parameters (particle size d, wavelength λ, and scatter angle θ) that maximize the light scatter intensity of agglutinated latex microparticles and minimize the light scatter intensity of iceberg lettuce tissue fragments; these parameters were experimentally validated. This introduces a powerful method for detecting foodborne pathogens in fresh produce and other potential sample matrices. The integration of a multi-channel microfluidic chip allowed for differential detection of the agglutinated particles in the presence of the antigen, yielding a truly field-deployable detection system with decreased assay time and improved robustness over comparable benchtop systems. Additionally, two sample preparation methods were evaluated through simulated field studies based on overall sensitivity, protocol complexity, and assay time. Preparation of the plant tissue sample by grinding resulted in a two-fold improvement in scatter intensity over washing, accompanied by a significant increase in assay time: ∼5 min (grinding) versus ∼1 min (washing). Specificity studies demonstrated binding of E. coli O157:H7 EDL933 only to O157:H7 antibody-conjugated particles, with no cross-reactivity to K12. This suggests the adaptability of the system to a wide variety of pathogens, and the potential to detect them in a variety of biological matrices with little to no sample pretreatment. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Estimation of anomaly location and size using electrical impedance tomography.

    PubMed

    Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu

    2003-01-01

    We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium, based on the electrical impedance tomography (EIT) technique. When only boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of the locations and sizes of anomalies whose conductivity values differ from those of the background tissues. We demonstrate the performance of the algorithm with experimental results from a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in the boundary current-voltage data, we found that the minimal size (area) of a detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.

  6. Penrose-like inequality with angular momentum for minimal surfaces

    NASA Astrophysics Data System (ADS)

    Anglada, Pablo

    2018-02-01

    In axially symmetric spacetimes the Penrose inequality can be strengthened to include angular momentum. We prove a version of this inequality for minimal surfaces, more precisely, a lower bound for the ADM mass in terms of the area of a minimal surface, the angular momentum and a particular measure of the surface size. We consider axially symmetric and asymptotically flat initial data, and use the monotonicity of the Geroch quasi-local energy on 2-surfaces along the inverse mean curvature flow.
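
    For orientation only, the classical Riemannian Penrose inequality that the paper strengthens reads as follows; the additional angular-momentum and surface-size terms proved in the paper are not reproduced here.

```latex
% Classical (Riemannian) Penrose inequality, quoted for reference only:
% the ADM mass is bounded below by the area A of the outermost minimal surface.
m_{\mathrm{ADM}} \;\geq\; \sqrt{\frac{A}{16\pi}}
```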

  7. Multi-Ferroic Polymer Nanoparticle Composites for Next Generation Metamaterials

    DTIC Science & Technology

    2014-07-28

    particle size of magnetite nanoparticles. The PI will continue to develop composites that could be utilized for developing high- bandwidth radio frequency...to improve the efficiency and decrease the size of the device. High performance stretchable magneto-dielectric materials can be accomplished using...nanoparticles oxidize at dimensions smaller than the critical size for superparamagnetic to ferromagnetic transition, which is essential for minimal

  8. The cycle characteristics of clomiphene with clomiphene and menotropins in polycystic ovary syndrome and non polycystic ovary syndrome infertile patients.

    PubMed

    Ghasemi, M; Ashraf, H; Koushyar, H; Mousavifar, N

    2013-06-01

    This study compares the cycle characteristics of clomiphene citrate (CC) alone with CC + HMG (human menopausal gonadotropin, or menotropins) in polycystic ovary syndrome (PCOS) and non-PCOS infertile patients. Patients were treated with a CC + minimal HMG protocol. The cancellation rate, the mean number of follicles of different sizes, and endometrial thickness and pattern were compared. Cycles cancelled due to non-responsiveness were significantly more frequent with CC alone than with the CC + minimal HMG protocol. PCOS patients were significantly non-responsive in CC cycles and hyper-responsive in CC + minimal HMG cycles. The mean number of follicles of different sizes and the endometrial thickness were significantly higher with CC + minimal HMG. PCOS patients differed significantly from non-PCOS patients in the number of mature follicles and endometrial thickness. The pregnancy rate was 11% (10.2% in non-PCOS and 12.2% in PCOS). CC + minimal HMG is a viable alternative to an HMG/FSH-only protocol in CC-failure or CC-resistant patients, and its efficacy can be attributed mostly to improvement of endometrial quality and an increase in follicle number. Moreover, given the high cancellation rate of PCOS patients treated with this protocol, other alternatives should be sought; sequential letrozole + HMG/FSH, for example, has been shown to improve the ovarian response in this group of patients.

  9. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
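
    Since the paper's python library is not reproduced here, the following is a generic sketch of one simple model-independent regression technique of the kind discussed above (Nadaraya-Watson kernel smoothing); the bandwidth and the mock scaling relation are illustrative assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.3):
    """Gaussian-kernel local average: yhat(x) = sum_i K(x - x_i) y_i / sum_i K(x - x_i)."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 3, 80))                 # e.g. a noisy scaling relation
y = 2.0 * np.sqrt(x) + rng.normal(0, 0.2, x.size)
grid = np.linspace(0.1, 2.9, 50)
print(nadaraya_watson(x, y, grid)[:5])
```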

  10. MIXI: Mobile Intelligent X-Ray Inspection System

    NASA Astrophysics Data System (ADS)

    Arodzero, Anatoli; Boucher, Salime; Kutsaev, Sergey V.; Ziskin, Vitaliy

    2017-07-01

    A novel, low-dose Mobile Intelligent X-ray Inspection (MIXI) concept is being developed at RadiaBeam Technologies. The MIXI concept relies on a linac-based, adaptive, ramped energy source of short X-ray packets of pulses, a new type of fast X-ray detector, rapid processing of detector signals for intelligent control of the linac, and advanced radiography image processing. The key parameters for this system include: better than 3 mm line pair resolution; penetration greater than 320 mm of steel equivalent; scan speed with 100% image sampling rate of up to 15 km/h; and material discrimination over a range of thicknesses up to 200 mm of steel equivalent. Its minimal radiation dose, size and weight allow MIXI to be placed on a lightweight truck chassis.

  11. Angiomyolipoma with Minimal Fat: Can It Be Differentiated from Clear Cell Renal Cell Carcinoma by Using Standard MR Techniques?

    PubMed Central

    Hindman, Nicole; Ngo, Long; Genega, Elizabeth M.; Melamed, Jonathan; Wei, Jesse; Braza, Julia M.; Rofsky, Neil M.

    2012-01-01

    Purpose: To retrospectively assess whether magnetic resonance (MR) imaging with opposed-phase and in-phase gradient-echo (GRE) sequences and MR feature analysis can differentiate angiomyolipomas (AMLs) that contain minimal fat from clear cell renal cell carcinomas (RCCs), with particular emphasis on small (<3-cm) masses. Materials and Methods: Institutional review board approval and a waiver of informed consent were obtained for this HIPAA-compliant study. MR images from 108 pathologically proved renal masses (88 clear cell RCCs and 20 minimal fat AMLs from 64 men and 44 women) at two academic institutions were evaluated. The signal intensity (SI) of each renal mass and spleen on opposed-phase and in-phase GRE images was used to calculate an SI index and tumor-to-spleen SI ratio. Two radiologists who were blinded to the pathologic results independently assessed the subjective presence of intravoxel fat (ie, decreased SI on opposed-phase images compared with that on in-phase images), SI on T1-weighted and T2-weighted images, cystic degeneration, necrosis, hemorrhage, retroperitoneal collaterals, and renal vein thrombosis. Results were analyzed by using the Wilcoxon rank sum test, two-tailed Fisher exact test, and multivariate logistic regression analysis for all renal masses and for small masses. A P value of less than .05 was considered to indicate a statistically significant difference. Results: There were no differences between minimal fat AMLs and clear cell RCCs for the SI index (8.05% ± 14.46 vs 14.99% ± 19.9; P = .146) or tumor-to-spleen ratio (−8.96% ± 16.6 and −15.8% ± 22.4; P = .227) when all masses or small masses were analyzed. Diagnostic accuracy (area under receiver operating characteristic curve) for the SI index and tumor-to-spleen ratio was 0.59. Intratumoral necrosis and larger size were predictive of clear cell RCC (P < .001) for all lesions, whereas low SI (relative to renal parenchyma SI) on T2-weighted images, smaller size, and female sex correlated with minimal fat AML (P < .001) for all lesions. Conclusion: The diagnostic accuracy of opposed-phase and in-phase GRE MR imaging for the differentiation of minimal fat AML and clear cell RCC is poor. In this cohort, low SI on T2-weighted images relative to renal parenchyma and small size suggested minimal fat AML, whereas intratumoral necrosis and large size argued against this diagnosis. © RSNA, 2012 PMID:23012463
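
    As a hedged illustration, the snippet below computes the two chemical-shift indices named above using definitions that are common in the MR literature (percentage signal drop on opposed-phase images, and the change in the tumor-to-spleen signal ratio); these exact formulas and the numbers are assumptions for illustration, not values quoted from the study.

```python
# Commonly used chemical-shift MR definitions (assumed, not quoted from the paper).
def si_index(tumor_in, tumor_opp):
    # Percentage signal drop of the mass on opposed-phase vs in-phase images.
    return (tumor_in - tumor_opp) / tumor_in * 100.0

def tumor_to_spleen_ratio(tumor_in, tumor_opp, spleen_in, spleen_opp):
    # Change in the tumor/spleen signal ratio between opposed- and in-phase images.
    return ((tumor_opp / spleen_opp) / (tumor_in / spleen_in) - 1.0) * 100.0

print(si_index(320.0, 290.0))                             # ~9.4% signal drop
print(tumor_to_spleen_ratio(320.0, 290.0, 300.0, 310.0))  # negative value suggests intravoxel fat
```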

  12. Synthesizing Information From Language Samples and Standardized Tests in School-Age Bilingual Assessment

    PubMed Central

    Pham, Giang

    2017-01-01

    Purpose Although language samples and standardized tests are regularly used in assessment, few studies provide clinical guidance on how to synthesize information from these testing tools. This study extends previous work on the relations between tests and language samples to a new population—school-age bilingual speakers with primary language impairment—and considers the clinical implications for bilingual assessment. Method Fifty-one bilingual children with primary language impairment completed narrative language samples and standardized language tests in English and Spanish. Children were separated into younger (ages 5;6 [years;months]–8;11) and older (ages 9;0–11;2) groups. Analysis included correlations with age and partial correlations between language sample measures and test scores in each language. Results Within the younger group, positive correlations with large effect sizes indicated convergence between test scores and microstructural language sample measures in both Spanish and English. There were minimal correlations in the older group for either language. Age related to English but not Spanish measures. Conclusions Tests and language samples complement each other in assessment. Wordless picture-book narratives may be more appropriate for ages 5–8 than for older children. We discuss clinical implications, including a case example of a bilingual child with primary language impairment, to illustrate how to synthesize information from these tools in assessment. PMID:28055056

  13. Rehabilitation of the dominance of maxillary central incisors with refractory porcelain veneers requiring minimal tooth preparation.

    PubMed

    da Cunha, Leonardo Fernandes; Gonzaga, Carla Castiglia; Saab, Rafaella; Mushashe, Amanda Mahammad; Correr, Gisele Maria

    2015-01-01

    Central dominance is an important element of an esthetic smile. Color, form, and size have been suggested as tools for assessing the dominance of the maxillary teeth. A spectrophotometer can be used to determine the value, hue, and chroma. Correct sizing of restorations according to the central incisor dominance principle improves not only esthetics but also aspects of occlusion, such as anterior guidance. Refractory porcelain systems can effectively restore color, shape, emergence profile, and incisal translucency. This report illustrates the esthetic and occlusal rehabilitation of the dominance of the maxillary central incisors using minimal-thickness refractory porcelain veneers.

  14. Development of a syringe pump assisted dynamic headspace sampling technique for needle trap device.

    PubMed

    Eom, In-Yong; Niri, Vadoud H; Pawliszyn, Janusz

    2008-07-04

    This paper describes a new approach that combines needle trap devices (NTDs) with a dynamic headspace sampling technique (purge and trap) using a bidirectional syringe pump. The needle trap device is a 22-G stainless steel needle 3.5-in. long packed with divinylbenzene sorbent particles. The same sized needle, without packing, was used for purging purposes. We chose an aqueous mixture of benzene, toluene, ethylbenzene, and p-xylene (BTEX) and developed a sequential purge and trap (SPNT) method, in which sampling (trapping) and purging cycles were performed sequentially by the use of syringe pump with different distribution channels. In this technique, a certain volume (1 mL) of headspace was sequentially sampled using the needle trap; afterwards, the same volume of air was purged into the solution at a high flow rate. The proposed technique showed an effective extraction compared to the continuous purge and trap technique, with a minimal dilution effect. Method evaluation was also performed by obtaining the calibration graphs for aqueous BTEX solutions in the concentration range of 1-250 ng/mL. The developed technique was compared to the headspace solid-phase microextraction method for the analysis of aqueous BTEX samples. Detection limits as low as 1 ng/mL were obtained for BTEX by NTD-SPNT.

  15. A Dual Wedge Microneedle for sampling of perilymph solution via round window membrane

    PubMed Central

    Watanabe, Hirobumi; Cardoso, Luis; Lalwani, Anil K.; Kysar, Jeffrey W.

    2017-01-01

    Objective Precision medicine for inner-ear disease is hampered by the absence of a methodology to sample inner-ear fluid atraumatically. The round window membrane (RWM) is an attractive portal for accessing cochlear fluids, as it heals spontaneously. In this study, we report on the development of a microneedle for perilymph sampling that minimizes the size of the RWM perforation, facilitates quick aspiration, and provides precise volume control. Methods Considering the mechanical anisotropy of the RWM and the hydrodynamics through a microneedle, a 31G stainless steel pipe was machined into a wedge-shaped design via electrical discharge machining. Guinea pig RWMs were penetrated in vitro, and 1 μL of perilymph was sampled and analyzed via UV-vis spectroscopy. Results The prototype wedge-shaped needle created oval perforations with minor and major diameters of 143 and 344 μm (n=6). The sampling duration was on the order of seconds, and the standard deviation of the aspirated volume was 6.8%. The protein concentration was 1.74 mg/mL. Conclusion The prototype needle facilitated precise perforation of RWMs and rapid aspiration of cochlear fluid with precise volume control. The needle design is promising and requires testing in human cadaveric temporal bone and further optimization to become clinically viable. PMID:26888440

  16. Influence of growth temperature on bulk and surface defects in hybrid lead halide perovskite films

    NASA Astrophysics Data System (ADS)

    Peng, Weina; Anand, Benoy; Liu, Lihong; Sampat, Siddharth; Bearden, Brandon E.; Malko, Anton V.; Chabal, Yves J.

    2016-01-01

    The rapid development of perovskite solar cells has focused attention on defects in perovskites, which are gradually being recognized to strongly control device performance. A fundamental understanding is therefore needed for further improvement in this field. Recent efforts have mainly focused on minimizing surface defects and grain boundaries in thin films. Using time-resolved photoluminescence spectroscopy, we show that bulk defects in perovskite samples prepared using the vapor assisted solution process (VASP) play a key role in addition to surface and grain boundary defects. The defect state density of samples prepared at 150 °C (~10^17 cm^-3) increases five-fold at 175 °C even though the average grain size increases slightly, ruling out grain boundary defects as the main mechanism for the observed differences in PL properties upon annealing. Upon surface passivation using water molecules, the PL intensity and lifetime of samples prepared at 200 °C are only partially improved, remaining significantly lower than those of samples prepared at 150 °C. Thus, the present study indicates that the majority of the defect states observed at elevated growth temperatures originate from bulk defects and underscores the importance of controlling the formation of bulk defects together with grain boundary and surface defects to further improve the optoelectronic properties of perovskites. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06222e

  17. Nest morphology and body size of Ross' Geese and Lesser Snow Geese

    USGS Publications Warehouse

    McCracken, K.G.; Afton, A.D.; Alisauskas, R.T.

    1997-01-01

    Arctic-nesting geese build large, insulated nests to protect developing embryos from cold ambient temperatures. Ross' Geese (Chen rossii) are about two-thirds the mass of Lesser Snow Geese (C. caerulescens caerulescens), have higher mass-specific metabolic rate, and maintain lower nest attentiveness, yet they hatch goslings with more functionally mature gizzards and more protein for their size than do Lesser Snow Geese. We compared nest size (a reflection of nest insulation) in four distinct habitats in a mixed breeding colony of Ross' Geese and Lesser Snow Geese at Karrak Lake, Northwest Territories, Canada. After adjusting measurements for nest-specific egg size and clutch size, we found that overall nest morphology differed between species and among habitats. Nest size increased progressively among heath, rock, mixed, and moss habitats. When nesting materials were not limiting, nests were smaller in habitats that provided cover from wind and precipitation than in habitats that did not provide cover. Ross' Geese constructed relatively larger, more insulated nests than did Lesser Snow Geese, which may hasten embryonic development, minimize energy expenditure during incubation, and minimize embryonic cooling during recesses. We suggest that relative differences in nest morphology reflect greater selection for Ross' Geese to improve nest insulation because of their smaller size (adults and embryos), higher mass-specific metabolic rate, and lower incubation constancy.

  18. Measuring Spray Droplet Size from Agricultural Nozzles Using Laser Diffraction

    PubMed Central

    Fritz, Bradley K.; Hoffmann, W. Clint

    2016-01-01

    When making an application of any crop protection material such as an herbicide or pesticide, the applicator uses a variety of skills and information to make an application so that the material reaches the target site (i.e., plant). Information critical in this process is the droplet size that a particular spray nozzle, spray pressure, and spray solution combination generates, as droplet size greatly influences product efficacy and how the spray moves through the environment. Researchers and product manufacturers commonly use laser diffraction equipment to measure the spray droplet size in laboratory wind tunnels. The work presented here describes methods used in making spray droplet size measurements with laser diffraction equipment for both ground and aerial application scenarios that can be used to ensure inter- and intra-laboratory precision while minimizing sampling bias associated with laser diffraction systems. Maintaining critical measurement distances and concurrent airflow throughout the testing process is key to this precision. Real time data quality analysis is also critical to preventing excess variation in the data or extraneous inclusion of erroneous data. Some limitations of this method include atypical spray nozzles, spray solutions or application conditions that result in spray streams that do not fully atomize within the measurement distances discussed. Successful adaption of this method can provide a highly efficient method for evaluation of the performance of agrochemical spray application nozzles under a variety of operational settings. Also discussed are potential experimental design considerations that can be included to enhance functionality of the data collected. PMID:27684589

  19. Measuring sperm backflow following female orgasm: a new method

    PubMed Central

    King, Robert; Dempsey, Maria; Valentine, Katherine A.

    2016-01-01

    Background Human female orgasm remains a vexed question in the field, although there is credible evidence of cryptic female choice in other species that shares many hallmarks of orgasm. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of fertility in other mammals. In the latter case, oxytocin-mediated sperm retention mechanisms appear to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. Method A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. The method combined a sperm simulant with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results The (simulated) sperm flowback was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen's d=3.97, effect size r=0.89, indicating a very large effect in this small sample. Conclusions This method could allow females to test, with minimal training and in a home setting, an aspect of sexual response that has been linked to lowered fertility. It needs to be replicated with a larger sample size. PMID:27799082

  20. Measuring sperm backflow following female orgasm: a new method.

    PubMed

    King, Robert; Dempsey, Maria; Valentine, Katherine A

    2016-01-01

    Human female orgasm remains a vexed question in the field, although there is credible evidence of cryptic female choice in other species that shares many hallmarks of orgasm. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of fertility in other mammals. In the latter case, oxytocin-mediated sperm retention mechanisms appear to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. The method combined a sperm simulant with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. The (simulated) sperm flowback was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen's d=3.97, effect size r=0.89, indicating a very large effect in this small sample. This method could allow females to test, with minimal training and in a home setting, an aspect of sexual response that has been linked to lowered fertility. It needs to be replicated with a larger sample size.

  1. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
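
    The allocation result described above follows directly from the variance of the estimated group mean under the one-way random-effects model; the sketch below (an assumed textbook formula, not the paper's code) shows how, for a fixed total number of measurements, adding workers reduces the variance faster than adding replicates.

```python
def var_of_group_mean(sigma2_between, sigma2_within, k_workers, n_reps):
    # One-way random effects: Var(grand mean) = sigma_B^2 / k + sigma_W^2 / (k * n)
    return sigma2_between / k_workers + sigma2_within / (k_workers * n_reps)

total_measurements = 24
sigma2_between, sigma2_within = 1.0, 1.0   # illustrative variance components
for k in (2, 4, 6, 12):                    # allocations with k * n = 24
    n = total_measurements // k
    v = var_of_group_mean(sigma2_between, sigma2_within, k, n)
    print(f"{k:2d} workers x {n:2d} replicates -> Var(mean) = {v:.3f}")
```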

  2. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  3. Survey of Salmonella contamination in chicken layer farms in three Caribbean countries.

    PubMed

    Adesiyun, Abiodun; Webb, Lloyd; Musai, Lisa; Louison, Bowen; Joseph, George; Stewart-Johnson, Alva; Samlal, Sannandan; Rodrigo, Shelly

    2014-09-01

    This study was conducted to investigate the demography, management, and production practices on layer chicken farms in Trinidad and Tobago, Grenada, and St. Lucia and the frequency of risk factors for Salmonella infection. The frequency of isolation of Salmonella from the layer farm environment, eggs, feeds, hatchery, and imported day-old chicks was determined using standard methods. Of the eight risk factors (farm size, age group of layers, source of day-old chicks, vaccination, sanitation practices, biosecurity measures, presence of pests, and previous disease outbreaks) for Salmonella infection investigated, farm size was the only risk factor significantly associated (P = 0.031) with the prevalence of Salmonella; 77.8% of large farms were positive for this pathogen compared with 33.3 and 26.1% of medium and small farms, respectively. The overall isolation rate of Salmonella from 35 layer farms was 40.0%. Salmonella was isolated at a significantly higher rate (P < 0.05) from farm environments than from the cloacae. Only in Trinidad and Tobago did feeds (6.5% of samples) and pooled egg contents (12.5% of samples) yield Salmonella; however, all egg samples from hotels, hatcheries, and airports in this country were negative. Salmonella Anatum, Salmonella group C, and Salmonella Kentucky were the predominant serotypes in Trinidad and Tobago, Grenada, and St. Lucia, respectively. Although Salmonella infections were found in layer birds sampled, table eggs appear to pose minimal risk to consumers. However, the detection of Salmonella-contaminated farm environments and feeds cannot be ignored. Only 2.9% of the isolates belonged to Salmonella Enteritidis, a finding that may reflect the impact of changes in farm management and poultry production in the region.

  4. Blubber Cortisol: A Potential Tool for Assessing Stress Response in Free-Ranging Dolphins without Effects due to Sampling

    PubMed Central

    Kellar, Nicholas M.; Catelani, Krista N.; Robbins, Michelle N.; Trego, Marisa L.; Allen, Camryn D.; Danil, Kerri; Chivers, Susan J.

    2015-01-01

    When paired with dart biopsying, quantifying cortisol in blubber tissue may provide an index of relative stress levels (i.e., activation of the hypothalamic-pituitary-adrenal axis) in free-ranging cetacean populations while minimizing the effects of the act of sampling. To validate this approach, cortisol was extracted from blubber samples collected from beach-stranded and bycaught short-beaked common dolphins using a modified blubber steroid isolation technique and measured via commercially available enzyme immunoassays. The measurements exhibited appropriate quality characteristics when analyzed via a bootstrapped stepwise parallelism analysis (observed/expected = 1.03, 95% CI: 0.996-1.08) and showed no evidence of matrix interference with increasing sample size across typical biopsy tissue masses (75-150 mg; r2 = 0.012, p = 0.78, slope = 0.022 ng cortisol deviation/µL tissue extract added). The relationships between blubber cortisol and eight potential cofactors, namely, 1) fatality type (e.g., stranded or bycaught), 2) specimen condition (state of decomposition), 3) total body length, 4) sex, 5) sexual maturity state, 6) pregnancy status, 7) lactation state, and 8) adrenal mass, were assessed using a Bayesian generalized linear model averaging technique. Fatality type was the only factor correlated with blubber cortisol, and the magnitude of the effect size was substantial: beach-stranded individuals had on average 6.1-fold higher cortisol levels than those of bycaught individuals. Because of the difference in conditions surrounding these two fatality types, we interpret this relationship as evidence that blubber cortisol is indicative of stress response. We found no evidence of seasonal variation or of a relationship between cortisol and the remaining cofactors. PMID:25643144

  5. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
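
    For context, the quantity at the heart of this literature can be sketched as follows: under an encompassing (unconstrained) prior, the Bayes factor of an inequality-constrained hypothesis such as H1: mu > 0 against the unconstrained model is the posterior probability of the constraint divided by its prior probability. The conjugate normal model, prior scale, and data below are illustrative assumptions, and the paper's proposed remedies (transformed minimal training samples and the limiting g prior) are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.3, 1.0, size=20)        # hypothetical sample

# Conjugate normal model: mu ~ N(0, tau^2) prior, known sigma = 1.
tau2, sigma2, n = 1.0, 1.0, data.size
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (data.sum() / sigma2)

prior_draws = rng.normal(0.0, np.sqrt(tau2), 100_000)
post_draws = rng.normal(post_mean, np.sqrt(post_var), 100_000)

prior_mass = np.mean(prior_draws > 0)       # ~0.5 for a symmetric prior
post_mass = np.mean(post_draws > 0)
print("BF(H1 vs unconstrained) ~", post_mass / prior_mass)
```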

  6. Where do the Field Plots Belong? A Multiple-Constraint Sampling Design for the BigFoot Project

    NASA Astrophysics Data System (ADS)

    Kennedy, R. E.; Cohen, W. B.; Kirschbaum, A. A.; Gower, S. T.

    2002-12-01

    A key component of a MODIS validation project is effective characterization of biophysical measures on the ground. Fine-grain ecological field measurements must be placed strategically to capture variability at the scale of the MODIS imagery. Here we describe the BigFoot project's revised sampling scheme, designed to simultaneously meet three important goals: capture landscape variability, avoid spatial autocorrelation between field plots, and minimize time and expense of field sampling. A stochastic process places plots in clumped constellations to reduce field sampling costs, while minimizing spatial autocorrelation. This stochastic process is repeated, creating several hundred realizations of plot constellations. Each constellation is scored and ranked according to its ability to match landscape variability in several Landsat-based spectral indices, and its ability to minimize field sampling costs. We show how this approach has recently been used to place sample plots at the BigFoot project's two newest study areas, one in a desert system and one in a tundra system. We also contrast this sampling approach to that already used at the four prior BigFoot project sites.
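
    A generic sketch of the constellation-scoring idea described above follows; it is not the BigFoot code. Candidate clumped constellations are generated at random and scored by how well the sampled values reproduce landscape variability in a stand-in spectral index plus a travel-cost penalty (the weights, grid, and clump geometry are all illustrative assumptions, and the spatial-autocorrelation check is omitted).

```python
import numpy as np

rng = np.random.default_rng(3)
landscape = rng.gamma(2.0, 1.0, size=(100, 100))   # stand-in for a Landsat spectral index

def random_constellation(n_clumps=5, plots_per_clump=5, spread=3.0):
    # Clumped plot layout: a few cluster centers, each with nearby plots.
    centers = rng.uniform(5, 95, size=(n_clumps, 2))
    offsets = rng.normal(0, spread, size=(n_clumps, plots_per_clump, 2))
    return np.clip(centers[:, None, :] + offsets, 0, 99).reshape(-1, 2)

def score(plots):
    vals = landscape[plots[:, 0].astype(int), plots[:, 1].astype(int)]
    variability_mismatch = abs(vals.std() - landscape.std())
    travel_cost = sum(np.linalg.norm(plots[i + 1] - plots[i]) for i in range(len(plots) - 1))
    return variability_mismatch + 0.001 * travel_cost    # weights are illustrative

candidates = [random_constellation() for _ in range(300)]
best = min(candidates, key=score)
print("best constellation score:", score(best))
```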

  7. Universal Solid-phase Reversible Sample-Prep for Concurrent Proteome and N-glycome Characterization

    PubMed Central

    Zhou, Hui; Morley, Samantha; Kostel, Stephen; Freeman, Michael R.; Joshi, Vivek; Brewster, David; Lee, Richard S.

    2017-01-01

    SUMMARY We describe a novel Solid-phase Reversible Sample-Prep (SRS) platform, which enables rapid sample preparation for concurrent proteome and N-glycome characterization by mass spectrometry. SRS utilizes a uniquely functionalized, silica-based bead that has strong affinity toward proteins with minimal-to-no affinity for peptides and other small molecules. By leveraging the inherent size difference between, SRS permits high-capacity binding of proteins, rapid removal of small molecules (detergents, metabolites, salts, etc.), extensive manipulation including enzymatic and chemical treatments on beads-bound proteins, and easy recovery of N-glycans and peptides. The efficacy of SRS was evaluated in a wide range of biological samples including single glycoprotein, whole cell lysate, murine tissues, and human urine. To further demonstrate the SRS platform, we coupled a quantitative strategy to SRS to investigate the differences between DU145 prostate cancer cells and its DIAPH3-silenced counterpart. Our previous studies suggested that DIAPH3 silencing in DU145 prostate cancer cells induced transition to an amoeboid phenotype that correlated with tumor progression and metastasis. In this analysis we identified distinct proteomic and N-glycomic alterations between the two cells. Intriguingly, a metastasis-associated tyrosine kinase receptor ephrin-type-A receptor (EPHA2) was highly upregulated in DIAPH3-silenced cells, indicating underling connection between EPHA2 and DIAPH3. Moreover, distinct alterations in the N-glycome were identified, suggesting a cross-link between DIAPH3 and glycosyltransferase networks. Overall, SRS is an enabling universal sample preparation strategy that is not size limited and has the capability to efficiently prepare and clean peptides and N-glycans concurrently from nearly all sample types. Conceptually, SRS can be utilized for the analysis of other posttranslational modifications, and the unique surface chemistry can be further transformed for high-throughput automation. The technical simplicity, robustness, and modularity of SRS make it a highly promising technology with great potential in proteomic-based research. PMID:26791391

  8. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  9. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
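
    The mechanism diagnosed above can be reproduced qualitatively in a few lines: if only significant results are "published", small studies need large observed effects to pass the threshold, so published effect size and sample size become negatively correlated even with a fixed true effect. The simulation settings below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_d = 0.3
published_d, published_n = [], []

for _ in range(5000):
    n = rng.integers(10, 200)                      # per-group sample size
    g1 = rng.normal(0.0, 1.0, n)
    g2 = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(g2, g1)
    d = (g2.mean() - g1.mean()) / np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    if p < .05:                                    # crude publication filter
        published_d.append(d)
        published_n.append(n)

r, _ = stats.pearsonr(published_d, published_n)
print("correlation between published effect size and sample size:", round(r, 2))
```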

  10. The methodology of preparing the end faces of cylindrical waveguide of polydimethylsiloxane

    NASA Astrophysics Data System (ADS)

    Novak, M.; Nedoma, J.; Jargus, J.; Bednarek, L.; Cvejn, D.; Vasinek, V.

    2017-10-01

    Polydimethylsiloxane (PDMS) can be used for its optical properties and its composition offers the possibility of use in the dangerous environments. Therefore authors of this article focused on more detailed working with this material. The article describes the methodology of preparing the end faces of the cylindrical waveguide of polymer polydimethylsiloxane (PDMS) to minimize losses during joining. The first method of preparing the end faces of the cylindrical waveguide of polydimethylsiloxane is based on the polishing surface of the sandpaper of different sizes of grains (3 species). The second method using so-called heat smoothing and the third method using aligning end faces by a new layer of polydimethylsiloxane. The outcome of the study is to evaluate the quality of the end faces of the cylindrical waveguide of polymer polydimethylsiloxane based on evaluating the attenuation. For this experiment, it was created a total of 140 samples. The attenuation was determined from both sides of the created samples for three different wavelengths of the visible spectrum.

  11. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2015-10-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
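
    In the spirit of the simulations described above, the sketch below compares Pearson, Spearman, and rankit (normal-scores) correlations on heavy-tailed data; the t-distribution, sample size, and induced association are illustrative assumptions rather than the study's exact conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def rankit(x):
    # Map ranks to normal quantiles: Phi^{-1}((rank - 0.5) / n).
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

pearson, spearman, rankit_r = [], [], []
for _ in range(2000):
    z = rng.standard_t(df=2, size=(60, 2))         # heavy-tailed noise
    x, y = z[:, 0], 0.3 * z[:, 0] + z[:, 1]        # induce a modest association
    pearson.append(stats.pearsonr(x, y)[0])
    spearman.append(stats.spearmanr(x, y)[0])
    rankit_r.append(stats.pearsonr(rankit(x), rankit(y))[0])

for name, vals in [("Pearson", pearson), ("Spearman", spearman), ("rankit", rankit_r)]:
    print(f"{name:8s} mean r = {np.mean(vals):.3f}, SD = {np.std(vals):.3f}")
```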

  12. The method for on-site determination of trace concentrations of methyl mercaptan and dimethyl sulfide in air using a mobile mass spectrometer with atmospheric pressure chemical ionization, combined with a fast enrichment/separation system.

    PubMed

    Kudryavtsev, Andrey S; Makas, Alexey L; Troshkov, Mikhail L; Grachev, Mikhail А; Pod'yachev, Sergey P

    2014-06-01

    A method for fast simultaneous on-site determination of methyl mercaptan and dimethyl sulfide in air was developed. The target compounds were actively collected on silica gel, followed by direct flash thermal desorption, fast separation on a short chromatographic column and detection by means of mass spectrometer with atmospheric pressure chemical ionization. During the sampling of ambient air, water vapor was removed with a Nafion selective membrane. A compact mass spectrometer prototype, which was designed earlier at Trofimuk Institute of Petroleum Geology and Geophysics, was used. The minimization of gas load of the atmospheric pressure ion source allowed reducing the power requirements and size of the vacuum system and increasing its ruggedness. The measurement cycle is about 3 min. Detection limits in a 0.6 L sample are 1 ppb for methyl mercaptan and 0.2 ppb for dimethyl sulfide. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    PubMed Central

    Hittner, James B.

    2014-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841

  14. High-quality unsaturated zone hydraulic property data for hydrologic applications

    USGS Publications Warehouse

    Perkins, Kimberlie; Nimmo, John R.

    2009-01-01

    In hydrologic studies, especially those using dynamic unsaturated zone moisture modeling, calculations based on property transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values has become increasingly common with the use of neural networks. High-quality data are needed for databases used in this way, as well as for theoretical and property transfer model development and testing. Hydraulic properties predicted on the basis of existing databases may be adequate in some applications but not in others. An obvious problem occurs when the available database has few or no data for samples that are closely related to the medium of interest. The data set presented in this paper includes saturated and unsaturated hydraulic conductivity, water retention, particle-size distributions, and bulk properties. All samples are minimally disturbed, all measurements were performed using the same state-of-the-art techniques, and the environments represented are diverse.

  15. Turnover, staffing, skill mix, and resident outcomes in a national sample of US nursing homes.

    PubMed

    Trinkoff, Alison M; Han, Kihye; Storr, Carla L; Lerner, Nancy; Johantgen, Meg; Gartrell, Kyungsook

    2013-12-01

    The authors examined the relationship of staff turnover to selected nursing home quality outcomes, in the context of staffing and skill mix. Staff turnover is a serious concern in nursing homes as it has been found to adversely affect care. When employee turnover is minimized, better care quality is more likely in nursing homes. Data from the National Nursing Home Survey, a nationally representative sample of US nursing homes, were linked to Nursing Home Compare quality outcomes and analyzed using logistic regression. Nursing homes with high certified nursing assistant turnover had significantly higher odds of pressure ulcers, pain, and urinary tract infections even after controlling for staffing, skill mix, bed size, and ownership. Nurse turnover was associated with twice the odds of pressure ulcers, although this was attenuated when staffing was controlled. This study suggests turnover may be more important in explaining nursing home (NH) outcomes than staffing and skill mix and should therefore be given greater emphasis.

  16. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  17. Tapered holey fibers for spot-size and numerical-aperture conversion.

    PubMed

    Town, G E; Lizier, J T

    2001-07-15

    Adiabatically tapered holey fibers are shown to be potentially useful for guided-wave spot-size and numerical-aperture conversion. Conditions for adiabaticity and design guidelines are provided in terms of the effective-index model. We also present finite-difference time-domain calculations of downtapered holey fiber, showing that large spot-size conversion factors are obtainable with minimal loss by use of short, optimally shaped tapers.

  18. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
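
    A hedged sketch of the general compressed-sensing workflow described above: a signal that is sparse in some basis is sampled with a random measurement operator and reconstructed by greedy pursuit. It uses scikit-learn's ordinary orthogonal matching pursuit rather than the authors' improved stagewise OMP, and a DCT basis stands in for the tree wavelets; sizes and sparsity are illustrative.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
n, k, m = 512, 12, 128                       # signal length, sparsity, measurements

coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 5, k)
signal = idct(coeffs, norm="ortho")          # signal that is sparse in DCT space

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement operator
y = Phi @ signal                             # "in situ" compressed samples

# Reconstruct: solve y = (Phi * inverse-DCT) c with c sparse, then invert.
A = Phi @ idct(np.eye(n), axis=0, norm="ortho")
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
recovered = idct(omp.coef_, norm="ortho")
print("relative error:", np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```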

  19. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called `slices') while: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PHLS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy, and convergence rate).
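
    As a minimal illustration of the baseline that PLHS builds on, the snippet below draws an ordinary Latin hypercube sample with SciPy and scales it to hypothetical parameter bounds; the progressive, sliced construction described above (adding sub-sets whose union remains a Latin hypercube) is the paper's contribution and is not implemented here.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=5, seed=7)       # 5 model parameters
unit_sample = sampler.random(n=100)             # 100 points in [0, 1)^5

# Scale to hypothetical parameter ranges for an environmental model.
lower = [0.0, 1e-4, 0.1, 0.0, 10.0]
upper = [1.0, 1e-2, 0.9, 5.0, 200.0]
points = qmc.scale(unit_sample, lower, upper)
print(points.shape, qmc.discrepancy(unit_sample))
```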

  20. Development and evaluation of the quick anaero-system-a new disposable anaerobic culture system.

    PubMed

    Yang, Nam Woong; Kim, Jin Man; Choi, Gwang Ju; Jang, Sook Jin

    2010-04-01

    We developed a new disposable anaerobic culture system, namely, the Quick anaero-system, for easy culturing of obligate anaerobes. Our system consists of 3 components: 1) new disposable anaerobic gas pack, 2) disposable culture-envelope and sealer, and 3) reusable stainless plate rack with mesh containing 10 g of palladium catalyst pellets. To evaluate the efficiency of our system, we used 12 anaerobic bacteria. We prepared 2 sets of ten-fold serial dilutions of the 12 anaerobes, and inoculated these samples on Luria-Bertani (LB) broth and LB blood agar plate (LB-BAP) (BD Diagnostic Systems, USA). Each set was incubated in the Quick anaero-system (DAS Tech, Korea) and BBL GasPak jar with BD GasPak EZ Anaerobe Container System (BD Diagnostic Systems) at 35-37 degrees C for 48 hr. The minimal inoculum size showing visible growth of 12 anaerobes when incubated in both the systems was compared. The minimal inoculum size showing visible growth for 2 out of the 12 anaerobes in the LB broth and 9 out of the 12 anaerobes on LB-BAP was lower for the Quick anaero-system than in the BD GasPak EZ Anaerobe Container System. The mean time (+/-SD) required to achieve absolute anaerobic conditions of the Quick anaero-system was 17 min and 56 sec (+/-3 min and 25 sec). The Quick anaero-system is a simple and effective method of culturing obligate anaerobes, and its performance is superior to that of the BD GasPak EZ Anaerobe Container System.

  1. How chip size impacts steam pretreatment effectiveness for biological conversion of poplar wood into fermentable sugars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMartini, Jaclyn D.; Foston, Marcus; Meng, Xianzhi

    We report that woody biomass is highly recalcitrant to enzymatic sugar release and often requires significant size reduction and severe pretreatments to achieve economically viable sugar yields in biological production of sustainable fuels and chemicals. However, because mechanical size reduction of woody biomass can consume significant amounts of energy, it is desirable to minimize size reduction and instead pretreat larger wood chips prior to biological conversion. To date, however, most laboratory research has been performed on materials that are significantly smaller than those applicable in a commercial setting. As a result, there is a limited understanding of the effects that larger biomass particle size has on the effectiveness of steam explosion pretreatment and subsequent enzymatic hydrolysis of wood chips. To address these concerns, novel downscaled analysis and high throughput pretreatment and hydrolysis (HTPH) were applied to examine whether differences exist in the composition and digestibility within a single pretreated wood chip due to heterogeneous pretreatment across its thickness. Heat transfer modeling, Simons’ stain testing, magnetic resonance imaging (MRI), and scanning electron microscopy (SEM) were applied to probe the effects of pretreatment within and between pretreated wood samples to shed light on potential causes of variation, pointing to enzyme accessibility (i.e., pore size) distribution being a key factor dictating enzyme digestibility in these samples. Application of these techniques demonstrated that the effectiveness of pretreatment of Populus tremuloides can vary substantially over the chip thickness at short pretreatment times, resulting in spatial digestibility effects and overall lower sugar yields in subsequent enzymatic hydrolysis. Finally, these results indicate that rapid decompression pretreatments (e.g., steam explosion) that specifically alter accessibility at lower temperature conditions are well suited for larger wood chips due to the non-uniformity in temperature and digestibility profiles that can result from high temperature and short pretreatment times. Furthermore, this study also demonstrated that wood chips were hydrated primarily through the natural pore structure during pretreatment, suggesting that preserving the natural grain and transport systems in wood during storage and chipping processes could likely promote pretreatment efficacy and uniformity.

  2. How chip size impacts steam pretreatment effectiveness for biological conversion of poplar wood into fermentable sugars

    DOE PAGES

    DeMartini, Jaclyn D.; Foston, Marcus; Meng, Xianzhi; ...

    2015-12-09

    We report that woody biomass is highly recalcitrant to enzymatic sugar release and often requires significant size reduction and severe pretreatments to achieve economically viable sugar yields in biological production of sustainable fuels and chemicals. However, because mechanical size reduction of woody biomass can consume significant amounts of energy, it is desirable to minimize size reduction and instead pretreat larger wood chips prior to biological conversion. To date, however, most laboratory research has been performed on materials that are significantly smaller than those applicable in a commercial setting. As a result, there is a limited understanding of the effects that larger biomass particle size has on the effectiveness of steam explosion pretreatment and subsequent enzymatic hydrolysis of wood chips. To address these concerns, novel downscaled analysis and high throughput pretreatment and hydrolysis (HTPH) were applied to examine whether differences exist in the composition and digestibility within a single pretreated wood chip due to heterogeneous pretreatment across its thickness. Heat transfer modeling, Simons’ stain testing, magnetic resonance imaging (MRI), and scanning electron microscopy (SEM) were applied to probe the effects of pretreatment within and between pretreated wood samples to shed light on potential causes of variation, pointing to enzyme accessibility (i.e., pore size) distribution being a key factor dictating enzyme digestibility in these samples. Application of these techniques demonstrated that the effectiveness of pretreatment of Populus tremuloides can vary substantially over the chip thickness at short pretreatment times, resulting in spatial digestibility effects and overall lower sugar yields in subsequent enzymatic hydrolysis. Finally, these results indicate that rapid decompression pretreatments (e.g., steam explosion) that specifically alter accessibility at lower temperature conditions are well suited for larger wood chips due to the non-uniformity in temperature and digestibility profiles that can result from high temperature and short pretreatment times. Furthermore, this study also demonstrated that wood chips were hydrated primarily through the natural pore structure during pretreatment, suggesting that preserving the natural grain and transport systems in wood during storage and chipping processes could likely promote pretreatment efficacy and uniformity.

  3. Nondestructive Analysis of Astromaterials by Micro-CT and Micro-XRF Analysis for PET Examination

    NASA Technical Reports Server (NTRS)

    Zeigler, R. A.; Righter, K.; Allen, C. C.

    2013-01-01

    An integral part of any sample return mission is the initial description and classification of returned samples by the preliminary examination team (PET). The goal of the PET is to characterize and classify returned samples and make this information available to the larger research community, who then conduct more in-depth studies on the samples. The PET tries to minimize the impact its work has on the sample suite, which has in the past limited PET work to largely visual, nonquantitative measurements (e.g., optical microscopy). More modern techniques can also be utilized by a PET to nondestructively characterize astromaterials in a much more rigorous way. Here we discuss our recent investigations into the applications of micro-CT and micro-XRF analyses with Apollo samples and ANSMET meteorites and assess the usefulness of these techniques for future PETs. Results: The application of micro computerized tomography (micro-CT) to astromaterials is not a new concept. The technique involves scanning samples with high-energy x-rays and constructing 3-dimensional images of the density of materials within the sample. The technique can routinely measure large samples (up to approx. 2700 cu cm) with a small individual voxel size (approx. 30 microns), and has the sensitivity to distinguish the major rock-forming minerals and identify clast populations within brecciated samples. We have recently run a test sample of a terrestrial breccia with a carbonate matrix and multiple igneous clast lithologies. The test results are promising and we will soon analyze an approx. 600 g piece of Apollo sample 14321 to map out the clast population within the sample. Benchtop micro x-ray fluorescence (micro-XRF) instruments can rapidly scan large areas (approx. 100 sq cm) with a small pixel size (approx. 25 microns) and measure the (semi)quantitative composition of largely unprepared surfaces for all elements between Be and U, often with sensitivity on the order of approx. 100 ppm. Our recent testing of meteorite and Apollo samples on micro-XRF instruments has shown that they can easily detect small zircons and phosphates (approx. 10 microns), distinguish different clast lithologies within breccias, and identify different lithologies within small rock fragments (2-4 mm Apollo soil fragments).

  4. Functional Status Score for the Intensive Care Unit (FSS-ICU): An International Clinimetric Analysis of Validity, Responsiveness, and Minimal Important Difference

    PubMed Central

    Huang, Minxuan; Chan, Kitty S.; Zanni, Jennifer M.; Parry, Selina M.; Neto, Saint-Clair G. B.; Neto, Jose A. A.; da Silva, Vinicius Z. M.; Kho, Michelle E.; Needham, Dale M.

    2017-01-01

    Objective To evaluate the internal consistency, validity, responsiveness, and minimal important difference of the Functional Status Score for the Intensive Care Unit (FSS-ICU), a physical function measure designed for the intensive care unit (ICU). Design Clinimetric analysis. Settings Five international data sets from the United States, Australia, and Brazil. Patients 819 ICU patients. Intervention None. Measurements and Main Results Clinimetric analyses were initially conducted separately for each data source and time point to examine generalizability of findings, with pooled analyses performed thereafter to increase power of analyses. The FSS-ICU demonstrated good to excellent internal consistency. There was good convergent and discriminant validity, with significant and positive correlations (r = 0.30 to 0.95) between FSS-ICU and other physical function measures, and generally weaker correlations with non-physical measures (|r| = 0.01 to 0.70). Known group validity was demonstrated by significantly higher FSS-ICU scores among patients without ICU-acquired weakness (Medical Research Council sumscore ≥48 versus <48) and with hospital discharge to home (versus healthcare facility). FSS-ICU at ICU discharge predicted post-ICU hospital length of stay and discharge location. Responsiveness was supported via increased FSS-ICU scores with improvements in muscle strength. Distribution-based methods indicated a minimal important difference of 2.0 to 5.0. Conclusions The FSS-ICU has good internal consistency and is a valid and responsive measure of physical function for ICU patients. The estimated minimal important difference can be used in sample size calculations and in interpreting studies comparing the physical function of groups of ICU patients. PMID:27488220

  5. Responsiveness, minimal detectable change, and minimal clinically important difference of the Nottingham Extended Activities of Daily Living Scale in patients with improved performance after stroke rehabilitation.

    PubMed

    Wu, Ching-yi; Chuang, Li-ling; Lin, Keh-chung; Lee, Shin-da; Hong, Wei-hsien

    2011-08-01

    To determine the responsiveness, minimal detectable change (MDC), and minimal clinically important differences (MCIDs) of the Nottingham Extended Activities of Daily Living (NEADL) scale and to assess the percentages of patients' change scores exceeding the MDC and MCID after stroke rehabilitation. Secondary analyses of patients who received stroke rehabilitation therapy. Medical centers. Patients with stroke (N=78). Secondary analyses of patients who received 1 of 4 rehabilitation interventions. Responsiveness (standardized response mean [SRM]), the MDC(90) (the change score threshold at which one can be 90% confident that a change is true and reliable rather than measurement error), and the MCID on the NEADL score, and percentages of patients exceeding the MDC(90) and MCID. The SRM of the total NEADL scale was 1.3. The MDC(90) value for the total NEADL scale was 4.9, whereas the minimum and maximum of the MCID for the total NEADL score were 2.4 and 6.1 points, respectively. Percentages of patients exceeding the MDC(90), the minimum MCID, and the maximum MCID of the total NEADL score were 50.0%, 73.1%, and 32.1%, respectively. The NEADL is a responsive instrument relevant for measuring change in instrumental activities of daily living after stroke rehabilitation. A patient's change score has to reach 4.9 points on the total scale to indicate a true change. The mean change score of a stroke group on the total NEADL scale should achieve 6.1 points to be regarded as clinically important. Our findings are based on patients with improved NEADL performance after they received specific interventions. Future research with larger sample sizes is warranted to validate these estimates. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
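
    For readers unfamiliar with the distribution-based quantities reported above, the following sketch shows the standard formulas for the standardized response mean and the 90% minimal detectable change (MDC90 = 1.645 x sqrt(2) x SEM, with SEM = SD x sqrt(1 - ICC)). The exact computations and reliability coefficient used in the study may differ, and the numbers below are hypothetical.

      import numpy as np

      def mdc90(baseline_scores, icc):
          # MDC90 = 1.645 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)
          sem = np.std(baseline_scores, ddof=1) * np.sqrt(1.0 - icc)
          return 1.645 * np.sqrt(2.0) * sem

      def srm(change_scores):
          # Standardized response mean: mean change / SD of change
          change = np.asarray(change_scores, dtype=float)
          return change.mean() / change.std(ddof=1)

      # Hypothetical scores, not the study data
      baseline = np.array([30.0, 42, 25, 38, 45, 33, 29, 40])
      change = np.array([6.0, 4, 8, 5, 3, 7, 6, 5])
      print(round(mdc90(baseline, icc=0.90), 1), round(srm(change), 2))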

  6. Carbon Mineralization in Two Ultisols Amended with Different Sources and Particle Sizes of Pyrolyzed Biochar

    EPA Science Inventory

    Biochar produced during pyrolysis has the potential to enhance soil fertility and reduce greenhouse gas emissions. The influence of biochar properties (e.g., particle size) on both short- and long-term carbon (C) mineralization of biochar remains unclear. There is minimal informa...

  7. Why minimally invasive skin sampling techniques? A bright scientific future.

    PubMed

    Wang, Christina Y; Maibach, Howard I

    2011-03-01

    There is increasing interest in minimally invasive skin sampling techniques to assay markers of molecular biology and biochemical processes. This overview examines methodology strengths and limitations, and exciting developments pending in the scientific community. Publications were searched via PubMed, the U.S. Patent and Trademark Office Website, the DermTech Website and the CuDerm Website. The keywords used were noninvasive skin sampling, skin stripping, skin taping, detergent method, ring method, mechanical scrub, reverse iontophoresis, glucose monitoring, buccal smear, hair root sampling, mRNA, DNA, RNA, and amino acid. There is strong interest in finding methods to access internal biochemical, molecular, and genetic processes through noninvasive and minimally invasive external means. Minimally invasive techniques include the widely used skin tape stripping, the abrasion method that includes scraping and detergent, and reverse iontophoresis. The first 2 methods harvest largely the stratum corneum. Hair root sampling (material deeper than the epidermis), buccal smear, shave biopsy, punch biopsy, and suction blistering are also methods used to obtain cellular material for analysis, but involve some degree of increased invasiveness and thus are only briefly mentioned. Existing and new sampling methods are being refined and validated, offering exciting, different noninvasive means of quickly and efficiently obtaining molecular material with which to monitor bodily functions and responses, assess drug levels, and follow disease processes without subjecting patients to unnecessary discomfort and risk.

  8. Vertical accretion sand proxies of gaged floods along the upper Little Tennessee River, Blue Ridge Mountains, USA

    NASA Astrophysics Data System (ADS)

    Leigh, David S.

    2018-02-01

    Understanding environmental hazards presented by river flooding has been enhanced by paleoflood analysis, which uses sedimentary records to document floods beyond historical records. Bottomland overbank deposits (e.g., natural levees, floodbasins, meander scars, low terraces) have potential to serve as continuous paleoflood archives of flood frequency and magnitude, but they have been under-utilized because of uncertainty about whether reliable flood magnitude estimates can be derived from them. The purpose of this paper is to provide a case study that illuminates the tremendous potential of bottomland overbank sediments as reliable proxies of both flood frequency and magnitude. Methods involve correlation of particle-size measurements of the coarse tail of overbank deposits (> 0.25 mm sand) from three separate sites with historical flood discharge records for the upper Little Tennessee River in the Blue Ridge Mountains of the southeastern United States. Results show that essentially all floods larger than a 20% probability event can be detected by the coarse tail of particle-size distributions, especially if the temporal resolution of sampling is annual or sub-annual. Coarser temporal resolution (1.0 to 2.5 year sample intervals) provides an adequate record of large floods, but is unable to discriminate individual floods separated by only one to three years. Measurements of > 0.25 mm sand that are normalized against a smoothed trend line through the down-column data produce highly significant correlations (R2 values of 0.50 to 0.60 with p-values of 0.004 to < 0.001) between sand peak values and flood peak discharges, indicating that flood magnitude can be reliably estimated. In summary, bottomland overbank deposits can provide excellent continuous records of paleofloods when the following conditions are met: 1) Stable depositional sites should be chosen; 2) Analysis should concentrate on the coarse tails of particle-size distributions; 3) Sampling of sediment intervals should achieve annual or better resolution; 4) Time-series data of particle size should be detrended to minimize variation from dynamic aspects of fluvial sedimentation that are not related to flood magnitude; and 5) Multiple sites should be chosen to allow for replication of findings.
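
    The normalization step mentioned above (sand values divided by a smoothed down-column trend before correlation with gaged discharges) can be sketched as follows. The choice of a simple moving-average smoother and all numbers are assumptions for illustration; the study's actual smoothing and sampling details may differ.

      import numpy as np

      def detrend_sand(sand_pct, window=9):
          # Normalize a down-column coarse-sand (%) series against a smoothed
          # trend (a simple moving average is assumed here; the edge bins are
          # biased by zero padding) so that peaks reflect individual floods
          # rather than slow shifts in overbank sedimentation.
          sand_pct = np.asarray(sand_pct, dtype=float)
          trend = np.convolve(sand_pct, np.ones(window) / window, mode="same")
          return sand_pct / trend  # values well above 1 mark sand peaks

      # Invented down-column series (% sand > 0.25 mm per sampling interval).
      # In the study, peak values of a detrended series like this were
      # regressed against gaged peak discharges to estimate flood magnitude.
      print(np.round(detrend_sand([2, 3, 9, 3, 2, 4, 12, 5, 3, 2, 8, 3, 2]), 2))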

  9. Application of Aerosol Hygroscopicity Measured at the Atmospheric Radiation Measurement Program's Southern Great Plains Site to Examine Composition and Evolution

    NASA Technical Reports Server (NTRS)

    Gasparini, Roberto; Runjun, Li; Collins, Don R.; Ferrare, Richard A.; Brackett, Vincent G.

    2006-01-01

    A Differential Mobility Analyzer/Tandem Differential Mobility Analyzer (DMA/TDMA) was used to measure submicron aerosol size distributions, hygroscopicity, and occasionally volatility during the May 2003 Aerosol Intensive Operational Period (IOP) at the Central Facility of the Atmospheric Radiation Measurement Program's Southern Great Plains (ARM SGP) site. Hygroscopic growth factor distributions for particles at eight dry diameters ranging from 0.012 micrometers to 0.600 micrometers were measured throughout the study. For a subset of particle sizes, more detailed measurements were occasionally made in which the relative humidity or temperature to which the aerosol was exposed was varied over a wide range. These measurements, in conjunction with back-trajectory clustering, were used to infer aerosol composition and to gain insight into the processes responsible for evolution. The hygroscopic growth of both the smallest and largest particles analyzed was typically less than that of particles with dry diameters of about 0.100 micrometers. It is speculated that condensation of secondary organic aerosol on nucleation mode particles is largely responsible for the minimal hygroscopic growth observed at the smallest sizes considered. Growth factor distributions of the largest particles characterized typically contained a nonhygroscopic mode believed to be composed primarily of dust. A model was developed to characterize the hygroscopic properties of particles within a size distribution mode through analysis of the fixed size hygroscopic growth measurements. The performance of this model was quantified through comparison of the measured fixed size hygroscopic growth factor distributions with those simulated through convolution of the size-resolved concentration contributed by each of the size modes and the mode-resolved hygroscopicity. This transformation from size-resolved hygroscopicity to mode-resolved hygroscopicity facilitated examination of changes in the hygroscopic properties of particles within a size distribution mode that accompanied changes in the sizes of those particles. This model was used to examine three specific cases in which the sampled aerosol evolved slowly over a period of hours or days.

  10. Human Mars Ascent Vehicle Configuration and Performance Sensitivities

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara P.; Thomas, Herbert D.; Stephens, Walter; Collins, Tim; Rucker, Michelle; Gernhardt, Mike; Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    The total ascent vehicle mass drives performance requirements for the Mars descent systems and the Earth to Mars transportation elements. Minimizing Mars Ascent Vehicle (MAV) mass is a priority and minimizing the crew cabin size and mass is one way to do that. Human missions to Mars may utilize several small cabins where crew members could live for days up to a couple of weeks. A common crew cabin design that can perform in each of these applications is desired and could reduce the overall mission cost. However, for the MAV, the crew cabin size and mass can have a large impact on vehicle design and performance. This paper explores the sensitivities to trajectory, propulsion, crew cabin size and the benefits and impacts of using a common crew cabin design for the MAV. Results of these trades will be presented along with mass and performance estimates for the selected design.

  11. Direct drive wind turbine

    DOEpatents

    Bywaters, Garrett Lee; Danforth, William; Bevington, Christopher; Stowell, Jesse; Costin, Daniel

    2006-09-19

    A wind turbine is provided that minimizes the size of the drive train and nacelle while maintaining the power electronics and transformer at the top of the tower. The turbine includes a direct drive generator having an integrated disk brake positioned radially inside the stator while minimizing the potential for contamination. The turbine further includes a means for mounting a transformer below the nacelle within the tower.

  12. Direct drive wind turbine

    DOEpatents

    Bywaters, Garrett; Danforth, William; Bevington, Christopher; Jesse, Stowell; Costin, Daniel

    2006-10-10

    A wind turbine is provided that minimizes the size of the drive train and nacelle while maintaining the power electronics and transformer at the top of the tower. The turbine includes a direct drive generator having an integrated disk brake positioned radially inside the stator while minimizing the potential for contamination. The turbine further includes a means for mounting a transformer below the nacelle within the tower.

  13. Direct drive wind turbine

    DOEpatents

    Bywaters, Garrett; Danforth, William; Bevington, Christopher; Stowell, Jesse; Costin, Daniel

    2006-07-11

    A wind turbine is provided that minimizes the size of the drive train and nacelle while maintaining the power electronics and transformer at the top of the tower. The turbine includes a direct drive generator having an integrated disk brake positioned radially inside the stator while minimizing the potential for contamination. The turbine further includes a means for mounting a transformer below the nacelle within the tower.

  14. Direct drive wind turbine

    DOEpatents

    Bywaters, Garrett; Danforth, William; Bevington, Christopher; Jesse, Stowell; Costin, Daniel

    2007-02-27

    A wind turbine is provided that minimizes the size of the drive train and nacelle while maintaining the power electronics and transformer at the top of the tower. The turbine includes a direct drive generator having an integrated disk brake positioned radially inside the stator while minimizing the potential for contamination. The turbine further includes a means for mounting a transformer below the nacelle within the tower.

  15. Minimal T-wave representation and its use in the assessment of drug arrhythmogenicity.

    PubMed

    Shakibfar, Saeed; Graff, Claus; Kanters, Jørgen K; Nielsen, Jimmi; Schmidt, Samuel; Struijk, Johannes J

    2017-05-01

    Recently, numerous models and techniques have been developed for analyzing and extracting features from the T wave which could be used as biomarkers for drug-induced abnormalities. The majority of these techniques and algorithms use features that determine readily apparent characteristics of the T wave, such as duration, area, amplitude, and slopes. In the present work the T wave was down-sampled to a minimal rate, such that a good reconstruction was still possible. The entire T wave was then used as a feature vector to assess drug-induced repolarization effects. The ability of the samples or combinations of samples obtained from the minimal T-wave representation to correctly classify a group of subjects before and after receiving d,l-sotalol 160 mg and 320 mg was evaluated using a linear discriminant analysis (LDA). The results showed that a combination of eight samples from the minimal T-wave representation can be used to identify normal from abnormal repolarization significantly better compared to the heart rate-corrected QT interval (QTc). It was further indicated that the interval from the peak of the T wave to the end of the T wave (Tpe) becomes relatively shorter after IKr inhibition by d,l-sotalol and that the most pronounced repolarization changes were present in the ascending segment of the minimal T-wave representation. The minimal T-wave representation can potentially be used as a new tool to identify normal from abnormal repolarization in drug safety studies. © 2016 Wiley Periodicals, Inc.
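
    A minimal sketch of the idea of using a down-sampled T wave directly as a feature vector for discriminant analysis is given below. The eight-point representation follows the abstract; the interpolation-based down-sampling, the random placeholder waveforms, and the use of scikit-learn's LDA are assumptions for illustration only.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def minimal_t_wave(t_wave, n_points=8):
          # Down-sample a segmented T wave onto an even grid of n_points
          # samples and use those samples directly as the feature vector.
          t_wave = np.asarray(t_wave, dtype=float)
          grid = np.linspace(0, len(t_wave) - 1, n_points)
          return np.interp(grid, np.arange(len(t_wave)), t_wave)

      # Placeholder waveforms and labels (0 = baseline, 1 = post-drug)
      rng = np.random.default_rng(0)
      waves = rng.normal(size=(40, 300))
      labels = np.repeat([0, 1], 20)
      X = np.vstack([minimal_t_wave(w) for w in waves])
      clf = LinearDiscriminantAnalysis().fit(X, labels)
      print(clf.score(X, labels))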

  16. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in the sampling error estimates arising from changes in rain statistics, due 1) to evolution of the official algorithms used to process the data and 2) to differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  17. Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities

    NASA Astrophysics Data System (ADS)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.; Lackner, C. N.

    2011-11-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity Vrot or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of Vrot applicable to large galaxy samples from imaging surveys. To achieve this goal, we have constructed a sample of 189 disc galaxies at redshifts z < 0.1 with long-slit Hα spectroscopy from Pizagno et al. and new observations. By construction, this sample is a fair subsample of a large, well-defined parent disc sample of ˜170 000 galaxies selected from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). The optimal photometric estimator of Vrot we find is stellar mass M★ from Bell et al., based on the linear combination of a luminosity and a colour. Assuming a Kroupa initial mass function (IMF), we find: log [V80/(km s-1)] = (2.142 ± 0.004) + (0.278 ± 0.010)[log (M★/M⊙) - 10.10], where V80 is the rotation velocity measured at the radius R80 containing 80 per cent of the i-band galaxy light. This relation has an intrinsic Gaussian scatter ? dex and a measured scatter σmeas= 0.056 dex in log V80. For a fixed IMF, we find that the dynamical-to-stellar mass ratios within R80, (Mdyn/M★)(R80), decrease from approximately 10 to 3, as stellar mass increases from M★≈ 109 to 1011 M⊙. At a fixed stellar mass, (Mdyn/M★)(R80) increases with disc size, so that it correlates more tightly with stellar surface density than with stellar mass or disc size alone. We interpret the observed variation in (Mdyn/M★)(R80) with disc size as a reflection of the fact that disc size dictates the radius at which Mdyn/M★ is measured, and consequently, the fraction of the dark matter 'seen' by the gas at that radius. For the lowest M★ galaxies, we find a positive correlation between TFR residuals and disc sizes, indicating that the total density profile is dominated by dark matter on these scales. For the highest M★ galaxies, we find instead a weak negative correlation, indicating a larger contribution of stars to the total density profile. This change in the sense of the correlation (from positive to negative) is consistent with the decreasing trend in (Mdyn/M★)(R80) with stellar mass. In future work, we will use these results to study disc galaxy formation and evolution and perform a fair, statistical analysis of the dynamics and masses of a photometrically selected sample of disc galaxies.
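
    The quoted relation can be applied directly to estimate a rotation velocity from a stellar mass; the short sketch below simply evaluates log V80 = 2.142 + 0.278 [log(M*/Msun) - 10.10] (which assumes the Kroupa IMF, as stated above) and ignores the intrinsic and measurement scatter.

      import numpy as np

      def v80_from_stellar_mass(m_star_solar):
          # log V80 = 2.142 + 0.278 * (log10(M*/Msun) - 10.10), V80 in km/s;
          # scatter terms are ignored in this sketch.
          log_v80 = 2.142 + 0.278 * (np.log10(m_star_solar) - 10.10)
          return 10.0 ** log_v80

      print(round(v80_from_stellar_mass(1e10), 1))  # ~130 km/s for M* = 1e10 Msun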

  18. Mini-Membrane Evaporator for Contingency Spacesuit Cooling

    NASA Technical Reports Server (NTRS)

    Makinen, Janice V.; Bue, Grant C.; Campbell, Colin; Petty, Brian; Craft, Jesse; Lynch, William; Wilkes, Robert; Vogel, Matthew

    2015-01-01

    The next-generation Advanced Extravehicular Mobility Unit (AEMU) Portable Life Support System (PLSS) is integrating a number of new technologies to improve reliability and functionality. One of these improvements is the development of the Auxiliary Cooling Loop (ACL) for contingency crewmember cooling. The ACL is a completely redundant, independent cooling system that consists of a small evaporative cooler (the Mini Membrane Evaporator, Mini-ME), an independent pump, an independent feedwater assembly, and an independent Liquid Cooling Garment (LCG). The Mini-ME utilizes the same hollow fiber technology featured in the full-sized AEMU PLSS cooling device, the Spacesuit Water Membrane Evaporator (SWME), but Mini-ME occupies only approximately 25% of the volume of SWME, thereby providing only the necessary crewmember cooling in a contingency situation. The ACL provides a number of benefits when compared with the current EMU PLSS contingency cooling technology, which relies upon a Secondary Oxygen Vessel: contingency crewmember cooling can be provided for a longer period of time; more contingency situations can be accounted for; there is no reliance on a Secondary Oxygen Vessel (SOV) for contingency cooling, thereby allowing a reduction in SOV size and pressure; and the ACL can be recharged, allowing the AEMU PLSS to be reused even after a contingency event. The first iteration of Mini-ME was developed and tested in-house. Mini-ME is currently packaged in AEMU PLSS 2.0, where it is being tested in environments and situations that are representative of potential future Extravehicular Activities (EVAs). The second iteration of Mini-ME, known as Mini-ME2, is currently being developed to offer more heat rejection capability. The development of this contingency evaporative cooling system will contribute to a more robust and comprehensive AEMU PLSS.

  19. Human genomic DNA quantitation system, H-Quant: development and validation for use in forensic casework.

    PubMed

    Shewale, Jaiprakash G; Schneida, Elaine; Wilson, Jonathan; Walker, Jerilyn A; Batzer, Mark A; Sinha, Sudhir K

    2007-03-01

    The human DNA quantification (H-Quant) system, developed for use in human identification, enables quantitation of human genomic DNA in biological samples. The assay is based on real-time amplification of AluYb8 insertions in hominoid primates. The relatively high copy number of subfamily-specific Alu repeats in the human genome enables quantification of very small amounts of human DNA. The oligonucleotide primers present in H-Quant are specific for human DNA and closely related great apes. During the real-time PCR, the SYBR Green I dye binds to the DNA that is synthesized by the human-specific AluYb8 oligonucleotide primers. The fluorescence of the bound SYBR Green I dye is measured at the end of each PCR cycle. The cycle at which the fluorescence crosses the chosen threshold correlates to the quantity of amplifiable DNA in that sample. The minimal sensitivity of the H-Quant system is 7.6 pg/microL of human DNA. The amplicon generated in the H-Quant assay is 216 bp, which is within the same range of the common amplifiable short tandem repeat (STR) amplicons. This size amplicon enables quantitation of amplifiable DNA as opposed to a quantitation of degraded or nonamplifiable DNA of smaller sizes. Development and validation studies were performed on the 7500 real-time PCR system following the Quality Assurance Standards for Forensic DNA Testing Laboratories.
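
    Converting a threshold cycle to a DNA quantity is normally done against a standard curve fitted to serial dilutions; the sketch below shows that generic calculation only. It is not the validated H-Quant calibration procedure, and the standard-curve numbers are invented.

      import numpy as np

      def fit_standard_curve(ct_values, quantities):
          # Fit Ct = slope * log10(quantity) + intercept on dilution standards
          slope, intercept = np.polyfit(np.log10(quantities), ct_values, 1)
          return slope, intercept

      def quantify(ct, slope, intercept):
          # Invert the standard curve for an unknown sample's Ct
          return 10.0 ** ((ct - intercept) / slope)

      # Invented ten-fold dilution standards (ng/uL) and their Ct values
      std_qty = np.array([10.0, 1.0, 0.1, 0.01])
      std_ct = np.array([20.1, 23.5, 26.9, 30.3])
      slope, intercept = fit_standard_curve(std_ct, std_qty)
      print(round(quantify(25.0, slope, intercept), 2))  # ~0.36 ng/uL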

  20. Potential health impacts of heavy-metal exposure at the Tar Creek Superfund site, Ottawa County, Oklahoma.

    PubMed

    Neuberger, John S; Hu, Stephen C; Drake, K David; Jim, Rebecca

    2009-02-01

    The potential relationship between heavy-metal exposure and health problems was evaluated at the Tar Creek Superfund site, Ottawa County, Oklahoma, USA. Observed versus expected mortality was calculated for selected conditions in the County and exposed cities. Excess mortality was found for stroke and heart disease when comparing the exposed County to the state but not when comparing the exposed cities to the nonexposed rest of the County. However, sample sizes in the exposed area were small, population emigration has been ongoing, and geographic coding of mortality data was incomplete. In an exposed community, 62.5% of children under the age of 6 years had blood lead levels exceeding 10 microg/dl. The relationships between heavy-metal exposure and children's health and chronic disease in adults suggest that a more thorough investigation might be warranted. A number of possible environmental and health studies are suggested, including those focusing on possible central nervous system impacts. Unfortunately, the exposed population is dispersing. One lesson learned at this site is that health studies need to be conducted as soon as possible after an environmental problem is identified, both to study the impact of the most acute exposures and to maximize the study sample size (including those exposed to higher doses) and minimize the loss of individuals to follow-up.

  1. Pharmacogenomics in neurology: current state and future steps.

    PubMed

    Chan, Andrew; Pirmohamed, Munir; Comabella, Manuel

    2011-11-01

    In neurology, as in any other clinical specialty, there is a need to develop treatment strategies that allow stratification of therapies to optimize efficacy and minimize toxicity. Pharmacogenomics is one such method for therapy optimization: it aims to elucidate the relationship between human genome sequence variation and differential drug responses. Candidate-gene approaches have focused on absorption, distribution, metabolism, and elimination (ADME)-related genes (pharmacokinetic pathways) and on potential drug targets (pharmacodynamic pathways). To date, however, only a few genetic variants have been incorporated into clinical algorithms. Unfortunately, a large number of studies have produced contradictory results due to a number of deficiencies, including small sample sizes and inadequate phenotyping and genotyping strategies. Thus, there still exists an urgent need to establish biomarkers that could help to select patients with an optimal benefit-to-risk relationship. Here we review recent advances, and limitations, in pharmacogenomics for agents used in neuroimmunology, neurodegenerative diseases, ischemic stroke, epilepsy, and primary headaches. Further work is still required in all of these areas, which needs to progress on several fronts, including better standardized phenotyping, appropriate sample sizes through multicenter collaborations, and judicious use of new technological advances such as genome-wide approaches, next-generation sequencing, and systems biology. In time, this is likely to lead to improvements in the benefit-harm balance of neurological therapies, cost efficiency, and identification of new drugs. Copyright © 2011 American Neurological Association.

  2. Accelerating sample preparation through enzyme-assisted microfiltration of Salmonella in chicken extract.

    PubMed

    Vibbert, Hunter B; Ku, Seockmo; Li, Xuan; Liu, Xingya; Ximenes, Eduardo; Kreke, Thomas; Ladisch, Michael R; Deering, Amanda J; Gehring, Andrew G

    2015-01-01

    Microfiltration of chicken extracts has the potential to significantly decrease the time required to detect Salmonella, as long as the extract can be efficiently filtered and the pathogenic microorganisms are kept in a viable state during this process. We present conditions that enable microfiltration by adding endopeptidase from Bacillus amyloliquefaciens to chicken extracts or chicken rinse prior to microfiltration with fluid flow on both the retentate and permeate sides of 0.2 μm cutoff polysulfone and polyethersulfone hollow fiber membranes. After treatment with this protease, the distribution of micron, submicron, and nanometer particles in chicken extracts changes so that the size of the remaining particles corresponds to 0.4-1 μm. Together with the alteration of dissolved proteins, this change helps to explain how membrane fouling might be minimized, because the potential foulants are significantly smaller or larger than the membrane pore size. At the same time, we found that the presence of protein protects Salmonella from protease action, thus maintaining cell viability. Concentration and recovery of 1-10 CFU Salmonella/mL from 400 mL chicken rinse is possible in less than 4 h, with the microfiltration step requiring less than 25 min at fluxes of 0.028-0.32 mL/(cm² min). The entire procedure, from sample processing to detection by polymerase chain reaction, is completed in 8 h. © 2015 American Institute of Chemical Engineers.

  3. Morphology and characterization of 3D micro-porous structured chitosan scaffolds for tissue engineering.

    PubMed

    Hsieh, Wen-Chuan; Chang, Chih-Pong; Lin, Shang-Ming

    2007-06-15

    This research studies the morphology and characterization of three-dimensional (3D) micro-porous structures produced from biodegradable chitosan for use as scaffolds for cell culture. The chitosan 3D micro-porous structures were produced by a simple liquid hardening method, which includes foaming by mechanical stirring without any added chemical foaming agent and hardening by NaOH cross-linking. The pore size and porosity were controlled by the mechanical stirring strength. This study covers the morphology of the chitosan scaffolds and the characterization of the mechanical properties, water absorption properties, and in vitro enzymatic degradation of the 3D micro-porous structures. The results show that chitosan 3D micro-porous structures were successfully produced. Better-formed samples were obtained at chitosan concentrations of 1-3% and an NaOH concentration of 5%. Faster stirring rates produced samples with smaller pore diameters, but at rotation speeds of 4000 rpm and higher the change in pore size was minimal. Water absorption decreased as the pore diameter of the chitosan scaffolds decreased. Stress-strain analysis showed that the mechanical properties of the chitosan scaffolds improved with smaller pore diameters. The in vitro enzymatic degradation results show that the disintegration of the chitosan scaffolds increased with processing time, approaching equilibrium when the disintegration rate reached about 20%.

  4. Estimating Dermal Transfer of Copper Particles from the ...

    EPA Pesticide Factsheets

    Lumber pressure-treated with micronized copper was examined for the release of copper and copper micro/nanoparticles using a surface wipe method to simulate dermal transfer. In 2003, the wood industry began replacing CCA-treated lumber products for residential use with copper-based formulations. Micronized copper (nano- to micron-sized particles) has become the preferred treatment formulation. There is a lack of information on the release of copper, the fate of the particles during dermal contact, and the copper exposure level to children from hand-to-mouth transfer. For the current study, three treated lumber products, two micronized copper and one ionic copper, were purchased from commercial retailers. The boards were left to weather outdoors for approximately 1 year. Over that time period, hand wipe samples were collected periodically to determine copper transfer from the wood surfaces. The two micronized formulations and the ionic formulation released similar levels of total copper. The amount of copper released was high initially, but decreased to a constant level (~1.5 mg m-2) after the first month of outdoor exposure. Copper particles were identified on the sampling cloths during the first two months of the experiment, after which the levels of copper were insufficient to collect interpretable data. After 1 month, the particles exhibited minimal changes in shape and size. At the end of 2 months, significant deterioration of the particles was observed.

  5. Low-Power SOI CMOS Transceiver

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene (Technical Monitor); Cheruiyot, K.; Cothern, J.; Huang, D.; Singh, S.; Zencir, E.; Dogan, N.

    2003-01-01

    The work aims at developing a low-power Silicon-on-Insulator Complementary Metal Oxide Semiconductor (SOI CMOS) transceiver for deep-space communications. The RF receiver must accomplish the following tasks: (a) select the desired radio channel and reject other radio signals, (b) amplify the desired radio signal and translate it back to baseband, and (c) detect and decode the information with low BER. In order to minimize cost and achieve a high level of integration, the receiver architecture should use the least number of external filters and passive components. It should also consume the least amount of power to minimize battery cost, size, and weight. One of the most stringent requirements for deep-space communication is low-power operation. Our study identified two candidate architectures that meet these requirements: (1) a low-IF receiver and (2) a sub-sampling receiver. The low-IF receiver uses the minimum number of external components. Compared to a zero-IF (direct conversion) architecture, it has less severe offset and flicker noise problems. The sub-sampling receiver amplifies the RF signal and samples it using a track-and-hold sub-sampling mixer. These architectures provide a low-power solution for short-range communications missions on Mars. Accomplishments to date include: (1) System-level design and simulation of a Double-Differential PSK receiver, (2) Implementation of the Honeywell SOI CMOS process design kit (PDK) in Cadence design tools, (3) Design of test circuits to investigate relationships between layout techniques, geometry, and low-frequency noise in SOI CMOS, (4) Model development and verification of on-chip spiral inductors in the SOI CMOS process, (5) Design/implementation of a low-power low-noise amplifier (LNA) and mixer for the low-IF receiver, and (6) Design/implementation of a high-gain LNA for the sub-sampling receiver. Our initial results show that a substantial improvement in power consumption is achieved using SOI CMOS as compared to a standard CMOS process. Potential advantages of SOI CMOS for deep-space communication electronics include: (1) Radiation hardness, (2) Low-power operation, and (3) System-on-Chip (SOC) solutions.

  6. Comparison among filter-based, impactor-based and continuous techniques for measuring atmospheric fine sulfate and nitrate

    NASA Astrophysics Data System (ADS)

    Nie, Wei; Wang, Tao; Gao, Xiaomei; Pathak, Ravi Kant; Wang, Xinfeng; Gao, Rui; Zhang, Qingzhu; Yang, Lingxiao; Wang, Wenxing

    2010-11-01

    Filter-based methods for sampling aerosols are subject to great uncertainty if the gas-particle interactions on filter substrates are not properly handled. Sampling artifacts depend on both meteorological conditions and the chemical mix of the atmosphere. Despite numerous studies on the subject, very few have evaluated filter-based methods in Asian environments. This paper reports the results of a comparison of the performance of two filter-based samplers (a Thermo Anderson Chemical Speciation Monitor (RAAS) and a honeycomb denuder filter-pack system), a Micro Orifice Uniform Deposit Impactor (MOUDI), and a real-time ambient ion monitor (AIM, URG9000B) in measuring atmospheric concentrations of PM 2.5 sulfate and nitrate. Field studies were conducted at an urban site in Jinan, Shandong province, during the winter of 2007 and at a rural site near Beijing in the summer of 2008. The AIM was first compared with the honeycomb denuder filter-pack system, which was considered to have minimal sampling artifacts. After some modifications were made to it, the AIM showed good performance for both sulfate and nitrate measurement at the two sites and was then used to evaluate the other instruments. For the un-denuded RAAS, the extent of sampling artifacts for nitrate on quartz filters was negligible, while that on Teflon filters was also minimal at high nitrate concentrations (>10 μg m-3); however, loss through evaporation was significant (~75%) at low nitrate concentrations under hot summer conditions. The MOUDI using aluminum substrates suffered a significant loss of nitrate (50-70%) under summer conditions due to evaporation. Considering that aluminum substrates are still widely used to obtain size-resolved aerosol compositions because of their low cost and accurate mass weighing, caution should be taken regarding the potentially significant underestimation of semi-volatile components such as ammonium nitrate.

  7. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
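
    For very small instances, the gap between the ground and first excited states of the interpolating Hamiltonian can be scanned by brute-force diagonalization, which is what the sketch below does (within the spin-flip-symmetric sector, since the Z2 symmetry of the cost Hamiltonian otherwise makes the final ground state degenerate). This is an illustrative numerical check under assumed conventions, not the asymptotic analysis of the paper; the instance and function names are arbitrary.

      import numpy as np

      SX = np.array([[0.0, 1.0], [1.0, 0.0]])
      SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

      def one_site(op, i, n):
          # Operator `op` acting on qubit i of an n-qubit register
          m = np.eye(1)
          for k in range(n):
              m = np.kron(m, op if k == i else np.eye(2))
          return m

      def minimal_gap(numbers, n_s=201):
          # H(s) = (1 - s) * H_B + s * H_P with driver H_B = -sum_i X_i and
          # cost H_P = (sum_i a_i Z_i)^2, whose ground states encode the most
          # balanced partitions of the numbers a_i.
          n = len(numbers)
          Sz = sum(a * one_site(SZ, i, n) for i, a in enumerate(numbers))
          Hp = Sz @ Sz
          Hb = -sum(one_site(SX, i, n) for i in range(n))
          # Restrict to the +1 sector of the global spin flip X (x) ... (x) X
          flip = np.eye(2 ** n)
          for i in range(n):
              flip = flip @ one_site(SX, i, n)
          w, v = np.linalg.eigh(flip)
          sym = v[:, w > 0.5]
          gaps = []
          for s in np.linspace(0.0, 1.0, n_s):
              e = np.linalg.eigvalsh(sym.T @ ((1 - s) * Hb + s * Hp) @ sym)
              gaps.append(e[1] - e[0])
          return min(gaps)

      rng = np.random.default_rng(1)
      print(minimal_gap(rng.random(6)))  # random 6-number instance (64 states)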

  8. MEMS for medical technology applications

    NASA Astrophysics Data System (ADS)

    Frisk, Thomas; Roxhed, Niclas; Stemme, Göran

    2007-01-01

    This paper gives an in-depth description of two recent projects at the Royal Institute of Technology (KTH) which utilize MEMS and microsystem technology for the realization of components intended for specific applications in medical technology and diagnostic instrumentation. By novel use of the DRIE fabrication technology we have developed side-opened out-of-plane silicon microneedles intended for use in transdermal drug delivery applications. The side opening reduces clogging probability during penetration into the skin and increases the up-take area of the liquid in the tissue. These microneedles offer about 200 µm deep and pain-free skin penetration. We have been able to combine the microneedle chip with an electrically and heat controlled liquid actuator device where expandable microspheres are used to push doses of drug liquids into the skin. The entire unit is made of low-cost materials in the form of a square one-cm-sized patch. Finally, the design, fabrication and evaluation of an integrated miniaturized Quartz Crystal Microbalance (QCM) based "electronic nose" microsystem for detection of narcotics is described. The work integrates a novel environment-to-chip sample interface with the sensor element. The choice of multifunctional materials and the geometric features of a four-component microsystem allow a functional integration of a QCM crystal, electrical contacts, fluidic contacts and a sample interface in a single system with minimal assembly effort, a potential for low-cost manufacturing, and a system size (12 × 12 × 4 mm³) and weight a few orders of magnitude smaller than those of commercially available instruments. The sensor chip was successfully used for the detection of 200 ng of a narcotics sample.

  9. Drug carrier nanoparticles that penetrate human chronic rhinosinusitis mucus

    PubMed Central

    Lai, Samuel K.; Suk, Jung Soo; Pace, Amanda; Wang, Ying-Ying; Yang, Ming; Mert, Olcay; Chen, Jeane; Kim, Jean; Hanes, Justin

    2011-01-01

    No effective therapies currently exist for chronic rhinosinusitis (CRS), a persistent inflammatory condition characterized by the accumulation of highly viscoelastic mucus (CRSM) in the sinuses. Nanoparticle therapeutics offer promise for localized therapies for CRS, but must penetrate CRSM in order to avoid washout during sinus cleansing and to reach underlying epithelial cells. Prior research has not established whether nanoparticles can penetrate the tenacious CRSM barrier, or instead become trapped. Here, we first measured the diffusion rates of polystyrene nanoparticles and the same nanoparticles modified with muco-inert polyethylene glycol (PEG) coatings in fresh, minimally perturbed CRSM collected during endoscopic sinus surgery from CRS patients with and without nasal polyp. We found that uncoated polystyrene particles, previously shown to be mucoadhesive, were immobilized in all CRSM samples tested. In contrast, densely PEGylated particles as large as 200 nm were able to readily penetrate all CRSM samples from patients with CRS alone, and nearly half of CRSM samples from patients with nasal polyp. Based on the mobility of different sized PEGylated particles, we estimate the average pore size of fresh CRSM to be at least 150 ± 50 nm. Guided by these studies, we formulated mucus-penetrating particles (MPP) composed of PLGA and Pluronics, two materials with a long history of safety and use in humans. We showed that biodegradable MPP are capable of rapidly penetrating CRSM at average speeds up to only 20-fold slower than their theoretical speeds in water. Our findings strongly support the development of mucus-penetrating nanomedicines for the treatment of CRS. PMID:21665271

  10. An open-source and low-cost monitoring system for precision enology.

    PubMed

    Di Gennaro, Salvatore Filippo; Matese, Alessandro; Mancin, Mirko; Primicerio, Jacopo; Palliotti, Alberto

    2014-12-05

    Winemaking is a dynamic process, where microbiological and chemical effects may strongly differentiate products from the same vineyard and even between wine vats. This high variability means an increase in work in terms of control and process management. The winemaking process therefore requires a site-specific approach in order to optimize cellar practices and quality management, suggesting a new concept of winemaking, identified as Precision Enology. The Institute of Biometeorology of the Italian National Research Council has developed a wireless monitoring system, consisting of a series of nodes integrated in barrel bungs with sensors for the measurement of wine physical and chemical parameters in the barrel. This paper describes an open-source evolution of the preliminary prototype, using Arduino-based technology. Results have shown good performance in terms of data transmission and accuracy, minimal size and power consumption. The system has been designed to create a low-cost product, which allows a remote and real-time control of wine evolution in each barrel, minimizing costs and time for sampling and laboratory analysis. The possibility of integrating any kind of sensors makes the system a flexible tool that can satisfy various monitoring needs.

  11. Factors influencing real time internal structural visualization and dynamic process monitoring in plants using synchrotron-based phase contrast X-ray imaging

    PubMed Central

    Karunakaran, Chithra; Lahlali, Rachid; Zhu, Ning; Webb, Adam M.; Schmidt, Marina; Fransishyn, Kyle; Belev, George; Wysokinski, Tomasz; Olson, Jeremy; Cooper, David M. L.; Hallin, Emil

    2015-01-01

    Minimally invasive investigation of plant parts (root, stem, leaves, and flower) has good potential to elucidate the dynamics of plant growth, morphology, physiology, and root-rhizosphere interactions. Laboratory based absorption X-ray imaging and computed tomography (CT) systems are extensively used for in situ feasibility studies of plants grown in natural and artificial soil. These techniques have challenges such as low contrast between soil pore space and roots, long X-ray imaging time, and low spatial resolution. In this study, the use of synchrotron (SR) based phase contrast X-ray imaging (PCI) has been demonstrated as a minimally invasive technique for imaging plants. Above ground plant parts and roots of 10 day old canola and wheat seedlings grown in sandy clay loam soil were successfully scanned and reconstructed. Results confirmed that SR-PCI can deliver good quality images to study dynamic and real time processes such as cavitation and water-refilling in plants. The advantages of SR-PCI, effect of X-ray energy, and effective pixel size to study plant samples have been demonstrated. The use of contrast agents to monitor physiological processes in plants was also investigated and discussed. PMID:26183486

  12. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
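
    A rough sketch of the MM-by-coordinate-descent idea for MCP-penalized logistic regression is given below: the logistic loss along each coordinate is majorized by a quadratic with the fixed curvature bound sum_i x_ij^2 / (4n) (since p(1-p) <= 1/4), so no scaling factor has to be recomputed, while the MCP penalty is handled exactly in each one-dimensional update. This is not the authors' implementation; the intercept is omitted, the tuning values are arbitrary, and the synthetic data are for illustration only.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * max(abs(z) - t, 0.0)

      def mcp_coordinate_min(u, v, lam, gamma):
          # Minimizer of (v/2)(b - u)^2 + MCP(|b|; lam, gamma) in one coordinate
          if abs(u) <= gamma * lam:
              return soft_threshold(u, lam / v) / (1.0 - 1.0 / (gamma * v))
          return u

      def mmcd_logistic(X, y, lam, gamma=8.0, n_cycles=200):
          # Coordinate descent on a quadratic majorizer of the logistic loss
          # with fixed curvatures v_j = sum_i x_ij^2 / (4n); intercept omitted.
          n, p = X.shape
          beta = np.zeros(p)
          v = (X ** 2).sum(axis=0) / (4.0 * n)
          assert gamma > 1.0 / v.min(), "gamma must exceed 1/v_j for convex subproblems"
          for _ in range(n_cycles):
              for j in range(p):
                  prob = 1.0 / (1.0 + np.exp(-X @ beta))
                  grad_j = X[:, j] @ (prob - y) / n
                  u = beta[j] - grad_j / v[j]
                  beta[j] = mcp_coordinate_min(u, v[j], lam, gamma)
          return beta

      # Toy usage with synthetic, roughly standardized predictors
      rng = np.random.default_rng(1)
      X = rng.standard_normal((100, 20))
      y = (X[:, 0] - X[:, 1] + rng.standard_normal(100) > 0).astype(float)
      print(np.round(mmcd_logistic(X, y, lam=0.05), 2))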

  13. Optimal power distribution for minimizing pupil walk in a 7.5X afocal zoom lens

    NASA Astrophysics Data System (ADS)

    Song, Wanyue; Zhao, Yang; Berman, Rebecca; Bodell, S. Yvonne; Fennig, Eryn; Ni, Yunhui; Papa, Jonathan C.; Yang, Tianyi; Yee, Anthony J.; Moore, Duncan T.; Bentley, Julie L.

    2017-11-01

    An extensive design study was conducted to find the best optimal power distribution and stop location for a 7.5x afocal zoom lens that controls the pupil walk and pupil location through zoom. This afocal zoom lens is one of the three components in a VIS-SWIR high-resolution microscope for inspection of photonic chips. The microscope consists of an afocal zoom, a nine-element objective and a tube lens and has diffraction limited performance with zero vignetting. In this case, the required change in object (sample) size and resolution is achieved by the magnification change of the afocal component. This creates strict requirements for both the entrance and exit pupil locations of the afocal zoom to couple the two sides successfully. The first phase of the design study looked at conventional four group zoom lenses with positive groups in the front and back and the stop at a fixed location outside the lens but resulted in significant pupil walk. The second phase of the design study focused on several promising unconventional four-group power distribution designs with moving stops that minimized pupil walk and had an acceptable pupil location (as determined by the objective and tube lens).

  14. A pilot cluster randomized controlled trial of structured goal-setting following stroke.

    PubMed

    Taylor, William J; Brown, Melanie; Levack, William; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark

    2012-04-01

    To determine the feasibility, the cluster design effect and the variance and minimal clinically important difference in the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke and who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimally important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
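
    The conclusion about impractically large samples follows from the standard design-effect inflation for cluster randomized trials: the individually randomized sample size is multiplied by 1 + (m - 1) x ICC for clusters of size m. The sketch below is a hypothetical Python illustration of that arithmetic, not the authors' calculation; the cluster size and the simple normal-approximation formula are assumptions, while the SD (19.6), minimally important difference (2.1) and ICC (0.40) are taken from the abstract.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm_individual(delta, sd, alpha=0.05, power=0.80):
        """Two-sample normal-approximation sample size per arm."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    def n_per_arm_cluster(delta, sd, m, icc, alpha=0.05, power=0.80):
        """Inflate the individually randomized size by the design effect 1 + (m-1)*ICC."""
        design_effect = 1 + (m - 1) * icc
        return ceil(n_per_arm_individual(delta, sd, alpha, power) * design_effect)

    # SD, minimally important difference and ICC from the abstract; cluster size is assumed.
    print(n_per_arm_cluster(delta=2.1, sd=19.6, m=10, icc=0.40))
    ```

    With these inputs the requirement runs to several thousand patients per arm, consistent with the authors' conclusion that a cluster randomized design is impractical here.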

  15. Collecting Quality Infrared Spectra from Microscopic Samples of Suspicious Powders in a Sealed Cell.

    PubMed

    Kammrath, Brooke W; Leary, Pauline E; Reffner, John A

    2017-03-01

    The infrared (IR) microspectroscopical analysis of samples within a sealed cell containing barium fluoride is a critical need when identifying toxic agents or suspicious powders of unidentified composition. The dispersive nature of barium fluoride is well understood, and experimental conditions can be easily adjusted during reflection-absorption measurements to account for differences in focus between the visible and IR regions of the spectrum. In most instances, a viable spectrum can be collected when using the sealed cell regardless of whether visible or IR focus is optimized. However, when IR focus is optimized, it is possible to collect useful data from even smaller samples. This is important when only a minimal sample is available for analysis or when it is important to minimize the risk of sample exposure. While the use of barium fluoride introduces dispersion effects that are unavoidable, it is possible to adjust instrument settings when collecting IR spectra in the reflection-absorption mode to compensate for dispersion and minimize the impact on the quality of the sample spectrum.

  16. Analysis of antibody aggregate content at extremely high concentrations using sedimentation velocity with a novel interference optics.

    PubMed

    Schilling, Kristian; Krause, Frank

    2015-01-01

    Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications altering their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficacy but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations (>50 g/L), mostly along with high amounts of sugar-based excipients. As a consequence, all routine aggregation analysis methods, such as size-exclusion chromatography, cannot monitor the size distribution under those original conditions, but only after dilution and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) allows samples to be analyzed directly in the product formulation, with both limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies up to 147 g/L employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data-fitting approach, we were able to provide a reliable estimation of the minimum content of soluble aggregates in the original formulations of two antibodies. Limitations of the procedure are discussed.

  17. An evaluation of the efficiency of minnow traps for estimating the abundance of minnows in desert spring systems

    USGS Publications Warehouse

    Peterson, James T.; Scheerer, Paul D.; Clements, Shaun

    2015-01-01

    Desert springs are sensitive aquatic ecosystems that pose unique challenges to natural resource managers and researchers. Among the most important of these is the need to accurately quantify population parameters for resident fish, particularly when the species are of special conservation concern. We evaluated the efficiency of baited minnow traps for estimating the abundance of two at-risk species, Foskett Speckled Dace Rhinichthys osculus ssp. and Borax Lake Chub Gila boraxobius, in desert spring systems in southeastern Oregon. We evaluated alternative sample designs using simulation and found that capture–recapture designs with four capture occasions would maximize the accuracy of estimates and minimize fish handling. We implemented the design and estimated capture and recapture probabilities using the Huggins closed-capture estimator. Trap capture probabilities averaged 23% and 26% for Foskett Speckled Dace and Borax Lake Chub, respectively, but differed substantially among sample locations, through time, and nonlinearly with fish body size. Recapture probabilities for Foskett Speckled Dace were, on average, 1.6 times greater than (first) capture probabilities, suggesting “trap-happy” behavior. Comparison of population estimates from the Huggins model with the commonly used Lincoln–Petersen estimator indicated that the latter underestimated Foskett Speckled Dace and Borax Lake Chub population size by 48% and by 20%, respectively. These biases were due to variability in capture and recapture probabilities. Simulation of fish monitoring that included the range of capture and recapture probabilities observed indicated that variability in capture and recapture probabilities in time negatively affected the ability to detect annual decreases by up to 20% in fish population size. Failure to account for variability in capture and recapture probabilities can lead to poor quality data and study inferences. Therefore, we recommend that fishery researchers and managers employ sample designs and estimators that can account for this variability.
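
    For reference, the Lincoln–Petersen estimate that the Huggins model is compared against requires only the number marked on the first occasion, the number examined on the second occasion, and the number of marked recaptures. The snippet below is a generic Python illustration with made-up counts (including Chapman's small-sample correction); it is not the study's data and does not implement the Huggins estimator.

    ```python
    def lincoln_petersen(n_marked, n_second, n_recaptured):
        """Classic two-occasion Lincoln-Petersen abundance estimate."""
        return n_marked * n_second / n_recaptured

    def chapman(n_marked, n_second, n_recaptured):
        """Chapman's small-sample correction of the Lincoln-Petersen estimator."""
        return (n_marked + 1) * (n_second + 1) / (n_recaptured + 1) - 1

    # Hypothetical counts for illustration only.
    print(lincoln_petersen(120, 150, 35), chapman(120, 150, 35))
    ```

    Both forms assume equal, independent capture probabilities; as the abstract shows, when recapture probabilities differ from first-capture probabilities (trap-happy behaviour), such estimates can be badly biased, which is why the multi-occasion Huggins estimator was preferred.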

  18. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
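
    To make the trade-off concrete, the sketch below computes a fixed-design sample size from a planned effect size and then re-estimates it at an interim look from the observed effect, capping the increase; this is the basic mechanic of a sample size re-estimation design. It is a hypothetical Python illustration using a simple normal-approximation formula, not the authors' optimality criterion or promising-zone rules, and all numbers are assumptions.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(delta, sd=1.0, alpha=0.05, power=0.80):
        """Two-arm normal-approximation sample size per arm."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    planned_delta, observed_delta = 0.5, 0.35        # assumed standardized effect sizes
    n_initial = n_per_arm(planned_delta)             # optimistic planning assumption
    n_final = min(n_per_arm(observed_delta), 4 * n_initial)   # re-estimate, with a cap
    print(n_initial, n_final)
    ```

    A group sequential design would instead start from a conservative (smaller) effect size and add interim stopping boundaries; choosing among such designs via an explicit optimality criterion is the paper's contribution, which this sketch does not attempt to reproduce.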

  19. [Experimental analysis of some determinants of inductive reasoning].

    PubMed

    Ono, K

    1989-02-01

    Three experiments were conducted from a behavioral perspective to investigate the determinants of inductive reasoning and to compare some methodological differences. The dependent variable used in these experiments was the threshold of confident response (TCR), which was defined as "the minimal sample size required to establish generalization from instances." Experiment 1 examined the effects of population size on inductive reasoning, and the results from 35 college students showed that the TCR varied in proportion to the logarithm of population size. In Experiment 2, 30 subjects showed distinct sensitivity to both prior probability and base-rate. The results from 70 subjects who participated in Experiment 3 showed that the TCR was affected by its consequences (risk condition), and especially, that humans were sensitive to a loss situation. These results demonstrate the sensitivity of humans to statistical variables in inductive reasoning. Furthermore, methodological comparison indicated that the experimentally observed values of TCR were close to, but not as precise as the optimal values predicted by Bayes' model. On the other hand, the subjective TCR estimated by subjects was highly discrepant from the observed TCR. These findings suggest that various aspects of inductive reasoning can be fruitfully investigated not only from subjective estimations such as probability likelihood but also from an objective behavioral perspective.

  20. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, the large sample sizes meant that even negligible effects could reach statistical significance. This suggests that researchers who could not obtain large effect sizes tended to use larger samples in order to obtain significant results.
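
    The kind of power calculation behind such a survey can be reproduced with a normal approximation: given a sample effect size d and a per-group n for a two-sample comparison, power is the probability that the test statistic clears the critical value. The function below is a generic textbook approximation in Python, not the authors' procedure, and the example numbers are assumptions.

    ```python
    from scipy.stats import norm

    def approx_power_two_sample(d, n_per_group, alpha=0.05):
        """Normal-approximation power of a two-sided, two-sample test of Cohen's d."""
        ncp = d * (n_per_group / 2) ** 0.5       # approximate noncentrality parameter
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

    # A large effect with a small sample vs. a tiny effect with a very large sample.
    print(approx_power_two_sample(0.8, 15), approx_power_two_sample(0.1, 2000))
    ```

    The two calls illustrate the abstract's two findings: small samples can leave even large effects underpowered, while very large samples make trivially small effects statistically detectable.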

  1. Characterization of silver nanoparticle-infused tissue adhesive for ophthalmic use.

    PubMed

    Yee, William; Selvaduray, Guna; Hawkins, Benjamin

    2015-03-01

    In this work, we demonstrate the successful enhancement of breaking strength, adhesive strength, and antibacterial efficacy of ophthalmic tissue adhesive (2-octyl cyanoacrylate) by doping with silver nanoparticles, and investigate the effects of nanoparticle size and concentration. Recent work has shown that silver nanoparticles are a viable antibacterial additive to many compounds, but their efficacy in tissue adhesives was heretofore untested. Our results indicate that doping the adhesive with silver nanoparticles reduced bacterial growth by an order of magnitude or more; nanoparticle size and concentration had minimal influence in the range tested. Tensile breaking strength of polymerized adhesive samples and adhesive strength between a T-shaped support and excised porcine sclera were measured using a universal testing machine according to ASTM (formerly American Society for Testing and Materials) standard techniques. Both tests showed significant improvement with the addition of silver nanoparticles. The enhanced mechanical strength and antibacterial efficacy of the doped adhesive supports the use of tissue adhesives as a viable supplement or alternative to sutures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. New mechanism for toughening ceramic materials. Final report, 15 March 1989-15 July 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, R.A.; Virkar, A.V.; Cross, L.E.

    Ferroelastic toughening was identified as a viable mechanism for toughening ceramics. Domain structure and domain switching were identified by x-ray diffraction, transmission optical microscopy, and transmission electron microscopy in zirconia, lead zirconate titanate and gadolinium molybdate. Switching in compression was observed at stresses greater than 600 MPa and at 400 MPa in tension for polycrystalline t'-zirconia. Domain switching contributes to toughness, as evidenced by data for monoclinic zirconia, t'-zirconia, PZT and GMO. The magnitude of toughening varied from 0.6 MPa·m^1/2 for GMO to 2-6 MPa·m^1/2 for zirconia. Polycrystalline monoclinic and t'-zirconias, which showed no transformation toughening, had toughness values similar to Y-TZP, which exhibits transformation toughening. Coarse-grained monoclinic and tetragonal (t') zirconia samples could be cooled to room temperature for mechanical property evaluation since fine domain size, not grain size, controlled transformation for t'-zirconia and minimized stress for m-ZrO2. LnAlO3, LnNbO4, and LnCrO3 were among the materials identified as high-temperature ferroelastics.

  3. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
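
    One widely taught shortcut of the kind this article advocates is Lehr's approximation: for a two-group comparison with 80% power and two-sided alpha = 0.05, roughly 16 divided by the squared standardized effect size per group. The snippet below is a generic illustration of that rule of thumb, not the article's own worked method.

    ```python
    from math import ceil

    def quick_n_per_group(d):
        """Lehr's rule of thumb: n per group for 80% power, two-sided alpha = 0.05."""
        return ceil(16 / d ** 2)

    # Small, medium and large standardized differences (Cohen's conventions).
    print([quick_n_per_group(d) for d in (0.2, 0.5, 0.8)])
    ```

    The results (400, 64 and 25 per group) are close to exact calculations, which is what makes the shortcut useful for quick planning.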

  4. Relationship between Device Size and Body Weight in Dogs with Patent Ductus Arteriosus Undergoing Amplatz Canine Duct Occluder Deployment.

    PubMed

    Wesselowski, S; Saunders, A B; Gordon, S G

    2017-09-01

    Deployment of the Amplatz Canine Duct Occluder (ACDO) is the preferred method for minimally invasive occlusion of patent ductus arteriosus (PDA) in dogs, with appropriate device sizing crucial to successful closure. Dogs of any body weight can be affected by PDA. To describe the range of ACDO sizes deployed in dogs of various body weights for improved procedural planning and inventory selection and to investigate for correlation between minimal ductal diameter (MDD) and body weight. A total of 152 dogs undergoing ACDO deployment between 2008 and 2016. Body weight, age, breed, sex, and MDD obtained by angiography (MDD-A), MDD obtained by transesophageal echocardiography (MDD-TEE), and ACDO size deployed were retrospectively evaluated. Correlation between body weight and ACDO size, MDD-A and MDD-TEE was poor, with R-squared values of 0.4, 0.36, and 0.3, respectively. Femoral artery diameter in the smallest population of dogs placed inherent limitations on the use of larger device sizes, with no limitations on the wide range of device sizes required as patient size increased. The most commonly used ACDO devices were size 3 through 6, representing 57% of the devices deployed within the entire study population. Patent ductus arteriosus anatomy varies on an individual basis, with poor correlation between MDD and body weight. Weight-based assumptions about expected ACDO device size for a given patient are not recommended. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  5. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  6. Cost-effectiveness of minimally invasive versus open transforaminal lumbar interbody fusion for degenerative spondylolisthesis associated low-back and leg pain over two years.

    PubMed

    Parker, Scott L; Adogwa, Owoicho; Bydon, Ali; Cheng, Joseph; McGirt, Matthew J

    2012-07-01

    Minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) for lumbar spondylolisthesis allows for surgical treatment of back and leg pain while theoretically minimizing tissue injury and accelerating overall recovery. Although the authors of previous studies have demonstrated shorter length of hospital stay and reduced blood loss with MIS versus open-TLIF, short- and long-term outcomes have been similar. No studies to date have evaluated the comprehensive health care costs associated with TLIF procedures or assessed the cost-utility of MIS- versus open-TLIF. As such, we set out to assess previously unstudied end points of health care cost and cost-utility associated with MIS- versus open-TLIF. Thirty patients undergoing MIS-TLIF (n=15) or open-TLIF (n=15) for grade I degenerative spondylolisthesis associated back and leg pain were prospectively studied. Total back-related medical resource use, missed work, and health-state values (quality-adjusted life years [QALYs], calculated from EQ-5D with U.S. valuation) were assessed after two-year follow-up. Two-year resource use was multiplied by unit costs on the basis of Medicare national allowable payment amounts (direct cost) and work-day losses were multiplied by the self-reported gross-of-tax wage rate (indirect cost). Difference in mean total cost per QALY gained for MIS- versus open-TLIF was assessed as incremental cost-effectiveness ratio (ICER: COSTmis-COSTopen/QALYmis-QALYopen). MIS versus open-TLIF cohorts were similar at baseline. By two years postoperatively, patients undergoing MIS- versus open-TLIF reported similar mean QALYs gained (0.50 vs. 0.41, P=0.17). Mean total two-year cost of MIS- and open-TLIF was $35,996 and $44,727, respectively. The $8,731 two-year cost savings of MIS- versus open-TLIF did not reach statistical significance (P=0.18) for this sample size. Although our limited sample size prevented statistical significance, MIS- versus open-TLIF was associated with reduced costs over two years while providing equivalent improvement in QALYs. MIS-TLIF allows patients to leave the hospital sooner, achieve narcotic independence sooner, and return to work sooner than open-TLIF. In our experience, MIS- versus open-TLIF is a cost reducing technology in the surgical treatment of medically refractory low-back and leg pain from grade I lumbar spondylolisthesis. Copyright © 2012 Elsevier Inc. All rights reserved.
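
    The ICER defined in the methods is a one-line computation. The snippet below simply plugs in the two-year means reported in the abstract (total costs of $35,996 for MIS-TLIF and $44,727 for open TLIF; QALY gains of 0.50 and 0.41) to show why MIS-TLIF is described as cost reducing with equivalent effectiveness; it is an illustration of the formula only, not a re-analysis.

    ```python
    def icer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    cost_mis, cost_open = 35996.0, 44727.0    # mean two-year total costs from the abstract
    qaly_mis, qaly_open = 0.50, 0.41          # mean two-year QALYs gained from the abstract
    # A negative ratio here means MIS-TLIF costs less while gaining at least as many QALYs.
    print(icer(cost_mis, cost_open, qaly_mis, qaly_open))
    ```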

  7. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them implemented as parallel model runs themselves. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. We present example results of scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements in a single point and infrequent satellite images that cover a whole lake, with the fully scalable VEnKF.
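
    The parallel structure described here, with many independent model runs coordinated by a lightweight analysis step, can be sketched generically. The code below is a plain stochastic ensemble Kalman filter analysis step in Python with a trivially "parallel" forecast loop; it illustrates the general ensemble idea only and is not the VEnKF/VEnKS algorithm, its variational smoothing step, or the COHERENS setup. All dimensions and the toy model are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forecast(x):
        """Stand-in for one independent model run (a separate parallel job in practice)."""
        return 0.95 * x + 0.1 * np.sin(x)

    def enkf_analysis(X_f, y, H, R):
        """Stochastic EnKF analysis; X_f is (state_dim, n_ens), y the observation vector."""
        n_ens = X_f.shape[1]
        A = X_f - X_f.mean(axis=1, keepdims=True)        # ensemble anomalies
        P_HT = A @ (H @ A).T / (n_ens - 1)               # cross-covariance P H^T
        S = (H @ A) @ (H @ A).T / (n_ens - 1) + R        # innovation covariance
        K = P_HT @ np.linalg.inv(S)                      # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X_f + K @ (Y - H @ X_f)                   # updated ensemble

    state_dim, n_ens = 10, 20
    X = rng.normal(size=(state_dim, n_ens))              # initial ensemble
    H = np.eye(3, state_dim)                             # observe the first three components
    R = 0.1 * np.eye(3)
    X = np.column_stack([forecast(X[:, i]) for i in range(n_ens)])  # independent forecasts
    X = enkf_analysis(X, y=np.ones(3), H=H, R=R)
    print(X.mean(axis=1)[:3])
    ```

    The forecast loop is embarrassingly parallel, mirroring the wrapper-code strategy described above: the only serial work is gathering the ensemble for the analysis and scattering the updated states back out.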

  8. Energy and time determine scaling in biological and computer designs

    PubMed Central

    Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-01-01

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524

  9. Energy and time determine scaling in biological and computer designs.

    PubMed

    Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-08-19

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information.This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).

  10. [Differential diagnosis of the MDCT features between lung adenocarcinoma preinvasive lesions and minimally invasive adenocarcinoma appearing as ground-glass nodules].

    PubMed

    Liu, Jia; Li, Wenwu; Huang, Yong; Mu, Dianbin; Yu, Haiying; Li, Shanshan

    2015-08-01

    The aim of this study was to retrospectively investigate the multi-detector computed tomography (MDCT) features of preinvasive lesions and minimally invasive adenocarcinoma (MIA) appearing as ground-glass nodules (GGNs), and to analyze their significance in differential diagnosis. The pathological data and MDCT images of 111 GGNs in 93 patients were reviewed and analyzed retrospectively to identify the CT features that differentiate preinvasive lesions from MIA and to evaluate their differentiating accuracy. Among the 93 patients included in the study, there were 27 cases with preinvasive lesions (38 GGNs) and 66 cases with MIA (73 GGNs). No statistically significant difference was observed in terms of gender, age or number of lesions between the two groups. There were significant differences (P<0.05) in lesion size, solid portion size, solid proportion, and the morphological characteristics of the lesion edge between preinvasive lesions and MIA. ROC curve analysis showed that the optimal cut-off value of lesion size for differentiating preinvasive lesions from MIA was 13.0 mm (sensitivity, 83.0%; specificity, 80.0%), that of solid portion size was 2.0 mm (sensitivity, 90.0%; specificity, 97.0%), and that of solid proportion was 12.0% (sensitivity, 88.0%; specificity, 97.0%). The analysis of CT morphological features showed that there were significant differences in terms of lesion nature (pGGO, mGGO) and the presence or absence of the lobulated and spiculated signs (P<0.05) between preinvasive lesions and MIA, but there were no significant differences in terms of the lesion edge or the presence or absence of the vacuole sign, bubble lucency and pleural retraction (P>0.05). Preinvasive lesions can be accurately distinguished from MIA by lesion size, solid portion size, solid proportion and the morphological characteristics of the lesion edge, which are therefore of significance in the differential diagnosis of preinvasive lesions and minimally invasive adenocarcinoma of the lung.

  11. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes, the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, referred to as samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial samples in these examinations need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for improving the reliability and reproducibility of CSM examinations.
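
    A generic way to see why small cell counts produce large relative errors is the textbook sample-size formula for estimating a mean to within a given relative error: n is approximately (z x CV / RE) squared, where CV is the coefficient of variation of cell area. The snippet below is a standard approximation offered for orientation only; it is not the Cells Analyzer method, whose details belong to the authors' lab, and the CV values are assumptions.

    ```python
    from math import ceil
    from scipy.stats import norm

    def cells_needed(cv, rel_error=0.05, reliability=0.95):
        """Approximate cell count to estimate mean cell area within a relative error."""
        z = norm.ppf(1 - (1 - reliability) / 2)
        return ceil((z * cv / rel_error) ** 2)

    # Illustrative coefficients of variation of endothelial cell area.
    print([cells_needed(cv) for cv in (0.25, 0.35, 0.45)])
    ```

    Counts of around 100 cells, as reported for the instrument software, can fall short of such targets when cell-area variability is high, which is the paper's central point.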

  13. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
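
    The clustering adjustment described here can be approximated with the standard variable-cluster-size design effect: with clusters of size one (singletons) and two (twin pairs), the inflation is roughly 1 + p x ICC, where p is the proportion of infants who are twins. The helper below is a generic Python approximation, not the authors' Excel/Shiny calculator, and its inputs are assumptions for the example.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm_with_twins(delta, sd, prop_twin_infants, icc, alpha=0.05, power=0.80):
        """Per-arm sample size inflated for clustering when some infants are twins."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_independent = 2 * (z * sd / delta) ** 2
        design_effect = 1 + prop_twin_infants * icc     # clusters of size 1 and 2
        return ceil(n_independent * design_effect)

    # Illustrative inputs: standardized effect 0.3, 20% of infants are twins, ICC = 0.7.
    print(n_per_arm_with_twins(delta=0.3, sd=1.0, prop_twin_infants=0.20, icc=0.7))
    ```

    A negative ICC, as estimated for some outcomes in the paper, gives a design effect below one and hence a slightly smaller required sample size than under independence.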

  14. Tangential Flow Ultrafiltration: A “Green” Method for the Size Selection and Concentration of Colloidal Silver Nanoparticles

    PubMed Central

    Anders, Catherine B.; Baker, Joshua D.; Stahler, Adam C.; Williams, Austin J.; Sisco, Jackie N.; Trefry, John C.; Wooley, Dawn P.; Pavel Sizemore, Ioana E.

    2012-01-01

    Nowadays, AgNPs are extensively used in the manufacture of consumer products,1 water disinfectants,2 therapeutics,1, 3 and biomedical devices4 due to their powerful antimicrobial properties.3-6 These nanoparticle applications are strongly influenced by the AgNP size and aggregation state. Many challenges exist in the controlled fabrication7 and size-based isolation4,8 of unfunctionalized, homogenous AgNPs that are free from chemically aggressive capping/stabilizing agents or organic solvents.7-13 Limitations emerge from the toxicity of reagents, high costs or reduced efficiency of the AgNP synthesis or isolation methods (e.g., centrifugation, size-dependent solubility, size-exclusion chromatography, etc.).10,14-18 To overcome this, we recently showed that TFU permits greater control over the size, concentration and aggregation state of Creighton AgNPs (300 ml of 15.3 μg ml-1 down to 10 ml of 198.7 μg ml-1) than conventional methods of isolation such as ultracentrifugation.19 TFU is a recirculation method commonly used for the weight-based isolation of proteins, viruses and cells.20,21 Briefly, the liquid sample is passed through a series of hollow fiber membranes with pore size ranging from 1,000 kD to 10 kD. Smaller suspended or dissolved constituents in the sample will pass through the porous barrier together with the solvent (filtrate), while the larger constituents are retained (retentate). TFU may be considered a "green" method as it neither damages the sample nor requires additional solvent to eliminate toxic excess reagents and byproducts. Furthermore, TFU may be applied to a large variety of nanoparticles as both hydrophobic and hydrophilic filters are available. The two main objectives of this study were: 1) to illustrate the experimental aspects of the TFU approach through an invited video experience and 2) to demonstrate the feasibility of the TFU method for larger volumes of colloidal nanoparticles and smaller volumes of retentate. First, unfuctionalized AgNPs (4 L, 15.2 μg ml-1) were synthesized using the well-established Creighton method22,23 by the reduction of AgNO3 with NaBH4. AgNP polydispersity was then minimized via a 3-step TFU using a 50-nm filter (460 cm2) to remove AgNPs and AgNP-aggregates larger than 50 nm, followed by two 100-kD (200 cm2 and 20 cm2) filters to concentrate the AgNPs. Representative samples were characterized using transmission electron microscopy, UV-Vis absorption spectrophotometry, Raman spectroscopy, and inductively coupled plasma optical emission spectroscopy. The final retentate consisted of highly concentrated (4 ml, 8,539.9 μg ml-1) yet lowly aggregated and homogeneous AgNPs of 1-20 nm in diameter. This corresponds to a silver concentration yield of about 62%. PMID:23070148

  15. Tangential flow ultrafiltration: a "green" method for the size selection and concentration of colloidal silver nanoparticles.

    PubMed

    Anders, Catherine B; Baker, Joshua D; Stahler, Adam C; Williams, Austin J; Sisco, Jackie N; Trefry, John C; Wooley, Dawn P; Pavel Sizemore, Ioana E

    2012-10-04

    Nowadays, AgNPs are extensively used in the manufacture of consumer products,(1) water disinfectants,(2) therapeutics,(1, 3) and biomedical devices(4) due to their powerful antimicrobial properties.(3-6) These nanoparticle applications are strongly influenced by the AgNP size and aggregation state. Many challenges exist in the controlled fabrication(7) and size-based isolation(4,8) of unfunctionalized, homogenous AgNPs that are free from chemically aggressive capping/stabilizing agents or organic solvents.(7-13) Limitations emerge from the toxicity of reagents, high costs or reduced efficiency of the AgNP synthesis or isolation methods (e.g., centrifugation, size-dependent solubility, size-exclusion chromatography, etc.).(10,14-18) To overcome this, we recently showed that TFU permits greater control over the size, concentration and aggregation state of Creighton AgNPs (300 ml of 15.3 μg ml(-1) down to 10 ml of 198.7 μg ml(-1)) than conventional methods of isolation such as ultracentrifugation.(19) TFU is a recirculation method commonly used for the weight-based isolation of proteins, viruses and cells.(20,21) Briefly, the liquid sample is passed through a series of hollow fiber membranes with pore size ranging from 1,000 kD to 10 kD. Smaller suspended or dissolved constituents in the sample will pass through the porous barrier together with the solvent (filtrate), while the larger constituents are retained (retentate). TFU may be considered a "green" method as it neither damages the sample nor requires additional solvent to eliminate toxic excess reagents and byproducts. Furthermore, TFU may be applied to a large variety of nanoparticles as both hydrophobic and hydrophilic filters are available. The two main objectives of this study were: 1) to illustrate the experimental aspects of the TFU approach through an invited video experience and 2) to demonstrate the feasibility of the TFU method for larger volumes of colloidal nanoparticles and smaller volumes of retentate. First, unfuctionalized AgNPs (4 L, 15.2 μg ml(-1)) were synthesized using the well-established Creighton method(22,23) by the reduction of AgNO3 with NaBH4. AgNP polydispersity was then minimized via a 3-step TFU using a 50-nm filter (460 cm(2)) to remove AgNPs and AgNP-aggregates larger than 50 nm, followed by two 100-kD (200 cm(2) and 20 cm(2)) filters to concentrate the AgNPs. Representative samples were characterized using transmission electron microscopy, UV-Vis absorption spectrophotometry, Raman spectroscopy, and inductively coupled plasma optical emission spectroscopy. The final retentate consisted of highly concentrated (4 ml, 8,539.9 μg ml(-1)) yet lowly aggregated and homogeneous AgNPs of 1-20 nm in diameter. This corresponds to a silver concentration yield of about 62%.

  16. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effect, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the product method is recommended for use in practice because its computational load is lower than that of the bootstrapping method. An R package has been developed for sample size determination with the product method in longitudinal mediation study designs.
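
    Of the three tests compared, Sobel's is the simplest to state: the mediated effect a·b divided by its first-order delta-method standard error. The function below is a generic, single-level illustration in Python with made-up path estimates; the paper's multilevel longitudinal setting is not reproduced here.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """Sobel z test for the indirect (mediated) effect a*b."""
        se_ab = sqrt(a**2 * se_b**2 + b**2 * se_a**2)   # delta-method standard error
        z = (a * b) / se_ab
        p = 2 * (1 - norm.cdf(abs(z)))
        return z, p

    # Hypothetical paths: X -> M (a) and M -> Y adjusting for X (b), with standard errors.
    print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))
    ```

    Because this standard error tends to be conservative in small samples, the product-distribution and bootstrap approaches usually reach 80% power with fewer subjects, which matches the simulation findings summarized above.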

  17. Irradiation treatment of minimally processed carrots for ensuring microbiological safety

    NASA Astrophysics Data System (ADS)

    Ashraf Chaudry, Muhammad; Bibi, Nizakat; Khan, Misal; Khan, Maazullah; Badshah, Amal; Jamil Qureshi, Muhammad

    2004-09-01

    Minimally processed fruits and vegetables are very common in developed countries and are gaining popularity in developing countries due to their convenience and freshness. However, minimal processing may result in undesirable changes in colour, taste and appearance due to the transfer of microbes from the skin to the flesh. Irradiation is a well-known technology for the elimination of microbial contamination. Food irradiation has been approved by 50 countries and is being applied commercially in the USA. The purpose of this study was to evaluate the effect of irradiation on the quality of minimally processed carrots. Fresh carrots were peeled, sliced and PE packaged. The samples were irradiated (0, 0.5, 1.0, 2.0, 2.5, 3.0 kGy) and stored at 5°C for 2 weeks. The samples were analyzed for hardness, organoleptic acceptance and microbial load on days 0, 7 and 15. The mean firmness of the control and all irradiated samples remained between 4.31 and 4.42 kg of force, showing no adverse effect of radiation dose. The effect of storage (2 weeks) was significant (P < 0.05), with values ranging between 4.28 and 4.39 kg of force. After 2 weeks of storage at 5°C, the total bacterial counts were 6.3×10(5) cfu/g for the non-irradiated samples and 3.0×10(2) cfu/g for the 0.5 kGy samples, with only a few colonies (>10) in all other irradiated samples (1.0, 2.0, 2.5 and 3.0 kGy). No coliforms or E. coli were detected in any of the samples (irradiated or control) immediately after irradiation or during the entire storage period. A dose of 2.0 kGy completely controlled the fungal and bacterial counts. The irradiated samples (2.0 kGy) were also sensorially acceptable.

  18. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
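
    The statistical point behind the chicken-soup analogy is that the margin of error of a simple random sample depends essentially on n alone; the finite population correction barely matters unless the sample is a sizeable fraction of the population. The snippet below is a standard illustration, not taken from the article, and the population figures are arbitrary.

    ```python
    from math import sqrt

    def margin_of_error(n, p=0.5, population=None, z=1.96):
        """95% margin of error for a proportion, with optional finite population correction."""
        moe = z * sqrt(p * (1 - p) / n)
        if population is not None:
            moe *= sqrt((population - n) / (population - 1))
        return moe

    # Same sample size, wildly different "pot" sizes: nearly identical accuracy.
    print(round(margin_of_error(1000, population=10_000), 4),
          round(margin_of_error(1000, population=100_000_000), 4))
    ```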

  19. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach to sizing a morphing aircraft based upon multi-level design optimization. For this effort, a morphing wing is one whose planform can make significant shape changes in flight: increasing wing area by 50% or more from the lowest possible area, changing sweep by 30 degrees or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables (common aircraft sizing variables) along with a set of "morphing limit" variables (which describe the maximum shape change for a particular morphing strategy). The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during its mission segment by changing the wing planform within the bounds set by the baseline and morphing-limit variables from the top-level problem.

  20. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency with which sample size calculations were reported, and the sample sizes themselves, were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study; another reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards for the design and reporting of diagnostic studies is warranted.
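
    A common way to plan such studies is to size for a target precision of sensitivity (and, analogously, specificity) and then inflate by the expected prevalence to obtain the total number of subjects. The function below is a generic textbook-style calculation offered only to illustrate why samples of the size surveyed can be inadequate; it is not drawn from the surveyed papers, and the expected sensitivity and precision are assumptions.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(expected_sens, precision, prevalence, alpha=0.05):
        """Total subjects needed to estimate sensitivity within +/- precision."""
        z = norm.ppf(1 - alpha / 2)
        n_diseased = z ** 2 * expected_sens * (1 - expected_sens) / precision ** 2
        return ceil(n_diseased / prevalence)

    # Expect 90% sensitivity, want +/- 5 percentage points, 50% prevalence (the survey's median).
    print(n_for_sensitivity(0.90, 0.05, 0.50))
    ```

    The result (a few hundred subjects) already exceeds the mean sample size reported in the survey, illustrating how easily diagnostic studies end up underpowered for precise accuracy estimates.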

  1. Comparative responsiveness and minimal clinically important differences for idiopathic ulnar impaction syndrome.

    PubMed

    Kim, Jae Kwang; Park, Eun Soo

    2013-05-01

    Patient-reported questionnaires have been widely used to predict symptom severity and functional disability in musculoskeletal disease. Importantly, questionnaires can detect clinical changes in patients; however, this impact has not been determined for ulnar impaction syndrome. We asked (1) which of Patient-Rated Wrist Evaluation (PRWE), DASH, and other physical measures was more responsive to clinical improvements, and (2) what was the minimal clinically important difference for the PRWE and DASH after ulnar shortening osteotomy for idiopathic ulnar impaction syndrome. All patients who underwent ulnar shortening osteotomy between March 2008 and February 2011 for idiopathic ulnar impaction syndrome were enrolled in this study. All patients completed the PRWE and DASH questionnaires, and all were evaluated for grip strength and wrist ROM, preoperatively and 12 months postoperatively. We compared the effect sizes observed by each of these instruments. Effect size is calculated by dividing the mean change in a score of each instrument during a specified interval by the standard deviation of the baseline score. In addition, patient-perceived overall improvement was used as the anchor to determine the minimal clinically important differences on the PRWE and DASH 12 months after surgery. The average score of each item except for wrist flexion and supination improved after surgery. The PRWE was more sensitive than the DASH or than physical measurements in detecting clinical changes. The effect sizes and standardized response means of the outcome measures were as follows: PRWE (1.51, 1.64), DASH (1.12, 1.24), grip strength (0.59, 0.68), wrist pronation (0.33, 0.41), and wrist extension (0.28, 0.36). Patient-perceived overall improvement and score changes of the PRWE and DASH correlated significantly. Minimal clinically important differences were 17 points (of a possible 100) for the PRWE and 13.5 for the DASH (also of 100), and minimal detectable changes were 7.7 points for the PRWE and 9.3 points for the DASH. Although the PRWE and DASH were highly sensitive to clinical changes, the PRWE was more sensitive in terms of detecting clinical changes after ulnar shortening osteotomy for idiopathic ulnar impaction syndrome. A minimal change of 17 PRWE points or 13.5 DASH points was necessary to achieve a benefit that patients perceived as clinically important. The minimal clinically important differences using these instruments were higher than the values produced by measurement errors.
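
    The responsiveness statistics used here are easy to compute from paired baseline and follow-up scores: the effect size divides the mean change by the baseline standard deviation, and the standardized response mean divides it by the standard deviation of the change scores. The helper below is a generic Python illustration with invented scores, not the study's data.

    ```python
    import numpy as np

    def responsiveness(baseline, followup):
        """Effect size (mean change / baseline SD) and standardized response mean."""
        baseline = np.asarray(baseline, dtype=float)
        change = np.asarray(followup, dtype=float) - baseline
        effect_size = change.mean() / baseline.std(ddof=1)
        srm = change.mean() / change.std(ddof=1)
        return effect_size, srm

    # Hypothetical PRWE-like scores (lower is better) before and 12 months after surgery;
    # negative results simply reflect that improvement lowers the score.
    pre = [70, 65, 80, 55, 72, 68, 75, 60]
    post = [30, 40, 35, 25, 38, 45, 28, 33]
    print(responsiveness(pre, post))
    ```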

  2. Numerical study of the process parameters in spark plasma sintering (sps)

    NASA Astrophysics Data System (ADS)

    Chowdhury, Redwan Jahid

    Spark plasma sintering (SPS) is one of the most widely used sintering techniques that utilizes pulsed direct current together with uniaxial pressure to consolidate a wide variety of materials. The unique mechanisms of SPS enable it to sinter powder compacts at a lower temperature and in a shorter time than the conventional hot pressing, hot isostatic pressing and vacuum sintering process. One of the limitations of SPS is the presence of temperature gradients inside the sample, which could result in non-uniform physical and microstructural properties. Detailed study of the temperature and current distributions inside the sintered sample is necessary to minimize the temperature gradients and achieve desired properties. In the present study, a coupled thermal-electric model was developed using finite element codes in ABAQUS software to investigate the temperature and current distributions inside the conductive and non-conductive samples. An integrated experimental-numerical methodology was implemented to determine the system contact resistances accurately. The developed sintering model was validated by a series of experiments, which showed good agreements with simulation results. The temperature distribution inside the sample depends on some process parameters such as sample and tool geometry, punch and die position, applied current and thermal insulation around the die. The role of these parameters on sample temperature distribution was systematically analyzed. The findings of this research could prove very useful for the reliable production of large size sintered samples with controlled and tailored properties.

  3. ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution

    PubMed Central

    Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra

    2016-01-01

    Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamic (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296

  4. Environmental perturbations can be detected through microwear texture analysis in two platyrrhine species from Brazilian Amazonia.

    PubMed

    Estalrrich, Almudena; Young, Mariel B; Teaford, Mark F; Ungar, Peter S

    2015-11-01

    Recent dental microwear studies have shown that fossil species differ from one another in texture attributes-both in terms of central tendency and dispersion. Most comparative studies used to interpret these results have relied on poorly provenienced museum samples that are not well-suited to consideration of within species variation in diet. Here we present a study of two species of platyrrhine monkeys, Alouatta belzebul (n = 60) and Sapajus apella (n = 28) from Pará State in the Brazilian Amazon in order to assess effects of habitat variation on microwear (each species was sampled from forests that differ in the degree of disturbance from highly disturbed to minimally disturbed). Results indicate that microwear texture values vary between habitats-more for the capuchins than the howler monkeys. This is consistent with the notion that diets of the more folivorous A. belzebul are less affected by habitat disturbance than those of the more frugivorous S. apella. It also suggests that microwear holds the potential to reflect comparatively subtle differences in within-species variation in fossil taxa if sample size and control over paleohabitat allow. © 2015 Wiley Periodicals, Inc.

  5. Association of deletion in the chromosomal 8p21.3-23 region with the development of invasive head & neck squamous cell carcinoma in Indian patients.

    PubMed

    Bhattacharya, N; Tripathi, A; Dasgupta, S; Sabbir, Md G; Roy, A; Sengupta, A; Roy, B; Roychowdhury, S; Panda, C K

    2003-08-01

    Deletions in chromosome 8 (chr.8) have been shown to be necessary for the development of head and neck squamous cell carcinoma (HNSCC). Attempts have been made in this study to detect the minimal deleted region in chr.8 associated with the development of HNSCC in Indian patients and to study the association of clinicopathological features with the progression of the disease. Deletion mapping of chr.8 was done in samples from 10 primary dysplastic lesions and 43 invasive squamous cell carcinomas from the head and neck region of Indian patients to detect allelic alterations (deletion or size alteration) using 12 highly polymorphic microsatellite markers. The association of the highly deleted region was correlated with the tumour node metastasis (TNM) stage, nodal involvement, tobacco habit and human papilloma virus (HPV) infection of the samples. A high frequency (49%) of loss of heterozygosity (LOH) was seen within a 13.12 megabase (Mb) region of chromosomal band 8p21.3-23 in the HNSCC samples, whereas the dysplastic samples did not show any allelic alterations in this region. The highest frequency (17%) of microsatellite size alterations (MA) was observed in the chr.8p22 region. Loss of the short arm or of a normal copy of chr.8 and rare bi-allelic alterations were seen in the stage II-IV tumours (939, 5184, 2772, 1319 and 598) irrespective of their primary sites. The highly deleted region did not show any significant association with any of the clinical parameters. However, HPV infection was significantly associated (P < 0.05) with the differentiation grades and overall allelic alterations (LOH/MA) of the samples. Our data indicate that the 13.12 Mb deleted region in chromosomal band 8p21.3-23 could harbour candidate tumour suppressor gene(s) (TSGs) associated with the progression and invasion of HNSCC tumours in Indian patients.

  6. Influence of Casting Defects on S- N Fatigue Behavior of Ni-Al Bronze

    NASA Astrophysics Data System (ADS)

    Sarkar, Aritra; Chakrabarti, Abhishek; Nagesha, A.; Saravanan, T.; Arunmuthu, K.; Sandhya, R.; Philip, John; Mathew, M. D.; Jayakumar, T.

    2015-02-01

    Nickel-aluminum bronze (NAB) alloys have been used extensively in marine applications such as propellers, couplings, pump casings, and pump impellers due to their good mechanical properties such as tensile strength, creep resistance, and corrosion resistance. However, there have been several instances of in-service failure of the alloy due to high cycle fatigue (HCF). The present paper aims at characterizing the casting defects in this alloy through X-ray radiography and X-ray computed tomography into distinct defect groups having particular defect size and location. HCF tests were carried out on each defect group of as-cast NAB at room temperature by varying the mean stress. A significant decrease in the HCF life was observed with an increase in the tensile mean stress, irrespective of the defect size. Further, a considerable drop in the HCF life was observed with an increase in the size of defects and proximity of the defects to the surface. However, the surface proximity indicated by location of the defect in the sample was seen to override the influence of defect size and maximum cyclic stress. This leads to huge scatter in S- N curve. For a detailed quantitative analysis of defect size and location, an empirical model is developed which was able to minimize the scatter to a significant extent. Further, a concept of critical distance is proposed, beyond which the defect would not have a deleterious consequence on the fatigue behavior. Such an approach was found to be suitable for generating S- N curves for cast NAB.

  7. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
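
    A minimal sketch of the fixed-precision sample size calculation implied by the two mean-variance models is shown below. The Taylor coefficients (a, b), the Iwao coefficients (alpha, beta) and the target precision are illustrative placeholders, not the values fitted to the tick data.

      # Hedged sketch: number of 10 m^2 quadrats needed for a fixed level of precision
      # (SE/mean), predicted from Taylor's power law (s^2 = a*m^b) and from Iwao's
      # mean-crowding regression (which implies s^2 = (alpha + 1)*m + (beta - 1)*m^2).
      # The coefficients below are illustrative, not those estimated in the study.
      import math

      def n_taylor(mean, a, b, precision=0.25, t=1.96):
          """Quadrats needed so that SE/mean <= precision under Taylor's power law."""
          return math.ceil(t**2 * a * mean**(b - 2) / precision**2)

      def n_iwao(mean, alpha, beta, precision=0.25, t=1.96):
          """Quadrats needed under Iwao's mean-crowding (patchiness) regression."""
          return math.ceil((t**2 / precision**2) * ((alpha + 1) / mean + (beta - 1)))

      if __name__ == "__main__":
          for m in (0.02, 0.05, 0.1, 0.5):   # ticks per 10 m^2
              print(m, n_taylor(m, a=2.0, b=1.3), n_iwao(m, alpha=0.5, beta=1.4))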

  8. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the wide distributions of read counts and dispersions across genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/. RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
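
    The scale of the single-gene calculation can be illustrated with a simple normal-approximation formula for negative binomial counts. This is not the RnaSeqSampleSize algorithm (which integrates over empirically estimated count and dispersion distributions and controls the false discovery rate); the fold change, mean count and dispersion below are assumptions for illustration.

      # Hedged sketch: per-group sample size for a single gene under a negative binomial
      # model, using the normal approximation
      #   n = (z_alpha + z_beta)^2 * 2 * (1/mu + phi) / ln(FC)^2,
      # where mu is the mean count per sample, phi the dispersion and FC the fold change.
      import math
      from statistics import NormalDist

      def n_per_group(fold_change, mu, phi, alpha=0.05, power=0.8):
          z = NormalDist().inv_cdf
          za, zb = z(1 - alpha / 2), z(power)
          return math.ceil((za + zb) ** 2 * 2 * (1 / mu + phi) / math.log(fold_change) ** 2)

      if __name__ == "__main__":
          # Illustrative inputs only: 2-fold change, mean count 20, dispersion 0.2.
          print(n_per_group(2.0, mu=20, phi=0.2))  # ~9 samples per group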

  9. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures that the sample size is sufficient to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
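
    A minimal sketch of the asymptotic unconditional McNemar sample size formula, expressed in terms of the two discordant-cell proportions of the hypothesized 2 × 2 table, is given below; the proportions used are illustrative, not taken from the paper.

      # Hedged sketch: number of pairs for the asymptotic unconditional McNemar test,
      # given the hypothesized discordant-cell proportions p10 and p01 of the 2 x 2 table.
      import math
      from statistics import NormalDist

      def mcnemar_pairs(p10, p01, alpha=0.05, power=0.8):
          z = NormalDist().inv_cdf
          za, zb = z(1 - alpha / 2), z(power)
          diff = p10 - p01          # difference in marginal proportions
          disc = p10 + p01          # total discordant proportion
          n = (za * math.sqrt(disc) + zb * math.sqrt(disc - diff ** 2)) ** 2 / diff ** 2
          return math.ceil(n)

      if __name__ == "__main__":
          print(mcnemar_pairs(0.25, 0.10))  # ~120 pairs for these illustrative inputs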

  10. Photopolymerization Synthesis of Magnetic Nanoparticle Embedded Nanogels for Targeted Biotherapeutic Delivery

    NASA Astrophysics Data System (ADS)

    Denmark, Daniel J.

    Conventional therapeutic techniques treat the patient by delivering a biotherapeutic to the entire body rather than the target tissue. In the case of chemotherapy, the biotherapeutic is a drug that kills healthy and diseased cells indiscriminately, which can lead to undesirable side effects. With targeted delivery, biotherapeutics can be delivered directly to the diseased tissue, significantly reducing exposure to otherwise healthy tissue. Typical composite delivery devices are minimally composed of a stimuli-responsive polymer, such as poly(N-isopropylacrylamide), allowing for triggered release when heated beyond approximately 32 °C, and magnetic nanoparticles, which enable targeting and provide a stimulus mechanism upon alternating magnetic field heating. Although more traditional methods, such as emulsion polymerization, have been used to realize these composite devices, the synthesis is problematic. Poisonous surfactants that are necessary to prevent agglomeration must be removed from the finished polymer, increasing the time and cost of the process. This study seeks to further explore non-toxic, biocompatible, non-residual, photochemical methods of creating stimuli-responsive nanogels to advance the targeted biotherapeutic delivery field. Ultraviolet photopolymerization promises to be more efficient, while ensuring safety by using only biocompatible substances. The reactants selected for nanogel fabrication were N-isopropylacrylamide as the monomer, methylene bisacrylamide as the cross-linker, and Irgacure 2959 as the ultraviolet photo-initiator. The superparamagnetic nanoparticles for encapsulation were approximately 10 nm in diameter and composed of magnetite to enable remote delivery and enhanced triggered release properties. Early investigations into the interactions of the polymer and nanoparticles employ a pioneering experimental setup, which allows for coincident turbidimetry and alternating magnetic field heating of an aqueous solution containing both materials. Herein, a low-cost, scalable, and rapid custom ultraviolet photo-reactor with an in-situ spectroscopic monitoring system is used to observe the synthesis as the sample undergoes photopolymerization. This method also allows in-situ encapsulation of the magnetic nanoparticles, simplifying the process. Size characterization of the resulting nanogels was performed by Transmission Electron Microscopy, revealing size-tunable nanogel spheres between 50 and 800 nm obtained by varying the ratio and concentration of the reactants. Nano-Tracking Analysis indicates that the nanogels exhibit minimal agglomeration and provides a temperature-dependent particle size distribution. Optical characterization utilized Fourier Transform Infrared and Ultraviolet Spectroscopy to confirm successful polymerization. When samples of the nanogels encapsulating magnetic nanoparticles were subjected to an alternating magnetic field, a temperature increase was observed, indicating that triggered release is possible. Furthermore, a model based on linear response theory that innovatively utilizes size distribution data is presented to explain the alternating magnetic field heating results. The results presented here will advance targeted biotherapeutic delivery and have a wide range of applications in medical sciences such as oncology, gene delivery, cardiology and endocrinology.

  11. Pore space connectivity and porosity using CT scans of tropical soils

    NASA Astrophysics Data System (ADS)

    Previatello da Silva, Livia; de Jong Van Lier, Quirijn

    2015-04-01

    Microtomography has been used in soil physics for characterization; it allows non-destructive, high-resolution analysis, yielding a three-dimensional representation of pore space and fluid distribution. It also allows quantitative characterization of pore space, including pore size distribution, shape, connectivity, porosity, tortuosity, orientation and preferential pathways, and makes it possible to predict the saturated hydraulic conductivity using Darcy's equation and a modified Poiseuille's equation. Connectivity of pore space is an important topological property of soil. Together with porosity and pore-size distribution, it governs transport of water, solutes and gases. In order to quantify and analyze the pore space (connectivity of pores and porosity) of four tropical soils from Brazil with different textures and land uses, undisturbed samples were collected in São Paulo State, Brazil, using PVC rings 7.5 cm in height and 7.5 cm in diameter, at a depth of 10-30 cm from the soil surface. Image acquisition was performed with a Nikon XT H 225 CT system, a dual reflection-transmission target system including a 225 kV, 225 W high-performance X-ray source equipped with a reflection target with a spot size of 3 μm, combined with a nano-focus transmission module with a spot size of 1 μm. The images were acquired at a specific energy level for each soil type, according to soil texture, and external copper filters were used to attenuate low-frequency X-ray photons and allow the passage of a more monoenergetic beam. This step was performed to minimize artifacts such as beam hardening that may occur during attenuation at interfaces between materials of different densities within the same sample. Images were processed and analyzed using ImageJ/Fiji software. Retention curves (tension table and pressure chamber methods), saturated hydraulic conductivity (constant-head permeameter), granulometry, soil density and particle density were also measured in the laboratory, and the results were compared with the image analyses.

  12. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
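
    The elements that a replicable calculation must report can be made concrete with a minimal sketch for a continuous outcome: the treatment effect to be detected, the variability estimate, the type I error rate and the power. The numbers below are illustrative, not drawn from any reviewed trial.

      # Hedged sketch: standard two-arm sample size for a continuous outcome, showing the
      # four elements needed to replicate the calculation (effect, SD, alpha, power).
      import math
      from statistics import NormalDist

      def n_per_arm(delta, sd, alpha=0.05, power=0.9):
          z = NormalDist().inv_cdf
          za, zb = z(1 - alpha / 2), z(power)
          return math.ceil(2 * (sd / delta) ** 2 * (za + zb) ** 2)

      if __name__ == "__main__":
          # e.g. detect a 1.0-point difference on a pain scale with SD 2.5
          print(n_per_arm(delta=1.0, sd=2.5))  # about 132 participants per arm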

  13. Curation and Analysis of Samples from Comet Wild-2 Returned by NASA's Stardust Mission

    NASA Technical Reports Server (NTRS)

    Nakamura-Messenger, Keiko; Walker, Robert M.

    2015-01-01

    The NASA Stardust mission returned the first direct samples of a cometary coma from comet 81P/Wild-2 in 2006. Intact capture of samples encountered at 6 km/s was enabled by the use of aerogel, an ultralow-density silica polymer. Approximately 1000 particles were captured, with micron and submicron materials distributed along mm-scale tracks. This sample collection method and the fine scale of the samples posed new challenges to the curation and cosmochemistry communities. Sample curation involved extensive, detailed photo-documentation and delicate micro-surgery to remove particles without loss from the aerogel tracks. This work had to be performed in a highly clean facility to minimize the potential for contamination. JSC Curation provided samples ranging from entire tracks to micrometer-sized particles to external investigators. From the analysis perspective, distinguishing cometary materials from aerogel and identifying potential alteration from the capture process were essential. Here, transmission electron microscopy (TEM) proved to be the key technique that would make this possible. Based on TEM work by ourselves and others, a variety of surprising findings were reported, such as the observation of high-temperature phases resembling those found in meteorites, rare intact presolar grains, and scarce organic grains and submicrometer silicates. An important lesson from this experience is that curation and analysis teams must work closely together to understand the requirements and challenges of each task. The Stardust mission has also laid an important foundation for future sample returns, including OSIRIS-REx and Hayabusa II, and for future cometary nucleus sample return missions.

  14. Improving small-angle X-ray scattering data for structural analyses of the RNA world

    PubMed Central

    Rambo, Robert P.; Tainer, John A.

    2010-01-01

    Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957

  15. Acupuncture injection for field amplified sample stacking and glass microchip-based capillary gel electrophoresis.

    PubMed

    Ha, Ji Won; Hahn, Jong Hoon

    2017-02-01

    Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs in PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimization of sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs into any desired position of a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in the most simplified straight channel by applying a single potential. We achieved increased electropherogram signals with sample stacking. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage-control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.
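
    The vector-covering idea behind the reduction can be sketched as follows: if forbidding one first-met bad marking (FBM) also forbids every marking that is componentwise no smaller on the activity places, only the componentwise-minimal FBMs need to be retained. The code below is a generic illustration of that covering relation, not the paper's algorithm, and the marking vectors are invented.

      # Hedged sketch of the vector-covering reduction: an FBM is redundant if another FBM
      # is componentwise <= it on the activity places, because any constraint forbidding
      # the smaller marking also forbids the larger one. Tuples are made-up token counts.
      def covers(m_small, m_big):
          """True if m_small is componentwise <= m_big."""
          return all(a <= b for a, b in zip(m_small, m_big))

      def minimal_fbms(fbms):
          """Keep only the componentwise-minimal FBMs (the reduced set to forbid)."""
          unique = sorted(set(fbms))
          return [m for m in unique
                  if not any(other != m and covers(other, m) for other in unique)]

      if __name__ == "__main__":
          fbms = [(1, 0, 2, 1), (1, 0, 1, 1), (2, 1, 1, 1), (1, 0, 1, 1)]
          print(minimal_fbms(fbms))  # [(1, 0, 1, 1)] covers the other markings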

  17. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
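
    A toy illustration of the decision-theoretic framing is sketched below: the per-arm trial size n is chosen by grid search to balance a per-patient trial cost against the gain accrued by the remaining population when the trial succeeds. It uses an assumed fixed effect size, cost and gain, and it does not reproduce the paper's utility function or its O(N^(1/2)) asymptotics.

      # Hedged toy sketch of decision-theoretic trial sizing: choose n (per arm) to
      # maximize U(n) = -cost*2n + gain*(N - 2n)*power(n). Illustration only; not the
      # paper's utility function, and all numeric values are assumptions.
      import math
      from statistics import NormalDist

      _ND = NormalDist()

      def power(n_per_arm, delta=0.25, sd=1.0, alpha=0.05):
          """Approximate power of a two-arm z-test for a mean difference delta."""
          return _ND.cdf(delta / (sd * math.sqrt(2.0 / n_per_arm)) - _ND.inv_cdf(1 - alpha / 2))

      def optimal_n(N, cost=0.5, gain=1.0):
          """Grid search for the per-arm size maximizing expected utility for population size N."""
          utility = lambda n: -cost * 2 * n + gain * (N - 2 * n) * power(n)
          return max(range(2, N // 2), key=utility)

      if __name__ == "__main__":
          for N in (500, 5_000, 50_000):
              print(N, optimal_n(N))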

  18. Feedback Augmented Sub-Ranging (FASR) Quantizer

    NASA Technical Reports Server (NTRS)

    Guilligan, Gerard

    2012-01-01

    This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched-capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched-capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage giving a higher closed-loop bandwidth compared to the prior art, and (3) a reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.

  19. High resolution microphotonic needle for endoscopic imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tadayon, Mohammad Amin; Mohanty, Aseema; Roberts, Samantha P.; Barbosa, Felippe; Lipson, Michal

    2017-02-01

    GRIN (graded-index) lenses have revolutionized microendoscopy, enabling deep-tissue imaging with high resolution. The challenges of traditional GRIN lenses are their large size (when compared with the field of view) and their limited resolution, a consequence of the relatively low NA of standard graded-index lenses. Here we introduce a novel micro-needle platform for endoscopy with much higher resolution than traditional GRIN lenses and a field of view (FOV) that corresponds to the whole cross section of the needle. The platform is based on a polymeric (SU-8) waveguide integrated with a microlens, microfabricated on a silicon substrate using a unique molding process. Due to the high refractive index of the material, the NA of the needle is much higher than that of traditional GRIN lenses. We tested the probe in a fluorescent dye solution (19.6 µM Alexa Fluor 647) and measured a numerical aperture of 0.25, a focal length of about 175 µm and a minimal spot size of about 1.6 µm. We show that the platform can image a sample with a field of view corresponding to the cross-sectional area of the waveguide (80x100 µm2). The waveguide size can in principle be modified to vary the size of the imaging field of view. This demonstration, combined with our previous work demonstrating our ability to implant the high-NA needle in a live animal, shows that the proposed system can be used for deep-tissue imaging with very high resolution and a large field of view.
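
    The reported spot size is consistent with a simple diffraction-limited estimate, sketched below using the Rayleigh criterion; the emission wavelength assumed (near the Alexa Fluor 647 band) is our assumption rather than a value stated above.

      # Hedged check: Rayleigh-criterion diffraction-limited spot size, d = 0.61*lambda/NA.
      # The wavelength (~0.65 um, near the Alexa Fluor 647 band) is an assumption.
      def rayleigh_spot_um(wavelength_um, na):
          return 0.61 * wavelength_um / na

      if __name__ == "__main__":
          print(round(rayleigh_spot_um(0.65, 0.25), 2))  # ~1.59 um, close to the measured ~1.6 um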

  20. Note: Four-port microfluidic flow-cell with instant sample switching

    NASA Astrophysics Data System (ADS)

    MacGriff, Christopher A.; Wang, Shaopeng; Tao, Nongjian

    2013-10-01

    A simple device for high-speed microfluidic delivery of liquid samples to a surface plasmon resonance (SPR) sensor surface is presented. The delivery platform comprises a four-port microfluidic cell, in which two ports serve as inlets for buffer and sample solutions, respectively, and a high-speed selector valve that controls the alternate opening and closing of the two outlet ports. The time scale of buffer/sample switching (or sample injection rise and fall time) is on the order of milliseconds, thereby minimizing the opportunity for sample plug dispersion. The high rates of mass transport to and from the central microfluidic sensing region allow for SPR-based kinetic analysis of binding events with dissociation rate constants (kd) up to 130 s-1. The required sample volume is only 1 μL, allowing for minimal sample consumption during high-speed kinetic binding measurements.

  1. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and the calculation might therefore not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the sensitivity and specificity test formulations using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches for using the tables are also discussed. PMID:27891446
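
    A minimal sketch of the kind of calculation that underlies such tables is shown below, using the standard one-proportion normal-approximation test for sensitivity and then inflating by disease prevalence. It mirrors the type I error, power and effect size inputs, but it is not necessarily the exact PASS formulation used to generate the published tables, and all numbers are illustrative.

      # Hedged sketch: sample size for showing that sensitivity exceeds a reference value
      # p0 when the anticipated value is p1, using the one-proportion normal approximation,
      # then inflated by prevalence so that enough diseased subjects are enrolled.
      import math
      from statistics import NormalDist

      def n_one_proportion(p0, p1, alpha=0.05, power=0.8):
          z = NormalDist().inv_cdf
          za, zb = z(1 - alpha / 2), z(power)
          num = (za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))) ** 2
          return num / (p1 - p0) ** 2

      def n_total_for_sensitivity(p0, p1, prevalence, alpha=0.05, power=0.8):
          return math.ceil(n_one_proportion(p0, p1, alpha, power) / prevalence)

      if __name__ == "__main__":
          # e.g. show sensitivity is above 0.70 when the true value is 0.85, prevalence 0.30
          print(n_total_for_sensitivity(0.70, 0.85, 0.30))  # ~213 total subjects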

  2. Biophysical considerations for optimizing energy delivery during Erbium:YAG laser vitreoretinal surgery

    NASA Astrophysics Data System (ADS)

    Berger, Jeffrey W.; Bochow, Thomas W.; Kim, Rosa Y.; D'Amico, Donald J.

    1996-05-01

    Er:YAG laser-mediated tissue disruption and removal results from both direct ablation and the acousto-mechanical sequelae of explosive vaporization of the tissue water. We investigated the scaling laws for photoablative and photodisruptive interactions, and interpreted these results with a view to optimizing energy delivery for vitreoretinal surgical maneuvers. Experimental studies were performed with a free-running Er:YAG laser (100 - 300 microseconds FWHM, 0.5 - 20 mJ, 1 - 30 Hz). Energy was delivered by fiberoptic to a custom-made handpiece with a 75 - 600 micrometer quartz tip, and applied to excised, en bloc samples of bovine vitreous or model systems of saline solution. Sample temperature was measured with 33 gauge copper-constantan thermocouples. Expansion and collapse of the bubble following explosive vaporization of tissue water were optically detected. The bubble size was calculated from the period of the bubble oscillation and known material properties. A model for bubble expansion is presented based on energy principles and adiabatic gas expansion. Pressure transients associated with bubble dynamics are estimated from available experimental and analytical data. The temperature rise in vitreous and model systems depends on the pulse energy and repetition rate, but is independent of the probe-tip diameter at constant laser power; at moderate repetition rates, the temperature rise depends only on the total energy (mJ) delivered. The maximum bubble diameter increases as the cube root of the pulse energy, with a reverberation period of 110 microseconds and a maximum bubble diameter of 1.2 mm following delivery of 1 mJ to saline through a 100 micrometer tip. Our modeling studies generate predictions similar to the experimental data and predict that the maximum bubble diameter increases as the cube root of the pulse energy. We demonstrate that tissue ablation depends on radiant exposure (J/cm2), while temperature rise, bubble size, and pressure depend on total pulse energy. Further, we show that mechanical injury should be minimized by delivering low pulse energy, through small-diameter probe tips, at high repetition rates. These results allow for optimization strategies relevant to achieving vitreoretinal surgical goals while minimizing the potential for unintentional injury.
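
    The quoted oscillation period and maximum bubble diameter are mutually consistent under a simple Rayleigh-collapse estimate, sketched below; the water density and ambient collapse pressure used are assumptions, not parameters reported above.

      # Hedged sketch: Rayleigh collapse relates the maximum bubble radius R_max to the
      # collapse time t_c = 0.915 * R_max * sqrt(rho / dP). Treating the measured
      # reverberation period as one growth plus one collapse (T ~ 2*t_c) gives R_max.
      import math

      RHO = 1000.0     # kg/m^3, water (assumed)
      DP = 101325.0    # Pa, ambient pressure driving collapse (assumed)

      def max_diameter_mm(period_s):
          t_c = period_s / 2.0
          r_max = t_c / (0.915 * math.sqrt(RHO / DP))
          return 2.0 * r_max * 1e3

      if __name__ == "__main__":
          print(round(max_diameter_mm(110e-6), 2))  # ~1.2 mm for the reported 110 us period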

  3. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
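
    The practical consequence of optimistic assumptions can be sketched with a small example: plan a two-arm trial under assumed effect and SD, then recompute the achieved power with less favorable observed values. All numbers are illustrative, not taken from the reviewed trials.

      # Hedged sketch: power actually achieved when the observed effect size and SD differ
      # from the values assumed in the sample size calculation (illustrative numbers only).
      import math
      from statistics import NormalDist

      _ND = NormalDist()

      def n_per_arm(delta, sd, alpha=0.05, power=0.9):
          za, zb = _ND.inv_cdf(1 - alpha / 2), _ND.inv_cdf(power)
          return math.ceil(2 * (sd / delta) ** 2 * (za + zb) ** 2)

      def achieved_power(n, delta, sd, alpha=0.05):
          return _ND.cdf(delta / (sd * math.sqrt(2.0 / n)) - _ND.inv_cdf(1 - alpha / 2))

      if __name__ == "__main__":
          n = n_per_arm(delta=1.0, sd=2.5)                           # planned under optimistic assumptions
          print(n, round(achieved_power(n, delta=0.8, sd=2.8), 2))   # ~0.64 instead of the planned 0.90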

  4. Optimum target sizes for a sequential sawing process

    Treesearch

    H. Dean Claxton

    1972-01-01

    A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...

  5. Microbial Evaluation of Fresh, Minimally-processed Vegetables and Bagged Sprouts from Chain Supermarkets

    PubMed Central

    Jeddi, Maryam Zare; Yunesian, Masud; Gorji, Mohamad Es'haghi; Noori, Negin; Pourmand, Mohammad Reza

    2014-01-01

    ABSTRACT The aim of this study was to evaluate the bacterial and fungal quality of minimally-processed vegetables (MPV) and sprouts. A total of 116 samples of fresh-cut vegetables, ready-to-eat salads, and mung bean and wheat sprouts were randomly collected and analyzed. The load of aerobic mesophilic bacteria was minimum and maximum in the fresh-cut vegetables and fresh mung bean sprouts respectively, corresponding to populations of 5.3 and 8.5 log CFU/g. E. coli O157:H7 was found to be absent in all samples; however,  other E. coli strains were detected in 21 samples (18.1%), and Salmonella spp. were found in one mung bean (3.1%) and one ready-to-eat salad sample (5%). Yeasts were the predominant organisms and were found in 100% of the samples. Geotrichum, Fusarium, and Penicillium spp. were the most prevalent molds in mung sprouts while Cladosporium and Penicillium spp. were most frequently found in ready-to-eat salad samples. According to results from the present study, effective control measures should be implemented to minimize the microbiological contamination of fresh produce sold in Tehran, Iran. PMID:25395902

  6. Microbial evaluation of fresh, minimally-processed vegetables and bagged sprouts from chain supermarkets.

    PubMed

    Jeddi, Maryam Zare; Yunesian, Masud; Gorji, Mohamad Es'haghi; Noori, Negin; Pourmand, Mohammad Reza; Khaniki, Gholam Reza Jahed

    2014-09-01

    The aim of this study was to evaluate the bacterial and fungal quality of minimally-processed vegetables (MPV) and sprouts. A total of 116 samples of fresh-cut vegetables, ready-to-eat salads, and mung bean and wheat sprouts were randomly collected and analyzed. The load of aerobic mesophilic bacteria was minimum and maximum in the fresh-cut vegetables and fresh mung bean sprouts respectively, corresponding to populations of 5.3 and 8.5 log CFU/g. E. coli O157:H7 was found to be absent in all samples; however,  other E. coli strains were detected in 21 samples (18.1%), and Salmonella spp. were found in one mung bean (3.1%) and one ready-to-eat salad sample (5%). Yeasts were the predominant organisms and were found in 100% of the samples. Geotrichum, Fusarium, and Penicillium spp. were the most prevalent molds in mung sprouts while Cladosporium and Penicillium spp. were most frequently found in ready-to-eat salad samples. According to results from the present study, effective control measures should be implemented to minimize the microbiological contamination of fresh produce sold in Tehran, Iran.

  7. Minimizing target interference in PK immunoassays: new approaches for low-pH-sample treatment.

    PubMed

    Partridge, Michael A; Pham, John; Dziadiv, Olena; Luong, Onson; Rafique, Ashique; Sumner, Giane; Torri, Albert

    2013-08-01

    Quantitating total levels of monoclonal antibody (mAb) biotherapeutics in serum using ELISA may be hindered by soluble targets. We developed two low-pH-sample-pretreatment techniques to minimize target interference. The first procedure involves sample pretreatment at pH <3.0 before neutralization and analysis in a target capture ELISA. Careful monitoring of acidification time is required to minimize potential impact on mAb detection. The second approach involves sample dilution into mild acid (pH ∼4.5) before transferring to an anti-human capture-antibody-coated plate without neutralization. Analysis of target-drug and drug-capture antibody interactions at pH 4.5 indicated that the capture antibody binds to the drug, while the drug and the target were dissociated. Using these procedures, total biotherapeutic levels were accurately measured when soluble target was >30-fold molar excess. These techniques provide alternatives for quantitating mAb biotherapeutics in the presence of a target when standard acid-dissociation procedures are ineffective.

  8. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, index concentrations, and incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  9. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N greater than 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
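
    The algebraic sample size adjustment examined here is commonly described as rescaling a chi-square fit statistic in proportion to a nominated sample size while keeping its degrees of freedom; the sketch below illustrates that rescaling and why it damps the sensitivity of the statistic to very large N. This is an illustration of the idea only, not the RUMM implementation.

      # Hedged sketch of an algebraic sample size adjustment for a chi-square item-fit
      # statistic: X2_adj = X2 * (n_adjusted / n_observed), with unchanged degrees of
      # freedom. Illustration of the idea discussed above, not the RUMM source code.
      def adjust_chi_square(x2, n_observed, n_adjusted):
          return x2 * (n_adjusted / n_observed)

      if __name__ == "__main__":
          # Illustrative: an item chi-square of 40 obtained with N = 2500,
          # rescaled downward to a nominated N = 500.
          print(adjust_chi_square(40.0, 2500, 500))  # 8.0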

  10. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector and after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions was 0.501 and 0.507. Conclusion: A hexagonal pixel array photon-counting CdTe detector, after resampling to square pixels, provides high-resolution imaging suitable for fluoroscopy.

  11. Mini-Membrane Evaporator for Contingency Spacesuit Cooling

    NASA Technical Reports Server (NTRS)

    Makinen, Janice V.; Bue, Grant C.; Campbell, Colin; Craft, Jesse; Lynch, William; Wilkes, Robert; Vogel, Matthew

    2014-01-01

    The next-generation Advanced Extravehicular Mobility Unit (AEMU) Portable Life Support System (PLSS) is integrating a number of new technologies to improve reliability and functionality. One of these improvements is the development of the Auxiliary Cooling Loop (ACL) for contingency crewmember cooling. The ACL is a completely redundant, independent cooling system that consists of a small evaporative cooler, the Mini Membrane Evaporator (Mini-ME), an independent pump, an independent feedwater assembly and an independent Liquid Cooling Garment (LCG). The Mini-ME utilizes the same hollow fiber technology featured in the full-sized AEMU PLSS cooling device, the Spacesuit Water Membrane Evaporator (SWME), but Mini-ME occupies only 25% of the volume of SWME, thereby providing only the necessary crewmember cooling in a contingency situation. The ACL provides a number of benefits compared with the current EMU PLSS contingency cooling technology, which relies upon a Secondary Oxygen Vessel (SOV): contingency crewmember cooling can be provided for a longer period of time; more contingency situations can be accounted for; there is no reliance on the SOV for contingency cooling, allowing a reduction in SOV size and pressure; and the ACL can be recharged, allowing the AEMU PLSS to be reused even after a contingency event. The first iteration of Mini-ME was developed and tested in-house. Mini-ME is currently packaged in AEMU PLSS 2.0, where it is being tested in environments and situations that are representative of potential future Extravehicular Activities (EVAs). The second iteration of Mini-ME, known as Mini-ME2, is currently being developed to offer more heat rejection capability. The development of this contingency evaporative cooling system will contribute to a more robust and comprehensive AEMU PLSS.

  12. Morphology and nano-structure analysis of soot particles sampled from high pressure diesel jet flames under diesel-like conditions

    NASA Astrophysics Data System (ADS)

    Jiang, Hao; Li, Tie; Wang, Yifeng; He, Pengfei

    2018-04-01

    Soot particles emitted from diesel engines have a significant impact on the atmospheric environment. Detailed understanding of soot formation and oxidation processes is helpful for reducing the pollution of soot particles, which requires information such as the size and nano-structure parameters of the soot primary particles sampled in a high-temperature and high-pressure diesel jet flame. Based on the thermophoretic principle, a novel sampling probe minimally disturbing the diesel jet flame in a constant volume combustion vessel is developed for analysing soot particles. The injected quantity of diesel fuel is less than 10 mg, and the soot particles sampled by carriers with a transmission electron microscope (TEM) grid and lacey TEM grid can be used to analyse the morphologies of soot aggregates and the nano-structure of the soot primary particles, respectively. When the quantity of diesel fuel is more than 10 mg, in order to avoid burning-off of the carriers in higher temperature and pressure conditions, single-crystal silicon chips are employed. Ultrasonic oscillations and alcohol extraction are then implemented to obtain high quality soot samples for observation using a high-resolution transmission electron microscope. An in-house Matlab-based code is developed to extract the nano-structure parameters of the soot particles. A complete sampling and analysis procedure of the soot particles is provided to study the formation and oxidation mechanism of soot.

  13. Minimally invasive PCNL-MIP.

    PubMed

    Zanetti, Stefano Paolo; Boeri, Luca; Gallioli, Andrea; Talso, Michele; Montanari, Emanuele

    2017-01-01

    Miniaturized percutaneous nephrolithotomy (mini-PCNL) has increased in popularity in recent years and is now widely used to overcome the therapeutic gap between conventional PCNL and less-invasive procedures such as shock wave lithotripsy (SWL) or flexible ureterorenoscopy (URS) for the treatment of renal stones. However, despite its minimally invasive nature, the superiority in terms of safety, as well as the similar efficacy of mini-PCNL compared to conventional procedures, is still under debate. The aim of this chapter is to present one of the most recent advancements in terms of mini-PCNL: the Karl Storz "minimally invasive PCNL" (MIP). A literature search for original and review articles either published or e-published up to December 2016 was performed using Google and the PubMed database. Keywords included: minimally invasive PCNL; MIP. The retrieved articles were gathered and examined. The complete MIP set is composed of different-sized rigid metallic fiber-optic nephroscopes and different-sized metallic operating sheaths, according to which the MIP is categorized into extra-small (XS), small (S), medium (M) and large (L). Dilation can be performed either in one step or with a progressive technique, as needed. The reusable devices of the MIP and the vacuum cleaner effect make PCNL with this set an inexpensive procedure. The possibility of shifting from a smaller to a larger instrument within the same set (Matrioska technique) makes MIP a very versatile technique suitable for the treatment of almost any stone. Studies in the literature have shown that MIP is as effective as conventional PCNL, with comparable rates of post-operative complications, independently of stone size. MIP does not represent a new technique, but rather a combination of the last ten years of PCNL improvements in a single system that can transversally cover all available techniques in the panorama of percutaneous stone treatment.

  14. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations: The Case of the Green Fluorescent Protein.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob

    2017-12-12

    The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties, exemplified here by calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed-charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size, and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.

  15. Minimization of bacterial size allows for complement evasion and is overcome by the agglutinating effect of antibody

    PubMed Central

    Dalia, Ankur B.; Weiser, Jeffrey N.

    2011-01-01

    SUMMARY The complement system, which functions by lysing pathogens directly or by promoting their uptake by phagocytes, is critical for controlling many microbial infections. Here we show that in Streptococcus pneumoniae, increasing bacterial chain length sensitizes this pathogen to complement deposition and subsequent uptake by human neutrophils. Consistent with this, we show that minimizing chain length provides wild-type bacteria with a competitive advantage in vivo in a model of systemic infection. Investigating how the host overcomes this virulence strategy, we find that antibody promotes complement-dependent opsonophagocytic killing of Streptococcus pneumoniae and lysis of Haemophilus influenzae independent of Fc-mediated effector functions. Consistent with the agglutinating effect of antibody, F(ab′)2 but not Fab could promote this effect. Therefore, increasing pathogen size, whether by natural changes in cellular morphology or via antibody-mediated agglutination, promotes complement-dependent killing. These observations have broad implications for how cell size and morphology can affect virulence among pathogenic microbes. PMID:22100164

  16. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
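
    The difference between a simple-expansion estimate and a ratio estimate with unit area as the auxiliary variable can be sketched as below; the counts and areas are invented for illustration and are not the pronghorn data.

      # Hedged sketch: simple-expansion vs ratio estimation of a population total from a
      # simple random sample of units, with unit area as the auxiliary variable.
      def simple_expansion_total(counts, n_units_total):
          """Expand the mean count per sampled unit to all units in the frame."""
          return n_units_total * sum(counts) / len(counts)

      def ratio_total(counts, areas, total_area):
          """Ratio estimator: (sampled count per unit area) times the total area of all units."""
          r = sum(counts) / sum(areas)
          return r * total_area

      if __name__ == "__main__":
          counts = [0, 12, 0, 3, 25, 0, 7]              # animals counted in sampled units (invented)
          areas = [2.1, 3.5, 1.8, 2.6, 4.0, 2.2, 2.9]   # km^2 of each sampled unit (invented)
          print(simple_expansion_total(counts, n_units_total=60))
          print(ratio_total(counts, areas, total_area=170.0))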

  17. Application of Ultrasound-Guided Core Biopsy to Minimal-Invasively Diagnose Supraclavicular Fossa Tumors and Minimize the Requirement of Invasive Diagnostic Surgery

    PubMed Central

    Chen, Chun-Nan; Lin, Che-Yi; Chi, Fan-Hsiang; Chou, Chen-Han; Hsu, Ya-Ching; Kuo, Yen-Lin; Lin, Chih-Feng; Chen, Tseng-Cheng; Wang, Cheng-Ping; Lou, Pei-Jen; Ko, Jenq-Yuh; Hsiao, Tzu-Yu; Yang, Tsung-Lin

    2016-01-01

    Abstract Tumors of the supraclavicular fossa (SC) are clinically challenging because of anatomical complexity and tumor pathological diversity. Because of the varied disease entities and treatment choices for SC tumors, making an accurate decision among numerous differential diagnoses is imperative. Sampling by open biopsy (OB) remains the standard procedure for pathological confirmation. However, the complicated anatomical structures of the SC always render surgical intervention difficult to perform. Ultrasound-guided core biopsy (USCB) is a minimally invasive, office-based procedure for tissue sampling widely applied in many diseases of the head and neck. This study aims to evaluate the clinical efficacy and utility of using USCB as the sampling method for SC tumors. From 2009 to 2014, consecutive patients who presented with clinical symptoms and signs of supraclavicular tumors and were scheduled to receive sampling procedures for diagnostic confirmation were recruited. The patients received either USCB or OB for the initial tissue sampling. The rate of accurate diagnosis based on pathological results was 90.2% for USCB and 93.6% for OB. No significant difference was noted between the USCB and OB groups in terms of diagnostic accuracy or the percentage of inadequate specimens. All cases in the USCB group had the sampling procedure completed within 10 minutes, whereas none in the OB group did. No scars larger than 1 cm were found with USCB. Only patients in the OB group needed to receive general anesthesia and hospitalization and had scars postoperatively. Accordingly, USCB can serve as a first-line sampling tool for SC tumors, with high diagnostic accuracy, minimal invasiveness, and low medical cost. PMID:26825877

  18. Considerations for successful cosmogenic 3He dating in accessory phases

    NASA Astrophysics Data System (ADS)

    Amidon, W. H.; Farley, K. A.; Rood, D. H.

    2008-12-01

    We have been working to develop cosmogenic 3He dating of phases other than the commonly dated olivine and pyroxene, especially apatite and zircon. Recent work by Dunai et al. underscores that cosmogenic 3He dating is complicated by 3He production via 6Li(n,α)3H → 3He. The reacting thermal neutrons can be produced from three distinct sources: nucleogenic processes (3Henuc), muon interactions (3Hemu), and high-energy "cosmogenic" neutrons (3Hecn). Accurate cosmogenic 3He dating requires determination of the relative fractions of Li-derived and spallation-derived 3He. An important complication for the fine-grained phases we are investigating is that both spallation and the 6Li reaction eject high-energy particles, with consequences for redistribution of 3He among phases in a rock. Although shielded samples can be used to estimate 3Henuc, they do not contain the 3Hecn component produced in the near surface. To calculate this component, we propose a procedure in which the bulk rock chemistry, helium closure age, 3He concentration, grain size and Li content of the target mineral are measured in a shielded sample. The average Li content of the adjacent minerals can then be calculated, which in turn allows calculation of the 3Hecn component in surface-exposed samples of the same lithology. If identical grain sizes are used in the shielded and surface-exposed samples, then "effective" Li can be calculated directly from the shielded sample, and it may not be necessary to measure Li at all. To help validate our theoretical understanding of Li-3He production, and to constrain the geologic contexts in which cosmogenic 3He dating with zircon and apatite is likely to be successful, results are presented from four different field locations. For example, results from ~18 ky old moraines in the Sierra Nevada show that the combination of low Li contents and high closure ages (>50 My) creates a small 3Hecn component (2%) but a large 3Henuc component (40-70%) for zircon and apatite. In contrast, the combination of high Li contents and a young closure age (0.6 My) in rhyolite from the Coso volcanic field leads to a large 3Hecn component (30%) and a small 3Henuc component (5%) in zircon. Analysis of samples from a variety of lithologies shows that zircon and apatite tend to be low in Li (1-10 ppm), but are vulnerable to implantation of 3He from adjacent minerals due to their small grain size, especially from minerals like biotite and hornblende. This point is well illustrated by data from both the Sierra Nevada and Coso examples, in which there is a strong correlation between grain size and 3He concentration for zircons due to implantation. In contrast, very large zircons (150>125 um width) obtained from shielded samples of the Shoshone Falls rhyolite (SW Idaho) do not contain a significant implanted component. Thus, successful 3He dating of accessory phases requires low Li content (<10 ppm) in the target mineral and either 1) low Li in adjacent minerals, or 2) the use of large grain sizes (>100 um). In high-Li cases, the fraction of 3Henuc is minimized in samples with young helium closure ages or longer duration of exposure. However, because the 3Hecn/3Hespall ratio is fixed for a given Li content, longer exposure will not reduce the fraction of 3Hecn.

  19. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation by interpreting methods for estimating the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing this against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
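
    As a rough illustration of how the survey sample size interacts with P and the design effect, the sketch below applies a delta-method approximation to the multiplier estimate N_hat = M / P, treating M as known without error. This is an assumed simplification for illustration, not the authors' published calculation, and the parameter values (M, p, deff, rel_precision) are hypothetical.

    # Hedged sketch (not the authors' procedure): delta-method sample size for a
    # multiplier estimate N_hat = M / P, with M treated as fixed and the RDS
    # design effect inflating the binomial variance of P. Inputs are illustrative.

    import math

    def required_sample_size(p, deff, rel_precision, z=1.96):
        """Survey size n so that the ~95% CI half-width of N_hat = M / P stays
        within `rel_precision` of N_hat (delta method; M assumed known)."""
        # Var(N_hat) ~= (M^2 / p^4) * Var(P), with Var(P) = deff * p * (1 - p) / n.
        # Requiring z * sqrt(Var(N_hat)) <= rel_precision * M / p and solving
        # for n makes M cancel, leaving:
        return math.ceil(deff * (1.0 - p) * z**2 / (p * rel_precision**2))

    if __name__ == "__main__":
        M = 5000     # unique objects distributed (illustrative)
        p = 0.15     # anticipated proportion reporting receipt in the RDS survey
        deff = 2.0   # assumed RDS design effect
        print(f"point estimate N_hat ≈ {M / p:.0f}")
        for target in (0.10, 0.20, 0.30):
            print(f"±{target:.0%} relative precision: n ≈ {required_sample_size(p, deff, target)}")

    In this approximation M cancels out of the sample size formula, while the required n grows as P shrinks and as the design effect grows, consistent with the abstract's advice to design for a higher P.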

  20. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within each cluster (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of the mean cluster size alone or simultaneous adjustment of the mean cluster size and the number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparisons indicated that, under some conditions, the relative efficiency we define is greater than the relative efficiency reported in the literature; under other conditions, our measure may be smaller than the literature measure, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative to and a useful complement to existing methods.
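
    A small numerical sketch of the noncentrality-parameter idea follows. It uses the standard compound-symmetry result that a cluster of size m contributes effective information m / (1 + (m - 1)·ICC), so squared noncentrality parameters are proportional to the summed information. This is a generic illustration under assumed values of the ICC and cluster sizes, not the article's exact derivation or formulas.

    # Hedged sketch, not the paper's derivation: relative efficiency of unequal
    # versus equal cluster sizes via the information sum that drives the squared
    # noncentrality parameter. ICC and cluster sizes below are assumptions.

    def cluster_weight(m, icc):
        """Effective information contributed by one cluster of size m."""
        return m / (1.0 + (m - 1.0) * icc)

    def relative_efficiency(sizes, icc):
        """RE of an unequal-cluster-size design versus an equal-size design with
        the same number of clusters and the same mean cluster size."""
        k = len(sizes)
        m_bar = sum(sizes) / k
        info_unequal = sum(cluster_weight(m, icc) for m in sizes)
        info_equal = k * cluster_weight(m_bar, icc)
        # Squared NCPs are proportional to the information totals, so RE is their ratio.
        return info_unequal / info_equal

    if __name__ == "__main__":
        icc = 0.05
        sizes = [10, 20, 30, 40, 50, 60]   # variable cluster sizes in one arm (illustrative)
        print(f"relative efficiency ≈ {relative_efficiency(sizes, icc):.3f}")

    Keeping the number of clusters fixed, one could then increase the mean cluster size of the unequal-size design until its information total matches that of the equal-size design computed at the planning stage, mirroring the adjustment strategy the article describes.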
